
Doctoral dissertations on the topic "Unbiased Estimation of Estimator Variance"



Consult the 20 best doctoral dissertations for your research on the topic "Unbiased Estimation of Estimator Variance".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided the corresponding details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

Kannappa, Sandeep Mavuduru. "Reduced Complexity Viterbi Decoders for SOQPSK Signals over Multipath Channels". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604300.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
High data rate communication between airborne vehicles and ground stations over the bandwidth constrained Aeronautical Telemetry channel is attributed to the development of bandwidth efficient Advanced Range Telemetry (ARTM) waveforms. This communication takes place over a multipath channel consisting of two components: a line-of-sight path and one or more ground-reflected paths, which result in frequency-selective fading. We concentrate on the ARTM SOQPSK-TG transmit waveform suite and decode information bits using the reduced complexity Viterbi algorithm. Two different methodologies are proposed to implement reduced complexity Viterbi decoders in multipath channels. The first method jointly equalizes the channel and decodes the information bits using the reduced complexity Viterbi algorithm, while the second method applies a minimum mean square error equalizer before the Viterbi decoder. An extensive numerical study is performed comparing the performance of the above methodologies. We also demonstrate the performance gain offered by our reduced complexity Viterbi decoders over the existing linear receiver. In the numerical study, both perfect and estimated channel state information are considered.
2

Du, Jichang. "Covariate-matched estimator of the error variance in nonparametric regression". Diss., Online access via UMI, 2007.

3

Carlsson, Martin. "Variance Estimation of the Calibration Estimator with Measurement Errors in the Auxiliary Information". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-68928.

4

Cardoso, João Nuno Martins. "Robust mean variance". Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/10706.

Abstract:
Master's in Finance (Mestrado em Finanças)
This empirical study's objective is to evaluate the impact of robust estimation on mean variance portfolios. This was accomplished by doing a simulation on the behavior of 15 SP500 stocks. This simulation includes two scenarios: One with normally distributed samples and another with contaminated non-normal samples. Each scenario includes 200 resamples. The performance of maximum likelihood (classical) estimated portfolios and robustly estimated portfolios are compared, resulting in some conclusions: On normally distributed samples, robust portfolios are marginally less efficient than classical portfolios. However, on non-normal samples, robust portfolios present a much higher performance than classical portfolios. This increase in performance is positively correlated with the level of contamination present on the sample. In summary, assuming that financial returns do not present a normal distribution, we can state that robust estimators result in more stable mean variance portfolios.
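As an illustration of the contrast described above, here is a minimal Python sketch (not taken from the thesis) that compares global minimum-variance weights computed from the classical sample covariance with weights computed from a robust covariance estimate; the simulated returns, the contamination scheme, and the choice of the Minimum Covariance Determinant estimator are all illustrative assumptions.

import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Simulate 250 days of returns for 15 assets, then contaminate 5% of the days
# with heavy outliers (a crude stand-in for the contaminated scenario).
n_days, n_assets = 250, 15
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))
outlier_rows = rng.choice(n_days, size=int(0.05 * n_days), replace=False)
returns[outlier_rows] += rng.normal(0.0, 0.08, size=(len(outlier_rows), n_assets))

def min_variance_weights(cov):
    # Global minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1).
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

w_classical = min_variance_weights(np.cov(returns, rowvar=False))                    # MLE-type estimate
w_robust = min_variance_weights(MinCovDet(random_state=0).fit(returns).covariance_)  # robust estimate

print("classical weights:", np.round(w_classical, 3))
print("robust weights:   ", np.round(w_robust, 3))

On contaminated samples the robust weights are typically far less distorted by the outlying days, which is the qualitative effect the abstract reports.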
5

Sadeghkhani, Abdolnasser. "Estimation d'une densité prédictive avec information additionnelle". Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11238.

Abstract:
In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable represents an important and challenging problem. Typically, in a parametric framework, there exists some additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take into account the additional information, in order to obtain effective and sometimes better performing predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger, and reverse Kullback-Leibler losses as well as for α-divergence losses and establish links with skew-normal densities. We obtain dominance results using several techniques, including expansion of variance, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimator of a ratio of two exponential family densities.
6

Baba, Harra M'hammed. "Estimation de densités spectrales d'ordre élevé". Rouen, 1996. http://www.theses.fr/1996ROUES023.

Abstract:
In this thesis we construct estimators of the cumulant spectral density for a strictly homogeneous, centred process, the time domain being either the multidimensional real Euclidean space or the multidimensional space of p-adic numbers. In this construction we use a method of smoothing the trajectory combined with a time shift, or the method of spectral windows. Under certain regularity conditions, the proposed estimators are asymptotically unbiased and consistent. The estimation procedures presented may find applications in many scientific fields and may also provide partial answers to questions concerning certain statistical properties of random processes.
7

Naftali, Eran 1971. "First order bias and second order variance of the Maximum Likelihood Estimator with application to multivariate Gaussian data and time delay and Doppler shift estimation". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88334.

8

Harti, Mostafa. "Estimation robuste sous un modèle de contamination non symétrique et M-estimateur multidimensionnel". Nancy 1, 1986. http://www.theses.fr/1986NAN10063.

Abstract:
In this thesis we study the robustness of estimators under the two non-symmetric contamination models F_{ε,X} = (1 − ε)F_θ + ε H_X and F_ε = (1 − ε)F_θ + ε G. We also study the robustness of multidimensional M-estimators, in particular M-estimators for nonlinear regression, for which we establish asymptotic normality.
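As a small illustration of estimation under such a contamination model, the following Python sketch (not from the thesis) draws data from F_ε = (1 − ε)F_θ + ε G with θ = 0 and a shifted contaminant G, then compares the sample mean with a Huber M-estimate of location computed by iteratively reweighted least squares; the tuning constant and the contaminant are illustrative choices.

import numpy as np

def huber_location(x, c=1.345, n_iter=50):
    # Huber M-estimate of location via iteratively reweighted least squares.
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745   # MAD-based scale estimate
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))   # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(1)
eps, n = 0.1, 500
# F_eps = (1 - eps) * N(0, 1) + eps * G, with G a shifted (non-symmetric) contaminant.
clean = rng.normal(0.0, 1.0, n)
contaminant = rng.normal(5.0, 1.0, n)
x = np.where(rng.random(n) < eps, contaminant, clean)

print("sample mean:     ", x.mean())           # pulled toward the contamination
print("Huber M-estimate:", huber_location(x))  # much closer to the target theta = 0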
9

Krishnan, Rajet. "Problems in distributed signal processing in wireless sensor networks". Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1351.

10

Teixeira, Marcos Vinícius. "Estudos sobre a implementação online de uma técnica de estimação de energia no calorímetro hadrônico do atlas em cenários de alta luminosidade". Universidade Federal de Juiz de Fora (UFJF), 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/4169.

Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work aims at the study of techniques for online energy estimation in the ATLAS hadronic calorimeter (TileCal) at the LHC collider. During future periods of LHC operation, signals coming from adjacent collisions will be observed within the same window, producing signal superposition. In this environment, the energy reconstruction method COF (Constrained Optimal Filter) outperforms the algorithm currently implemented in the system. However, the COF method requires matrix inversions, which makes its online implementation impractical. To avoid such matrix inversions, this work presents iterative methods to implement the COF, resulting in simple mathematical operations. Based on gradient descent, the results demonstrate that the algorithms are capable of estimating the amplitudes of the superimposed signals, as well as of the signal of interest, with efficiency similar to COF. In addition, a processing architecture for FPGA implementation is proposed. The analysis has shown that the algorithms can be implemented in the new TileCal electronics, meeting the processing-time requirements.
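The idea of replacing the explicit pseudo-inverse with an iterative scheme can be sketched as follows in Python; the pulse shape, matrix sizes, and step size below are illustrative placeholders, not the actual TileCal signal model or the COF algorithm itself.

import numpy as np

def amplitudes_gd(H, y, n_iter=200, step=None):
    # Least-squares amplitude estimate, min_a ||H a - y||^2, by plain gradient
    # descent, avoiding the explicit pseudo-inverse np.linalg.pinv(H) @ y.
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / lambda_max(H^T H), a safe step size
    a = np.zeros(H.shape[1])
    for _ in range(n_iter):
        a -= step * (H.T @ (H @ a - y))          # gradient of 0.5 * ||H a - y||^2
    return a

# Toy deconvolution problem: a known pulse shape superimposed at several offsets
# (illustrative only; not the real TileCal pulse).
rng = np.random.default_rng(2)
pulse = np.array([0.2, 1.0, 0.4])
n_samples, n_amplitudes = 10, 8
H = np.zeros((n_samples, n_amplitudes))
for j in range(n_amplitudes):
    H[j:j + len(pulse), j] = pulse
true_a = np.zeros(n_amplitudes)
true_a[2], true_a[3] = 3.0, 1.5                  # two adjacent, overlapping signals
y = H @ true_a + rng.normal(0.0, 0.05, n_samples)

print(np.round(amplitudes_gd(H, y), 2))          # iterative estimate
print(np.round(np.linalg.pinv(H) @ y, 2))        # reference: explicit pseudo-inverse

Each iteration uses only matrix-vector products, which is the kind of simple arithmetic that maps naturally onto a sequential fixed-point FPGA pipeline.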
11

Lardin, Pauline. "Estimation de synchrones de consommation électrique par sondage et prise en compte d'information auxiliaire". Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00842199.

Abstract:
In this thesis we are interested in estimating the mean electricity consumption curve. Since the variables under study are functional, and since storage capacity is limited and transmission costs are high, we turn to survey-sampling estimation methods, an attractive alternative to signal-compression techniques. We extend to the functional framework estimation methods that take the available auxiliary information into account in order to improve the precision of the Horvitz-Thompson estimator of the mean consumption curve. The first method uses the auxiliary information at the estimation stage: the mean curve is estimated with an estimator based on a functional regression model. The second uses it at the design stage: we use a high-entropy unequal-probability sampling design together with the functional Horvitz-Thompson estimator. An estimate of the covariance function is obtained by extending Hájek's covariance approximation to the functional framework, and we rigorously justify its use through an asymptotic study. For each of these methods we give, under weak assumptions on the inclusion probabilities and on the regularity of the trajectories, the convergence properties of the estimator of the mean curve as well as of its covariance function, and we also establish a functional central limit theorem. In order to assess the quality of our estimators, we compare two methods for constructing confidence bands on a data set of real load curves. The first relies on the simulation of Gaussian processes; an asymptotic justification of this method is given for each of the proposed estimators. The second uses bootstrap techniques that have been adapted to take the functional nature of the data into account.
12

Öhman, Marie-Louise. "Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests". Doctoral thesis, Umeå universitet, Statistiska institutionen, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-7313.

Abstract:
The estimated bias and variance of commonly applied and jackknife variance estimators and observed significance level and power of standardised generalized Wilcoxon linear rank sum test statistics and tests, respectively, of Gehan and Prentice are compared in a Monte Carlo simulation study. The variance estimators are the permutational-, the conditional permutational- and the jackknife variance estimators of the test statistic of Gehan, and the asymptotic- and the jackknife variance estimators of the test statistic of Prentice. In unbalanced small sample size problems with right censoring, the commonly applied variance estimators for the generalized Wilcoxon rank test statistics of Gehan and Prentice may be biased. In the simulation study it appears that variance properties and observed level and power may be improved by using the jackknife variance estimator. To establish the sensitivity to gross errors and misclassifications for standardised generalized Wilcoxon linear rank sum statistics in small samples with right censoring, the sensitivity curves of Tukey are used. For a certain combined sample, which might contain gross errors, a relatively simple method is needed to establish the applicability of the inference drawn from the selected rank test. One way is to use the change of decision point, which in this thesis is defined as the smallest proportion of altered positions resulting in an opposite decision. When little is known about the shape of a distribution function, non-parametric estimates for the location parameter are found by making use of censored one-sample- and two-sample rank statistics. Methods for constructing censored small sample confidence intervals and asymptotic confidence intervals for a location parameter are also considered. Generalisations of the solutions from uncensored one-sample and two-sample rank tests are utilised. A Monte-Carlo simulation study indicates that rank estimators may have smaller absolute estimated bias and smaller estimated mean squared error than a location estimator derived from the Product-Limit estimator of the survival distribution function. The ideas described and discussed are illustrated with data from a clinical trial of Head and Neck cancer.
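For readers unfamiliar with the jackknife variance estimator compared above, a generic leave-one-out sketch in Python (illustrative only, not the thesis code for the Gehan or Prentice statistics) looks like this:

import numpy as np

def jackknife_variance(x, statistic):
    # Leave-one-out jackknife estimate of the variance of statistic(x):
    # (n - 1)/n * sum_i (theta_(i) - mean of theta_(i))^2.
    n = len(x)
    loo = np.array([statistic(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=30)    # small right-skewed sample
print(jackknife_variance(x, np.mean))      # close to var(x)/n for the sample mean
print(jackknife_variance(x, np.median))    # also works for less tractable statistics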
13

Xu, Fu-Min (許富閔). "Assessment of the minimum variance unbiased estimator for evaluation of average bioequivalence". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/79829770556256571089.

Abstract:
Master's thesis, National Cheng Kung University, Department of Statistics (master's and doctoral program), academic year 92 (2003/04).
The research and development of an innovative drug product takes, on average, 10-12 years and US$800 million. It is therefore a costly, time-consuming, and highly risky endeavor. One way to reduce drug costs is to introduce generic drugs after the patent of the innovative drug expires. Currently, most regulatory agencies in the world only require evidence of average bioequivalence from in vivo bioequivalence trials to approve generic drugs. The maximum likelihood estimator (MLE) is currently recommended for the evaluation of average bioequivalence. However, we consider adopting the minimum variance unbiased estimator (MVUE) to assess average bioequivalence. We performed a simulation study to compare the bias, mean squared error, empirical size, empirical power, and 90% confidence coefficient of the MLE and the MVUE over various combinations of parameters and sample sizes under the 2×2 crossover design and higher-order crossover designs.
14

Gatarayiha, Jean Philippe. "Méthode de simulation avec les variables antithétiques". Thèse, 2007. http://hdl.handle.net/1866/9923.

Abstract:
In this master's thesis, we consider a Monte Carlo simulation method based on antithetic variates for estimating the integral of f(x) over the interval (0,1], where f may be a monotone function, a non-monotone function, or another function that is difficult to simulate. The main idea consists in subdividing (0,1] into m sections, each of which is subdivided into l subintervals. This method is applied recursively, and at each step the variance decreases; i.e., the variance obtained at the k-th step is smaller than that found at the (k-1)-th step. This also allows us to reduce the estimation error, because the estimator of the integral of f(x) over (0,1] is unbiased. The objective is to find m, the optimal number of sections, that yields this reduction in variance. A minimal sketch of the basic idea is given after the note below.
The files accompanying this document were produced with LaTeX, and the simulations were carried out with Splus(R).
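A minimal Python sketch of the stratified antithetic idea, under the assumption that each of the m sections receives one antithetic pair (the recursive refinement into l subintervals is omitted), might look like this; the test function is illustrative:

import numpy as np

def antithetic_stratified(f, m, rng):
    # Unbiased estimate of the integral of f over (0, 1]:
    # one antithetic pair (u, 1/m - u) drawn inside each of m equal-width sections.
    edges = np.arange(m) / m                  # left endpoints of the sections
    u = rng.random(m) / m                     # uniform offset within a section
    a = f(edges + u)                          # primary draws
    b = f(edges + (1.0 / m - u))              # antithetic partners within the same section
    return np.mean((a + b) / 2.0)

rng = np.random.default_rng(4)
f = lambda x: np.exp(x)                       # monotone test function, true integral = e - 1
for m in (1, 10, 100):
    est = np.array([antithetic_stratified(f, m, rng) for _ in range(200)])
    print(m, est.mean().round(5), est.var().round(10))

Across the 200 resamples the mean stays at e - 1 (the estimator is unbiased) while the variance shrinks as m grows, which mirrors the step-by-step variance reduction described in the abstract.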
15

Henderson, Tamie, and Tamie Anakotta. "Estimating the variance of the Horvitz-Thompson estimator". Thesis, 2006. http://hdl.handle.net/1885/10608.

Abstract:
Unequal probability sampling was introduced by Hansen and Hurwitz (1943) as a means of reducing the mean squared errors of survey estimators. For simplicity they used sampling with replacement only. Horvitz and Thompson (1952) extended this methodology to sampling without replacement; however, knowledge of the joint inclusion probabilities of all pairs of sample units was required for the variance estimation process. The calculation of these probabilities is typically problematic. Sen (1953) and Yates and Grundy (1953) independently suggested a variance estimator appropriate for designs with fixed sample size, but this estimator again involved the calculation of the joint inclusion probabilities. This requirement has proved to be a substantial disincentive to its use. More recently, efforts have been made to find useful approximations to this fixed-size sample variance, which would avoid the need to evaluate the joint inclusion probabilities. These approximate variance estimators have been shown to perform well under high entropy sampling designs; however, there is now an ongoing dispute in the literature regarding the preferred approximate estimator. This thesis examines in detail nine of these approximate estimators, and their empirical performances under two high entropy sampling designs, namely Conditional Poisson Sampling and Randomised Systematic Sampling. These nine approximate estimators were separated into two families based on their variance formulae. It was hypothesised, due to the derivation of these variance estimators, that one family would perform better under Randomised Systematic Sampling and the other under Conditional Poisson Sampling. The two families of approximate variance estimators showed within-group similarities, and they usually performed better under their respective sampling designs. Recently, algorithms have been derived to efficiently determine the exact joint inclusion probabilities under Conditional Poisson Sampling. As a result, this study compared the Sen-Yates-Grundy variance estimator to the other approximate estimators to determine whether knowledge of these probabilities could improve the estimation process. This estimator was found to avoid serious inaccuracies more consistently than the nine approximate estimators, but perhaps not to the extent that would justify its routine use, as it also produced estimates of variance with consistently higher mean squared errors than the approximate variance estimators. The results of the more recently published papers, such as Matei and Tillé (2005), have been shown to be largely misleading. This study also shows that the relationship between the variance and the entropy of the sampling design is more complex than was originally supposed by Brewer and Donadio (2003). Finally, the search for a best all-round variance estimator has been somewhat inconclusive, but it has been possible to indicate which estimators are likely to perform well in certain definable circumstances.
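For reference, the Horvitz-Thompson total and the Sen-Yates-Grundy variance estimator discussed above can be written in a few lines of Python; the inclusion probabilities below are illustrative inputs, and the joint probabilities are exactly the quantities whose evaluation the approximate estimators studied in the thesis try to avoid.

import numpy as np

def horvitz_thompson_total(y, pi):
    # Horvitz-Thompson estimate of the population total: sum over the sample of y_i / pi_i.
    return np.sum(y / pi)

def sen_yates_grundy_variance(y, pi, pi_joint):
    # Sen-Yates-Grundy variance estimator for fixed-size designs:
    # sum over pairs i < j of (pi_i pi_j - pi_ij) / pi_ij * (y_i/pi_i - y_j/pi_j)^2.
    n = len(y)
    v = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            weight = (pi[i] * pi[j] - pi_joint[i, j]) / pi_joint[i, j]
            v += weight * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v

# Illustrative sample of 4 units; pi and pi_joint would come from the sampling design.
y = np.array([12.0, 7.5, 20.0, 3.2])
pi = np.array([0.40, 0.25, 0.50, 0.20])
pi_joint = np.array([[0.40, 0.08, 0.18, 0.06],
                     [0.08, 0.25, 0.11, 0.04],
                     [0.18, 0.11, 0.50, 0.09],
                     [0.06, 0.04, 0.09, 0.20]])

print(horvitz_thompson_total(y, pi))
print(sen_yates_grundy_variance(y, pi, pi_joint))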
16

Boulanger, Laurence. "Comparaison d'estimateurs de la variance du TMLE". Thèse, 2018. http://hdl.handle.net/1866/22542.

17

Krishnan, Sunder Ram. "Optimum Savitzky-Golay Filtering for Signal Estimation". Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3293.

Abstract:
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one- and two-dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually-motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well-suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression. We observe that the parameters are so chosen as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing out of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data also. The denoising algorithms are compared with other standard, performant methods available in the literature both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is provided by deriving motivation out of the hallmark paper of Savitzky and Golay and Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown, in their original Analytical Chemistry journal article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed.
They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing for the filter impulse response length/3dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the S-G filter chosen is of longer impulse response length (equivalently smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10^4 on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is made use of. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in case of the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually-motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk-minimization. The cost functions considered are non-quadratic, and derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise ratio assumption. The exposition is general since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
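The equivalence the abstract refers to, namely that least-squares fitting of a fixed-order polynomial over a fixed window reduces to convolution with a precomputable impulse response, can be demonstrated with a short Python sketch (illustrative, not the thesis code); the window length and order are arbitrary choices, and the result is checked against scipy.signal.savgol_filter.

import numpy as np
from math import factorial
from scipy.signal import savgol_filter

def savgol_coeffs_ls(window_length, polyorder, deriv=0):
    # Savitzky-Golay impulse response from the least-squares polynomial fit:
    # the smoothed value (or derivative) at the window centre is a fixed linear
    # combination of the samples, so the coefficients can be precomputed once.
    half = window_length // 2
    offsets = np.arange(-half, half + 1)
    A = np.vander(offsets, polyorder + 1, increasing=True)   # columns 1, t, t^2, ...
    # Row `deriv` of the pseudo-inverse gives the coefficient of t^deriv in the fit;
    # multiplying by deriv! turns it into the derivative estimate at t = 0.
    return factorial(deriv) * np.linalg.pinv(A)[deriv]

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + rng.normal(0, 0.1, 200)

c = savgol_coeffs_ls(window_length=11, polyorder=3)
smoothed_conv = np.convolve(x, c[::-1], mode="same")          # convolution with fixed coefficients
smoothed_ref = savgol_filter(x, window_length=11, polyorder=3)

print(np.allclose(smoothed_conv[5:-5], smoothed_ref[5:-5]))   # interior samples agree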
18

Krishnan, Sunder Ram. "Optimum Savitzky-Golay Filtering for Signal Estimation". Thesis, 2013. http://hdl.handle.net/2005/3293.

19

Béliveau, Audrey. "Estimation simplifiée de la variance dans le cas de l’échantillonnage à deux phases". Thèse, 2011. http://hdl.handle.net/1866/6279.

Abstract:
In this thesis we study the problem of variance estimation for the double expansion estimator and the calibration estimators in the case of two-phase designs. We suggest to use a variance decomposition different from the one usually used in two-phase sampling, which leads to a simplified variance estimator. We look for the necessary conditions for the simplified variance estimators to be appropriate. In order to do so, we consider the following particular cases : (1) Poisson design at the second phase, (2) two-stage design, (3) simple random sampling at each phase, (4) simple random sampling at the second phase. We show that a crucial condition for the simplified variance estimator to be valid in cases (1) and (2) is that the first phase sampling fraction must be negligible (or small). We also show in cases (3) and (4) that the simplified variance estimator can be used with some calibration estimators when the first phase sampling fraction is negligible and the population size is large enough. Furthermore, we show that the simplified estimators can be obtained in an alternative way using the reversed approach (Fay, 1991 and Shao and Steel, 1999). Finally, we conduct some simulation studies in order to validate the theoretical results.
20

Daley, Joseph W. "Mixed model methods for quantitative trait loci estimation in crosses between outbred lines". Thesis, 2003. https://figshare.com/articles/thesis/Mixed_model_methods_for_quantitative_trait_loci_estimation_in_crosses_between_outbred_lines/21376767.

Abstract:

Methodology is developed for Quantitative Trait Loci (QTL) analysis in F2 and backcross designed experiments between outbred lines using a mixed model framework through the modification of segment mapping techniques. Alleles are modelled in the F1 and parental generations allowing the estimation of individual additive allele effects while accounting for QTL segregation within lines as well as differences in mean QTL effects between lines.

Initially the theory, called F1 origin mapping, is developed for a single trait scenario involving possible multiple QTL and polygenic variation. Additive genetic variances are estimated via Restricted Maximum Likelihood (REML) and allele effects are modelled using Best Linear Unbiased Prediction (BLUP). Simulation studies are carried out comparing F1 origin mapping with existing segment mapping methods in a number of genetic scenarios. While there was no significant difference in the estimation of effects between the two methods, the average CPU time over one hundred replicates was 0.26 seconds for F1 origin mapping and 3.77 seconds for the segment mapping method. This improvement in computational efficiency is due to the restructuring of IBD matrices, which results in inversion and REML iteration over much smaller matrices.

Further theory is developed which extends F1 origin mapping from single to multiple trait scenarios for F2 crosses between outbred lines. A bivariate trait is simulated using a single QTL with and without a polygenic component. A single trait and bivariate trait analysis are performed to compare the two approaches. There was no significant difference in the estimation of QTL effects between the two approaches. However, there was a slight improvement in the accuracy of QTL position estimates in the multiple trait approach. The advantage of F1 origin mapping with regard to computational efficiency becomes even more important with multiple trait analysis and allows the investigation of interesting biological models of gene expression.

F1 origin mapping is developed further to model the correlation structure inherent in repeated measures data collected on F2 crosses between outbred lines. A study was conducted to show that repeated measures F1 origin mapping and multiple trait F1 origin mapping give similar results in certain circumstances. Another simulation study was also conducted in which five regular repeated measures were simulated with allele breed difference effects and allele variances increasing linearly over time. Various polynomial orders of fit were investigated, with the linear order of fit most parsimoniously modelling the data. The linear order of fit correctly identified the increasing trend in both the additive allele difference and the allele variance. Repeated measures F1 origin mapping possesses the benefits of using the correlated nature of repeated measures while increasing the efficiency of QTL parameter estimation. Hence, it would be useful for QTL studies on measurements such as milk yield or live weights when collected at irregular intervals.

Theory is developed to combine the data from QTL studies involving F2 and backcross designed experiments. Genetic covariance matrices are developed for random QTL effects by modelling allele variation in the parental generation instead of the offspring generation for an F2 and backcross between outbred lines. The result is a general QTL estimation method called parental origin mapping. Phenotypes and genotypes from such a study involving Romney and Merino sheep are analysed providing evidence for a QTL affecting adult and hogget fibre diameter.

By coupling these new methods with computer software programs such as ASREML, F1 origin mapping and parental origin mapping provide powerful and flexible tools for QTL studies with the ability to efficiently handle single traits, multiple traits and repeated measures.

