
Dissertations / Theses on the topic 'Robust methods'

Consult the top 50 dissertations / theses for your research on the topic 'Robust methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Peel, Vincent Robert. "Robust methods for robust passive sonar." Thesis, University of Southampton, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Helmersson, Anders. "Methods for robust gain scheduling." Doctoral thesis, Linköpings universitet, Reglerteknik, 1995. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75513.

Full text
Abstract:
Controllers are used in many applications to improve the performance and characteristics of vehicles and craft of various kinds, for example aircraft and rockets. Depending on factors such as speed and altitude, the dynamics and behaviour of these craft can vary. One therefore wants the controller to take these variations into account by changing its behaviour correspondingly. The controller can be designed in different ways. One possibility is to let the control signals depend linearly, or proportionally, on the controller's inputs, for example deviations from a desired trajectory. The controller is then determined by a number of parameters that can be changed according to the external conditions (e.g. speed and altitude for an aircraft). This method is called gain scheduling. When designing the controller, one uses a model of the system to be controlled. An important and desirable property of a controller is that it should work well even if the system it controls varies or deviates from the model. An aircraft behaves differently depending on how much load it carries and how that load is placed. It is therefore desirable to use controllers that are insensitive to such variations, e.g. in load, and that behave well in different situations. Such a controller is said to be robust. The thesis treats how to analyse and design controllers that are both robust and gain scheduled. It turns out that these two problems are similar and can be treated with the same methods.
This thesis considers the analysis of systems with uncertainties and the design of controllers for such systems. Uncertainties are treated in a relatively broad sense, covering gain-bounded elements that are not known a priori but could be available to the controller in real time. The uncertainties are in the most general case norm-bounded operators with a given block-diagonal structure. The structure includes parameters, linear time-invariant and time-varying systems as well as nonlinearities. In some applications the controller may have access to the uncertainty, e.g. a parameter that depends on some known condition. There exist well-known methods for determining stability of systems subject to uncertainties. This thesis is within the framework of structured singular values, also denoted by μ. Given a certain class of uncertainties, μ is the inverse of the size of the smallest uncertainty that causes the system to become unstable. Thus, μ is a measure of the system's "structured gain". In general it is not possible to compute μ exactly, but an upper bound can be determined using efficient numerical methods based on linear matrix inequalities. An essential contribution of this thesis is a new synthesis algorithm for finding controllers when parametric (real) uncertainties are present. This extends previous results on μ synthesis involving dynamic (complex) uncertainties. Specifically, we can design gain scheduling controllers using the new μ synthesis theorem, with less conservativeness than previous methods. Also, algorithms for model reduction of uncertain systems are given. A gain scheduling controller is a linear regulator whose parameters are changed as a function of the varying operating conditions. By treating nonlinearities as uncertainties, μ methods can be used in gain scheduling design. In the discussion, emphasis is put on how to take into consideration different characteristics of the time-varying properties of the system to be controlled.
Robustness and its relation to gain scheduling are also treated. In order to handle systems with time-invariant uncertainties, both linear systems and constant parameters, a set of scalings and multipliers is introduced. These are matched to the properties of the uncertainties. Also, multipliers for treating uncertainties that are slowly varying, such that the rate of change is bounded, are introduced. Using these multipliers, the applicability of the analysis and synthesis results is greatly extended.
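The LMI-based upper bound on μ mentioned in the abstract can be made concrete with a small numerical sketch. This is not Helmersson's synthesis algorithm; it is a minimal illustration of the classical D-scaling bound min over D of σ̄(D M D⁻¹), here for a purely complex diagonal uncertainty structure, and optimised with a generic simplex search instead of an LMI solver. The example matrix and all names below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """Classical D-scaling upper bound on the structured singular value,
    min over positive diagonal D of sigma_max(D M D^-1), for a diagonal
    complex uncertainty structure (optimised here by a generic search
    rather than an LMI solver)."""
    n = M.shape[0]

    def objective(log_d):
        d = np.exp(log_d)                      # keep scalings positive
        scaled = np.diag(d) @ M @ np.diag(1.0 / d)
        return np.linalg.norm(scaled, 2)       # largest singular value

    return minimize(objective, np.zeros(n), method="Nelder-Mead").fun

# toy example: the unscaled gain sigma_max(M) is 10, but the optimal
# scaling drives the bound down to the spectral radius, 1.0
M = np.array([[0.0, 10.0],
              [0.1, 0.0]])
bound = mu_upper_bound(M)
```

For two complex scalar blocks the D-scaling bound is known to be tight, so in this toy case the bound coincides with μ itself; in general, and in particular with real parametric uncertainties as in the thesis, the upper bound can be conservative.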
3

Nargis, Suraiya. "Robust methods in logistic regression." University of Canberra. Information Sciences & Engineering, 2005. http://erl.canberra.edu.au./public/adt-AUC20051111.141200.

Full text
Abstract:
My Masters research aims to deepen our understanding of the behaviour of robust methods in logistic regression. Logistic regression is a special case of Generalized Linear Modelling (GLM), which is a powerful and popular technique for modelling a large variety of data. Robust methods are useful in reducing the effect of outlying values in the response variable on parameter estimates. A literature survey shows that we are still at the beginning of being able to detect extreme observations in logistic regression analyses, to apply robust methods in logistic regression and to present the results of logistic regression analyses informatively. In Chapter 1 I give a basic introduction to logistic regression, with an example, and to robust methods in general. In Chapters 2 through 4 of the thesis I describe traditional methods and some relatively new methods for presenting results of logistic regression using powerful visualization techniques, as well as the concept of outliers in binomial data. I have used different published data sets for illustration, such as the Prostate Cancer data set, the Damaged Carrots data set and the Recumbent Cow data set. In Chapter 4 I summarize and report on the modern concepts of graphical methods, such as central dimension reduction, and the use of graphics as pioneered by Cook and Weisberg (1999). In Section 4.6 I then extend the work of Cook and Weisberg to robust logistic regression. In Chapter 5 I describe simulation studies to investigate the effects of outlying observations on logistic regression (robust and non-robust). In Section 5.2 I come to the conclusion that, in the case of classical or robust multiple logistic regression with no outliers, robust methods do not necessarily provide more reasonable estimates of the parameters for data that contain no strong outliers.
In Section 5.4 I have looked into the cases where outliers are present and have come to the conclusion that either the breakdown method or a sensitivity analysis provides reasonable parameter estimates in that situation. Finally, I have identified areas for further study.
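For readers unfamiliar with how outliers distort logistic regression, the following sketch contrasts an ordinary maximum-likelihood fit with a crude one-step trimming refit based on Pearson residuals. This is only an illustration of the general idea; the breakdown and sensitivity methods studied in the thesis are different and more principled, and every detail below (the simulated data, the residual cutoff of 2.0) is an assumption of this example.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic(X, y, weights=None):
    """Weighted logistic-regression MLE via BFGS on the negative
    log-likelihood; X already contains the intercept column."""
    w = np.ones(len(y)) if weights is None else weights

    def negloglik(beta):
        eta = X @ beta
        # per-point contribution log(1 + e^eta) - y*eta, stabilised
        return np.sum(w * (np.logaddexp(0.0, eta) - y * eta))

    return minimize(negloglik, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))
y = (rng.uniform(size=200) < p).astype(float)
outliers = np.argsort(x)[-5:]       # five most extreme covariates...
y[outliers] = 0.0                   # ...given the 'wrong' response

X = np.column_stack([np.ones_like(x), x])
beta_mle = fit_logistic(X, y)       # slope attenuated by the outliers

# crude one-step robust refit: trim points with large Pearson residuals
p_hat = 1.0 / (1.0 + np.exp(-(X @ beta_mle)))
resid = (y - p_hat) / np.sqrt(p_hat * (1.0 - p_hat))
keep = (np.abs(resid) <= 2.0).astype(float)
beta_rob = fit_logistic(X, y, weights=keep)
```

The refit typically recovers a steeper, less attenuated slope than the plain MLE, which is the qualitative effect the thesis's simulation studies examine far more carefully.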
4

Mutapcic, Almir. "Robust optimization : methods and applications /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
5

Naish-Guzman, Andrew Guillermo Peter. "Sparse and robust kernel methods." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612420.

Full text
6

Mwitondi, K. S. "Robust methods in data mining." Thesis, University of Leeds, 2003. http://etheses.whiterose.ac.uk/807/.

Full text
Abstract:
The thesis focuses on two problems in Data Mining, namely clustering, an exploratory technique to group observations in similar groups, and classification, a technique used to assign new observations to one of the known groups. A thorough study of the two problems, which are also known in the Machine Learning literature as unsupervised and supervised classification respectively, is central to decision making in different fields - the thesis seeks to contribute towards that end. In the first part of the thesis we consider whether robust methods can be applied to clustering - in particular, we perform clustering on fuzzy data using two methods originally developed for outlier detection. The fuzzy data clusters are characterised by two intersecting lines such that points belonging to the same cluster lie close to the same line. This part of the thesis also investigates a new application of finite mixtures of normals to the fuzzy data problem. The second part of the thesis addresses issues relating to classification - in particular, classification trees and boosting. The boosting algorithm is a relative newcomer to the classification portfolio that seeks to enhance the performance of classifiers by iteratively re-weighting the data according to their previous classification status. We explore the performance of "boosted" trees (mainly stumps) based on 3 different models all characterised by a sine-wave boundary. We also carry out a thorough study of the factors that affect the boosting algorithm. Other results include a new look at the concept of randomness in the classification context, particularly because the form of randomness in both training and testing data directly affects the accuracy and reliability of domain-partitioning rules. Further, we provide statistical interpretations of some of the classification-related concepts originally used in Computer Science, Machine Learning and Artificial Intelligence.
This is important since there exists a need for a unified interpretation of some of the "landmark" concepts in various disciplines, as a step forward towards seeking the principles that can guide and strengthen practical applications.
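The boosting loop the abstract describes (re-weight the data, refit a stump, accumulate the votes) can be sketched from scratch. The sine-wave boundary below loosely echoes the simulation design, but the data-generating details, constants and stump learner are this example's assumptions, not the thesis's exact set-up.

```python
import numpy as np

def stump_fit(X, y, w):
    """Exhaustively pick the axis-aligned decision stump minimising
    weighted error (labels y are in {-1, +1}, weights w sum to 1)."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1.0, -1.0):
                pred = sign * np.where(X[:, j] < thr, 1.0, -1.0)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best_err, best

def stump_predict(params, X):
    j, thr, sign = params
    return sign * np.where(X[:, j] < thr, 1.0, -1.0)

def adaboost(X, y, rounds):
    """AdaBoost: iteratively re-weight the data so that the next stump
    concentrates on previously misclassified points."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, params = stump_fit(X, y, w)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w = w * np.exp(-alpha * y * stump_predict(params, X))
        w /= w.sum()                  # mistakes now carry more weight
        ensemble.append((alpha, params))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * stump_predict(p, X) for a, p in ensemble))

# sine-wave decision boundary, loosely echoing the simulation design
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0 * np.pi, size=(300, 2))
y = np.where(X[:, 1] > np.sin(X[:, 0]) + 1.0, 1.0, -1.0)
model = adaboost(X, y, rounds=40)
train_acc = np.mean(predict(model, X) == y)
```

Even though no single stump can represent the sine-wave boundary, the re-weighted ensemble drives the training error down, which is the behaviour the thesis studies in depth.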
7

Huang, Shu-Pang. "ROBUST METHODS FOR ESTIMATING ALLELE FREQUENCIES." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010614-213208.

Full text
Abstract:

HUANG, SHU-PANG. ROBUST METHODS FOR ESTIMATING ALLELE FREQUENCIES. (Advisor: Bruce S. Weir.) The distribution of allele frequencies has been a major focus in population genetics. Classical approaches using stochastic arguments depend highly on the choice of mutation model. Unfortunately, it is hard to justify which mutation model is suitable for a particular sample. We propose two methods to estimate allele frequencies, especially for rare alleles, without assuming a mutation model. The first method achieves its goal through two steps. First it estimates the number of alleles in a population using a sample coverage method, and then models ranked frequencies for these alleles using the stretched exponential/Weibull distribution. Simulation studies have shown that both steps are robust to different mutation models. The second method uses a Bayesian approach to estimate both the number of alleles and their frequencies simultaneously by assuming a non-informative prior distribution. The Bayesian approach is also robust to mutation models. Questions concerning the probability of finding a new allele, and the possible highest (or lowest) probability for a new-found allele, can be answered by both methods. The advantages of our approaches include robustness to mutation model and the ability to be easily extended to genotypic, haploid and protein structure data.
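The sample-coverage step can be illustrated with the closely related Chao1 lower-bound estimator, which infers unseen alleles from the counts of rare ones. This is a sketch of the general idea only; it is not necessarily the exact coverage estimator used in the thesis, and the example counts are invented.

```python
import numpy as np

def chao1(counts):
    """Chao1 lower bound on the number of alleles in the population,
    from per-allele sample counts: S_obs + f1^2 / (2 f2)."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)               # alleles actually observed
    f1 = np.sum(counts == 1)                 # singletons
    f2 = np.sum(counts == 2)                 # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected variant
    return s_obs + f1 ** 2 / (2.0 * f2)

# counts for ten observed alleles; the four singletons suggest that
# rare alleles remain unseen, so the estimate exceeds ten
counts = [120, 45, 10, 3, 2, 2, 1, 1, 1, 1]
est = chao1(counts)
```

Here S_obs = 10, f1 = 4 and f2 = 2, so the estimate is 10 + 16/4 = 14 alleles. The appeal, as in the abstract, is that no mutation model enters the calculation.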

8

Feng, Chunlin, and 馮淳林. "Robust estimation methods for image matching." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29752693.

Full text
9

Er, Fikret. "Robust methods in statistical shape analysis." Thesis, University of Leeds, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342394.

Full text
10

Kudo, Jun S. M. Massachusetts Institute of Technology. "Robust adaptive high-order RANS methods." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95563.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 89-94).
The ability to achieve accurate predictions of turbulent flow over arbitrarily complex geometries proves critical in the advancement of aerospace design. However, quantitatively accurate results from modern Computational Fluid Dynamics (CFD) tools are often accompanied by intractably high computational expenses and are significantly hindered by the lack of automation. In particular, the generation of a suitable mesh for a given flow problem often requires significant amounts of human input. This process however encounters difficulties for turbulent flows which exhibit a wide range of length scales that must be spatially resolved for an accurate solution. Higher-order adaptive methods are attractive candidates for addressing these deficiencies by promising accurate solutions at a reduced cost in a highly automated fashion. However, these methods in general are still not robust enough for industrial applications and significant advances must be made before the true realization of robust automated three-dimensional turbulent CFD. This thesis presents steps towards this realization of a robust high-order adaptive Reynolds-Averaged Navier-Stokes (RANS) method for the analysis of turbulent flows. Specifically, a discontinuous Galerkin (DG) discretization of the RANS equations and an output-based error estimation with an associated mesh adaptation algorithm is demonstrated. To improve the robustness associated with the RANS discretization, modifications to the negative continuation of the Spalart-Allmaras turbulence model are reviewed and numerically demonstrated on a test case. An existing metric-based adaptation framework is adopted and modified to improve the procedure's global convergence behavior. The resulting discretization and modified adaptation procedure is then applied to two-dimensional and three-dimensional turbulent flows to demonstrate the overall capability of the method.
by Jun Kudo.
S.M.
11

Mantzaflaris, Angelos. "Robust algebraic methods for geometric computing." Nice, 2011. http://www.theses.fr/2011NICE4030.

Full text
Abstract:
Geometric computation in computer aided geometric design and solid modelling calls for solving non-linear polynomial systems in an approximate yet certified manner. We introduce new subdivision algorithms that tackle this fundamental problem. In particular, we generalize the univariate so-called continued fraction solver to general dimension. Fast bounding functions, unicity tests, projection and preconditioning are employed to speed up convergence. Apart from practical experiments, we provide theoretical bit complexity estimates, as well as bounds in the real RAM model, by means of real condition numbers. A main bottleneck for any real solving method is singular isolated points. We employ local inverse systems and certified numerical computations to provide certification criteria to treat singular solutions. In doing so, we are able to check existence and uniqueness of singularities of a given multiplicity structure using verification methods based on interval arithmetic and fixed point theorems. Two major geometric applications are undertaken. First, the approximation of planar semi-algebraic sets, commonly occurring in constraint geometric solving. We present an efficient algorithm to identify connected components and, for a given precision, to compute polygonal and isotopic approximations of the exact set. Second, we present an algebraic framework to compute generalized Voronoi diagrams, applicable to any diagram type in which the distance from a site can be expressed by a bivariate polynomial function (anisotropic, power diagram, etc.). In cases where this is not possible (e.g. the Apollonius diagram, the Voronoi diagram of ellipses and so on), we extend the theory to implicitly given distance functions.
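A toy one-variable version of the subdivision idea can be sketched with a naive interval-arithmetic exclusion test: boxes whose enclosure of the polynomial excludes zero are certifiably root-free and pruned, the rest are halved until tiny. The thesis's continued-fraction solver, multivariate generalisation and certification machinery are far more sophisticated; everything below is an invented illustration.

```python
import numpy as np

def horner_interval(coeffs, lo, hi):
    """Enclose the range of a polynomial (coeffs, highest degree first)
    over [lo, hi] by naive interval-arithmetic Horner evaluation."""
    rlo, rhi = 0.0, 0.0
    for c in coeffs:
        prods = (rlo * lo, rlo * hi, rhi * lo, rhi * hi)
        rlo, rhi = min(prods) + c, max(prods) + c
    return rlo, rhi

def subdivide(coeffs, lo, hi, tol=1e-6, boxes=None):
    """Keep halving the box; discard it as soon as the interval bound
    certifies there is no root inside, keep it once it is tiny."""
    if boxes is None:
        boxes = []
    plo, phi = horner_interval(coeffs, lo, hi)
    if plo > 0.0 or phi < 0.0:        # certified root-free: prune
        return boxes
    if hi - lo < tol:                 # candidate box containing a root
        boxes.append((lo, hi))
        return boxes
    mid = 0.5 * (lo + hi)
    subdivide(coeffs, lo, mid, tol, boxes)
    subdivide(coeffs, mid, hi, tol, boxes)
    return boxes

# p(x) = (x - 1)(x - 2)(x + 0.5) = x^3 - 2.5 x^2 + 0.5 x + 1
boxes = subdivide([1.0, -2.5, 0.5, 1.0], -4.0, 4.0)
```

The exclusion test is one-sided, in the spirit of the certified methods above: a pruned box provably contains no root, while a surviving box is only a candidate whose confirmation would need the kind of unicity tests the abstract mentions.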
12

Aranda, Cotta Higor Henrique. "Robust methods in multivariate time series." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC064.

Full text
Abstract:
This manuscript proposes new robust estimation methods for the autocovariance and autocorrelation matrix functions of stationary multivariate time series that may contain random additive outliers. These functions play an important role in the identification and estimation of time series model parameters. We first propose new estimators of the autocovariance and autocorrelation matrix functions constructed using a spectral approach based on the periodogram matrix, which is the natural estimator of the spectral density matrix. As in the case of the classical estimators of the autocovariance and autocorrelation matrix functions, these estimators are affected by aberrant observations, so any identification or estimation procedure using them is directly affected, which leads to erroneous conclusions. To mitigate this problem, we propose the use of robust statistical techniques to create estimators resistant to aberrant random observations. As a first step, we propose new estimators of the autocovariance and autocorrelation functions of univariate time series. The time and frequency domains are linked by the relationship between the autocovariance function and the spectral density. As the periodogram is sensitive to aberrant data, we obtain a robust estimator by replacing it with the $M$-periodogram. The $M$-periodogram is obtained by replacing the Fourier coefficients of the periodogram, calculated by standard least-squares regression, with ones calculated by robust $M$-regression. The asymptotic properties of the estimators are established. Their performance is studied by means of numerical simulations for different sample sizes and different contamination scenarios. The empirical results indicate that the proposed methods provide values close to those obtained by the classical autocorrelation function when the data are not contaminated, and are resistant to different contamination scenarios.
Thus, the estimators proposed in this thesis are alternative methods that can be used for time series with or without outliers. The estimators obtained for univariate time series are then extended to the case of multivariate series. This extension is simplified by the fact that the calculation of the cross-periodogram only involves the Fourier coefficients of each univariate component series. Thus, the $M$-periodogram matrix is a robust alternative to the periodogram matrix for building robust estimators of the autocovariance and autocorrelation matrix functions. The asymptotic properties are studied and numerical experiments are performed. As an example of an application with real data, we use the proposed functions to fit an autoregressive model by the Yule-Walker method to pollution data collected in the Vitória region of Brazil. Finally, the robust estimation of the number of factors in large factor models is considered in order to reduce dimensionality. It is well known that random additive outliers affect the covariance and correlation matrices, and that techniques depending on the calculation of their eigenvalues and eigenvectors, such as principal component analysis and factor analysis, are affected as well. Thus, in the presence of outliers, the information criteria proposed by Bai & Ng (2002) tend to overestimate the number of factors. To alleviate this problem, we propose to replace the standard covariance matrix with the robust covariance matrix proposed in this manuscript. Our Monte Carlo simulations show that, in the absence of contamination, the standard and robust methods are equivalent. In the presence of outliers, the number of estimated factors increases with the non-robust methods while it remains the same using robust methods. As an application with real data, we study PM$_{10}$ pollutant concentrations measured in the Île-de-France region of France.
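The construction described above, replacing the least-squares Fourier coefficients with $M$-regression ones, can be sketched in the univariate case as follows. The Huber weights, tuning constant and normalisation here are illustrative choices, not the manuscript's exact definitions, and the simulated series is invented.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-regression by iteratively reweighted least squares,
    with a MAD-based robust scale estimate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

def periodogram(y, freqs, robust):
    """Periodogram ordinates from per-frequency harmonic regressions;
    robust=True swaps least squares for Huber M-regression."""
    n, t = len(y), np.arange(len(y))
    out = np.empty(len(freqs))
    for i, lam in enumerate(freqs):
        Xf = np.column_stack([np.cos(lam * t), np.sin(lam * t)])
        a, b = (huber_irls(Xf, y) if robust
                else np.linalg.lstsq(Xf, y, rcond=None)[0])
        out[i] = n * (a * a + b * b) / (8.0 * np.pi)
    return out

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
lam0 = 2.0 * np.pi * 32.0 / n                 # true signal frequency
y = np.cos(lam0 * t) + 0.3 * rng.normal(size=n)
y[10] += 50.0                                 # one additive outlier
freqs = 2.0 * np.pi * np.arange(1, n // 2) / n
P_cls = periodogram(y, freqs, robust=False)   # outlier leaks everywhere
P_rob = periodogram(y, freqs, robust=True)    # outlier is downweighted
```

A single spike spreads its energy across every classical ordinate, while the robust regression downweights it, leaving the background near the noise level and the signal peak intact; this is the qualitative behaviour the contamination experiments in the manuscript quantify.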
13

Singh, Jagmeet 1980. "Comparative analysis of robust design methods." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35630.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006.
Includes bibliographical references (p. 161-163).
Robust parameter design is an engineering methodology intended as a cost effective approach to improve the quality of products, processes and systems. Control factors are those system parameters that can be easily controlled and manipulated. Noise factors are those system parameters that are difficult and/or costly to control and are presumed uncontrollable. Robust parameter design involves choosing optimal levels of the controllable factors in order to obtain a target or optimal response with minimal variation. Noise factors bring variability into the system, thus affecting the response. The aim is to properly choose the levels of control factors so that the process is robust or insensitive to the variation caused by noise factors. Robust parameter design methods are used to make systems more reliable and robust to incoming variations in environmental effects, manufacturing processes and customer usage patterns. However, robust design can become expensive, time consuming, and/or resource intensive. Thus research that makes robust design less resource intensive and requires less number of experimental runs is of great value. Robust design methodology can be expressed as multi-response optimization problem.
(cont.) The objective functions of the problem being: maximizing reliability and robustness of systems, minimizing the information and/or resources required for robust design methodology, and minimizing the number of experimental runs needed. This thesis discusses various noise factor strategies which aim to reduce number of experimental runs needed to improve quality of system. Compound Noise and Take-The-Best-Few Noise Factors Strategy are such noise factor strategies which reduce experimental effort needed to improve reliability of systems. Compound Noise is made by combing all the different noise factors together, irrespective of the number of noise factors. But such a noise strategy works only for the systems which show effect sparsity. To apply the Take-The-Best-Few Noise Factors Strategy most important noise factors in system's noise factor space are found. Noise factors having significant impact on system response variation are considered important. Once the important noise factors are identified, they are kept independent in the noise factor array. By selecting the few most important noise factors for a given system, run size of experiment is minimized.
(cont.) The Take-The-Best-Few Noise Factors Strategy is very effective for all kinds of systems, irrespective of their effect sparsity; it generally achieves nearly 80% of the possible improvement. This thesis also investigates the influence of the correlation and variance of the induced noise on the quality of a system. For systems that do not contain any significant three-factor interactions, correlation among noise factors can be neglected, and hence the amount of information needed to improve the quality of such systems is reduced.
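The run-size savings these strategies target can be illustrated with a toy simulation (all factor names, sensitivities and the response function below are hypothetical, not taken from the thesis): for an effect-sparse system, a 2-run compound noise pair identifies the same robust control setting as the full 8-run noise factorial.

```python
import itertools

# Toy response: one control factor c (0/1), three noise factors n1..n3 (each +/-1).
# Effect sparsity: n1 dominates the noise-induced variation.
def response(c, n1, n2, n3):
    sensitivity = 2.0 if c == 0 else 0.5   # setting c = 1 desensitizes the system
    return 10.0 + sensitivity * (1.0 * n1 + 0.2 * n2 + 0.1 * n3)

def spread_full_factorial(c):
    """Range of the response over the full 2^3 noise array (8 runs)."""
    ys = [response(c, *n) for n in itertools.product((-1, 1), repeat=3)]
    return max(ys) - min(ys)

def spread_compound_noise(c):
    """Range over the 2-run compound noise strategy: all noise factors
    at their low extreme, then all at their high extreme."""
    ys = [response(c, -1, -1, -1), response(c, 1, 1, 1)]
    return max(ys) - min(ys)

best_full = min((0, 1), key=spread_full_factorial)
best_compound = min((0, 1), key=spread_compound_noise)
print(best_full, best_compound)  # both strategies select c = 1
```

The 2-run compound noise pair captures the dominant noise effect here because n1 dwarfs n2 and n3, which is exactly the effect-sparsity condition the thesis identifies as necessary for the strategy to work.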
by Jagmeet Singh.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
14

Mirza, Muhammad Javed. "Robust methods in range image understanding /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu148777912090705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Heo, Gyeongyong. "Robust kernel methods in context-dependent fusion." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Shiyu. "Nonparametric robust control methods for powertrain control." Thesis, University of Liverpool, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Whitehouse, Emily J. "Robust methods in univariate time series models." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/41868/.

Full text
Abstract:
The size and power properties of a hypothesis test typically depend on a series of factors which are unobservable in practice. A branch of the econometric literature therefore considers robust testing methodologies that achieve good size-control and competitive power across a range of differing circumstances. In this thesis we discuss robust tests in three areas of time series econometrics: detection of explosive processes, unit root testing against nonlinear alternatives, and forecast evaluation in small samples. Recent research has proposed a method of detecting explosive processes that is based on forward recursions of OLS, right-tailed, Dickey-Fuller [DF] unit root tests. In Chapter 2 an alternative approach using GLS DF tests is considered. We derive limiting distributions for both mean-invariant and trend-invariant versions of OLS and GLS variants of the Phillips, Wu and Yu (2011) [PWY] test statistic under a temporary, locally explosive alternative. These limits are dependent on both the value of the initial condition and the start and end points of the temporary explosive regime. Local asymptotic power simulations show that a GLS version of the PWY statistic offers superior power when a large proportion of the data is explosive, but that the OLS approach is preferred for explosive periods of short duration. These power differences are magnified by the presence of an asymptotically non-negligible initial condition. We propose a union of rejections procedure that capitalises on the respective power advantages of both OLS and GLS-based approaches. This procedure achieves power close to the effective envelope provided by the two individual PWY tests across all settings of the initial condition and length of the explosive period considered in this chapter. We show that these results are also robust to the point in the sample at which the temporary explosive regime occurs. 
An application of the union procedure to NASDAQ daily prices confirms the empirical value of this testing strategy. Chapter 3 examines the local power of unit root tests against globally stationary exponential smooth transition autoregressive [ESTAR] alternatives under two sources of uncertainty: the degree of nonlinearity in the ESTAR model, and the presence of a linear deterministic trend. First, we show that the Kapetanios, Shin and Snell (2003) [KSS] test for nonlinear stationarity has local asymptotic power gains over standard Dickey-Fuller [DF] tests for certain degrees of nonlinearity in the ESTAR model, but that for other degrees of nonlinearity, the linear DF test has superior power. Second, we derive limiting distributions of demeaned, and demeaned and detrended KSS and DF tests under a local ESTAR alternative when a local trend is present in the DGP. We show that the power of the demeaned tests outperforms that of the detrended tests when no trend is present in the DGP, but deteriorates as the magnitude of the trend increases. We propose a union of rejections testing procedure that combines all four individual tests and show that this captures most of the power available from the individual tests across different degrees of nonlinearity and trend magnitudes. We also show that incorporating a trend detection procedure into this union testing strategy can result in higher power when a large trend is present in the DGP. An empirical application of our proposed union of rejections procedures to energy consumption data in 180 countries shows the value of these procedures in practical cases. In Chapter 4 we show that when computing standard Diebold-Mariano-type tests for equal forecast accuracy and forecast encompassing, the long-run variance can frequently be negative when dealing with multi-step-ahead predictions in small, but empirically relevant, sample sizes. 
We subsequently consider a number of alternative approaches to dealing with this problem, including direct inference in the problem cases and use of long-run variance estimators that guarantee positivity. The finite sample size and power of the different approaches are evaluated using an extensive Monte Carlo simulation exercise. Overall, for multi-step-ahead forecasts, we find that the recently proposed Coroneo and Iacone (2015) test, which is based on a weighted periodogram long-run variance estimator, offers the best finite sample size and power performance.
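The union of rejections idea used throughout the thesis can be stated in a few lines (a schematic sketch; the critical values and the size-correcting scaling constant below are illustrative placeholders, not values derived in the thesis):

```python
def union_of_rejections(stat_ols, stat_gls, cv_ols, cv_gls, psi=1.05):
    """Right-tailed union of rejections: reject the unit root null in favour
    of an explosive alternative if EITHER statistic exceeds its (scaled)
    critical value.  psi > 1 inflates the individual critical values so the
    union retains the nominal size; its actual value is found by simulation."""
    return stat_ols > psi * cv_ols or stat_gls > psi * cv_gls

# Hypothetical values: the GLS test alone would not reject, but the OLS one does.
print(union_of_rejections(stat_ols=2.3, stat_gls=0.9, cv_ols=1.9, cv_gls=1.6))
```

The union thereby inherits the power advantage of whichever individual test is stronger in the prevailing (unobserved) circumstances, at the cost of the small size correction psi.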
APA, Harvard, Vancouver, ISO, and other styles
18

Fallah, Alireza. "Robust accelerated gradient methods for machine learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122881.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 91-95).
In this thesis, we study the problem of minimizing a smooth and strongly convex function, which arises in different areas, including regularized regression problems in machine learning. To solve this optimization problem, we consider first order methods, which are popular due to their scalability with large data sets, and we study the case where exact gradient information is not available. In this setting, a naive implementation of classical first order algorithms need not converge and can even accumulate noise. This motivates considering the robustness of an algorithm to noise as another metric in designing fast algorithms. To address this problem, we first propose a definition of the robustness of an algorithm in terms of the asymptotic expected suboptimality of its iterate sequence with respect to the input noise power.
We focus on Gradient Descent and Accelerated Gradient methods and develop a framework based on a dynamical system representation of these algorithms to characterize their convergence rate and robustness to noise using tools from control theory and optimization. We provide explicit expressions for the convergence rate and robustness of both algorithms in the quadratic case, and also derive tractable and tight upper bounds for general smooth and strongly convex functions. We also develop a computational framework for choosing the parameters of these algorithms to achieve a particular trade-off between robustness and rate. As a second contribution, we consider algorithms that can reach optimality (obtaining perfect robustness). The past literature provides lower bounds on the rate of decay of suboptimality in terms of the initial distance to optimality (in the deterministic case) and the error due to gradient noise (in the stochastic case).
We design a novel multistage and accelerated universally optimal algorithm that can achieve both of these lower bounds simultaneously without knowledge of initial optimality gap or noise characterization. We finally illustrate the behavior of our algorithm through numerical experiments.
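The rate/robustness trade-off described here can be illustrated numerically (a toy sketch on a scalar quadratic, not the thesis's framework): with noisy gradients, a large stepsize converges quickly but settles at a high noise floor, while a small stepsize is slower but more robust.

```python
import random

def noisy_gd(step, iters=2000, noise_std=0.1, seed=0):
    """Gradient descent on f(x) = 0.5 * x^2 (so grad f(x) = x) with
    additive zero-mean gradient noise; returns the average |x| over the
    last 500 iterations, a proxy for the asymptotic suboptimality
    ('noise floor') of the method."""
    rng = random.Random(seed)
    x, tail = 5.0, []
    for k in range(iters):
        g = x + rng.gauss(0.0, noise_std)  # noisy gradient oracle
        x -= step * g
        if k >= iters - 500:
            tail.append(abs(x))
    return sum(tail) / len(tail)

fast, slow = noisy_gd(step=0.9), noisy_gd(step=0.1)
# The smaller stepsize is slower but more robust: lower asymptotic error.
print(slow < fast)  # True
```

For this linear recursion the steady-state error variance is step * noise_std**2 / (2 - step), so shrinking the stepsize provably lowers the noise floor while slowing the linear rate (1 - step), which is the trade-off the thesis quantifies for general strongly convex functions.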
by Alireza Fallah.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Ju Hee. "Robust Statistical Modeling through Nonparametric Bayesian Methods." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275399497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

McCaskey, Suzanne D. "Robust design of dynamic systems." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/24223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Arif, Omar. "Robust target localization and segmentation using statistical methods." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33882.

Full text
Abstract:
This thesis aims to contribute to the area of visual tracking, which is the process of identifying an object of interest through a sequence of successive images. The thesis explores kernel-based statistical methods, which map the data to a higher dimensional space. A pre-image framework is provided to find the mapping from the embedding space to the input space for several manifold learning and dimensionality reduction algorithms. Two algorithms are developed for visual tracking that are robust to noise and occlusions. In the first algorithm, a kernel PCA-based eigenspace representation is used. The de-noising and clustering capabilities of the kernel PCA procedure lead to a robust algorithm. This framework is extended to incorporate background information in an energy-based formulation, which is minimized using graph cuts, and to track multiple objects using a single learned model. In the second method, a robust density comparison framework is developed and applied to visual tracking, where an object is tracked by minimizing the distance between a model distribution and given candidate distributions. The superior performance of kernel-based algorithms comes at the price of increased storage and computational requirements. A novel method is developed that takes advantage of the universal approximation capabilities of generalized radial basis function neural networks to reduce the computational and storage requirements of kernel-based methods.
APA, Harvard, Vancouver, ISO, and other styles
22

Ramström, Marcus, and Mattias Gungner. "Robust repair methods of primary structures in composite." Thesis, Linköpings universitet, Hållfasthetslära, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94362.

Full text
Abstract:
A request for a material change when performing repairs on composite parts of SAAB's JAS 39 Gripen led to the initiation of this project. The aim is to create a quicker and more robust repair method. The requested method of repair is to use a direct-cured repair patch made of CFRP fabric instead of CFRP tape and to mount the patch with a scarf joint, see Figure 1.1. The fabric patch should then provide a robust quasi-isotropic repair in which the operator is not dependent on complete design data such as ply directions. Today, tape repairs are made on tape laminate and fabric repairs on fabric laminate; the new method is to repair tape laminate with a fabric patch. This project evaluates the possibility of implementing this method. The work started with a literature study to find out how repairs on composite parts of the airframe are performed today. SAAB's in-house analytical tools were then used to try to predict the results and examine some of the details of the questions at issue. Finite element models were then constructed to simulate a previous physical test programme conducted to validate a repair method using a step joint and a direct-cured repair patch. If the FE models show results similar to the physical tests, the results from the FE models can be assumed to be credible. The results of this project indicate that the change from tape to fabric in the repair patch can be made without disturbing the load path of a quasi-isotropic composite laminate. Fabric repairs in orthotropic composite plates result in a knock-down of about 40%. The use of a scarf joint instead of a step joint should also work well, as the repair patches show similar strains at their centres. The difference between the step joint and the scarf joint is the strain near the edge of the patch: it increases with the scarf joint, which may lead to earlier fibre failure in the repair patch.
Results from the analysis of the bonded joint indicate that a scarf joint yields a lower and more evenly distributed shear stress than a step joint. This indicates that the bonded step joint will reach failure earlier than the scarf joint.
APA, Harvard, Vancouver, ISO, and other styles
23

Jugessur, Deeptiman. "Robust object recognition using local appearance based methods." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33472.

Full text
Abstract:
In this thesis we present an approach to appearance-based object recognition using single camera images. The approach is based on using an attention mechanism to obtain visual features that are generic, robust and informative. The features themselves are recognized using principal components in the frequency domain. We show that we can perform robust appearance based object recognition by using the visual characteristics of only a small number of such features. The technique is robust to planar translations and rotations of the object being recognized due to our polar sampling in the frequency domain. We are able to recognize objects on different types of background clutter due to a masking technique we've developed. We are also able to handle a degree of occlusion as we make use of multiple features for the purposes of recognition.
The same approach is further applied in the field of robotics to provide a means for the automatic recognition of locations or landmarks in scenes typically encountered by mobile robots. Hence instead of only recognizing objects, we also present a means of using the same computational model to recognize locations, thus performing coarse localization.
APA, Harvard, Vancouver, ISO, and other styles
24

Talaya, López Julià. "Algorithms and Methods for Robust Geodetic kinematic Positioning." Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/5846.

Full text
Abstract:
The NAVSTAR/GPS system has played a very important role in geodetic kinematic positioning techniques, especially in the determination of trajectories for the orientation of airborne Earth-observation sensors. With the exception of traditional photogrammetric sensors, the orientation of modern airborne sensors depends entirely on GPS positioning or on GPS/INS integration. GPS positioning must therefore be precise and, above all, reliable.

This thesis studies new algorithms and mission configurations that increase the levels of robustness and reliability in the determination of airborne kinematic trajectories. From a production point of view, robustness is very important, since it is the key to automating data-processing systems.

Specifically, the thesis proposes modelling the ionospheric and tropospheric delays that affect GPS observables by means of stochastic parameters; methods for combining the data of several GPS reference stations, introducing constraints among the different parameters to be determined and taking into account the correlations between the observations; and strategies for selecting the situations most favourable to resolving the cycle ambiguities that affect precise GPS observables. Their effects on the robustness and reliability of kinematic GPS positioning are also studied.

Of particular note is the proposed integration of the observables of several kinematic GPS receivers in a multi-antenna configuration, using the angular observations of an IMU (Inertial Measurement Unit), to achieve more robust and reliable kinematic positioning. This technique opens up the possibility of overcoming occlusions of the GPS satellites during aircraft turning manoeuvres, which are very frequent in area-coverage flights for Earth-observation missions.

Several ideas for integrating kinematic GPS positioning with the orientation of airborne sensors are presented and analysed. The use of information obtained through the indirect (total or partial) orientation of certain sensors to aid the resolution of the ambiguity affecting the GPS carrier-phase observable is studied. In particular, the integration of the orientation data of an aerial photogrammetric camera and of a laser altimeter sensor with observations of the GPS satellite constellation is presented.

The work is completed with a study of trajectory determination using simulated data from the new satellite constellations (GPS III and Galileo) that are currently under construction and deployment.
The NAVSTAR Global Positioning System, most commonly known as GPS, has played an important role in the development of high precision geodetic positioning techniques. The possibility of using the GPS constellation for kinematic geodetic positioning has provided the geodetic community with a very important tool on its goal to portrait the Earth's shape.

This work focuses on the reliability of geodetic kinematic GPS positioning. Different algorithms and methods for increasing the reliability of kinematic surveys are presented. An increase in reliability implies better chances of solving correct ambiguity parameters, and more redundancy simplifies the automation of the GPS processing. Automating kinematic GPS processing reduces the need for very well trained GPS operators, as well as operational costs.

Several ideas are presented to increase the amount of information available in kinematic GPS processing, such as using several reference stations, dynamical models for the ionosphere, and global processing. Although some of these ideas have been presented previously, a study of their impact on the reliability of surveys has been done.

A novel approach to using multiple kinematic receivers without adding new position parameters, by making use of inertial measurements, is presented. Its positive impact on reliability has also been demonstrated.

In aerial surveys, GPS kinematic positioning is generally used for georeferencing data taken by airborne sensors. The use of the data observed by these sensors to facilitate GPS positioning is also part of the study. The integration of oriented photogrammetric pairs or laser distance measurements with kinematic GPS positioning has been investigated, and has proved very helpful in real-life projects.

Finally, the increase in reliability in new constellation scenarios (modernized GPS and Galileo) has also been analyzed in order to know what the situation in future scenarios will be like.
APA, Harvard, Vancouver, ISO, and other styles
25

Hong, Zhihong. "Robust Coding Methods For Space-Time Wireless Communications." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020117-144929.

Full text
Abstract:

HONG, ZHIHONG. Robust Coding Methods For Space-Time Wireless Communications. (Under the direction of Dr. Brian L. Hughes.) Space-time coding can exploit the presence of multiple transmit and receive antennas to increase diversity, spectral efficiency, and received power, and thereby improve the performance of wireless communication systems. Thus far, most work on space-time coding has assumed highly idealized channel fading conditions (e.g., quasi-static or ideal fast fading) as well as perfect channel state information at the receiver. Both of these assumptions are often questionable in practice. In this dissertation, we present a new and general coding architecture for multi-antenna communications, which is designed to perform well under a wide variety of channel fading conditions and which (when differentially encoded) does not require accurate channel estimates at the receiver. The architecture combines serial concatenation of short, full-diversity space-time block codes with bit-interleaved coded modulation. Under slow fading conditions, we show that codes constructed in this way achieve full diversity and perform close to the best known space-time trellis codes of comparable complexity. Under fast fading conditions, we show that these same codes can achieve higher diversity than all previously known codes of the same complexity. When used with differential space-time modulation, these codes can be reliably detected with or without channel estimates at the transmitter or receiver. Moreover, when iterative decoding is applied, the performance of these codes can be further improved.
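For context, the canonical short, full-diversity space-time block code of the kind such architectures concatenate is Alamouti's two-antenna code; the sketch below (not code from the dissertation) shows the code matrix and the orthogonality that enables simple linear detection.

```python
def alamouti_encode(s1, s2):
    """Alamouti 2x2 space-time block code: rows are time slots, columns
    are transmit antennas.  Over two slots, antenna 1 sends (s1, -s2*)
    and antenna 2 sends (s2, s1*)."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

X = alamouti_encode(1 + 1j, 1 - 2j)
# Orthogonality of the code matrix (X^H X proportional to the identity)
# is what gives full transmit diversity with simple linear decoding.
col1 = [X[0][0], X[1][0]]
col2 = [X[0][1], X[1][1]]
inner = sum(a.conjugate() * b for a, b in zip(col1, col2))
print(inner)  # 0j
```

Because the columns are orthogonal for any symbol pair, the receiver can separate s1 and s2 with two scalar operations instead of a joint search, which is why such codes make attractive inner codes in a concatenated architecture.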

APA, Harvard, Vancouver, ISO, and other styles
26

Chassein, André [Verfasser]. "Robust Optimization: Complexity and Solution Methods / André Chassein." München : Verlag Dr. Hut, 2017. http://d-nb.info/1135596034/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Rayamajhi, Milan. "Efficient methods for robust shape optimisation for crashworthiness." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8902.

Full text
Abstract:
Recently, complex geometry and detailed Finite Element (FE) models have been used to capture the true behaviour of structures for crashworthiness. Such model complexity, the detail of the FE models, the high non-linearity of crash cases and the large number of design variables for crashworthiness optimisation all add to the required computational effort. Engineering optimisation problems are therefore currently highly restricted in exploring the entire design space and in including the desired number of design parameters, so it is advantageous to reduce the computational effort, both to fully explore the design alternatives and to study even more complex and computationally expensive problems. This thesis presents an efficient robust shape optimisation approach via the use of physical surrogate models, i.e. sub-models and models derived for the Equivalent Static Loads Method (ESLM). The classical simultaneous robust design optimisation (RDO) approach (in which the robustness of each design is assessed) is modified to make use of the physical surrogate models. In the proposed RDO approach, design optimisations are made using sub-models and robustness analyses are made using either non-linear dynamic analysis or ESLM. The general idea is to approximate the robustness of designs at the start of the optimisation (using ESLM) and to use accurate robustness evaluations (via non-linear dynamic analysis) towards the end of the optimisation, when the optimisation has already found interesting regions of the design space. The approach is validated on crashworthiness design cases.
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Sharen Woon Yee. "Bayesian methods for the construction of robust chronologies." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:49c30401-9442-441f-b6b5-1539817e2c95.

Full text
Abstract:
Bayesian modelling is a widely used, powerful approach for reducing absolute dating uncertainties in archaeological research. It is important that the methods used in chronology building are robust and reflect substantial prior knowledge. This thesis focuses on the development and evaluation of two novel, prior models: the trapezoidal phase model; and the Poisson process deposition model. Firstly, the limitations of the trapezoidal phase model were investigated by testing the model assumptions using simulations. It was found that a simple trapezoidal phase model does not reflect substantial prior knowledge and the addition of a non-informative element to the prior was proposed. An alternative parameterisation was also presented, to extend its use to a contiguous phase scenario. This method transforms the commonly-used abrupt transition model to allow for gradual changes. The second phase of this research evaluates the use of Bayesian model averaging in the Poisson process deposition model. The use of model averaging extends the application of the Poisson process model to remove the subjectivity involved in model selection. The last part of this thesis applies these models to different case studies, including attempts at resolving the Iron Age chronological debate in Israel, at determining the age of an important Quaternary tephra, at refining a cave chronology, and at more accurately modelling the mid-Holocene elm decline in the British Isles. The Bayesian methods discussed in this thesis are widely applicable in modelling situations where the associated prior assumptions are appropriate. Therefore, they are not limited to the case studies addressed in this thesis, nor are they limited to analysing radiocarbon chronologies.
APA, Harvard, Vancouver, ISO, and other styles
29

Lottes, James William. "Toward robust algebraic multigrid methods for nonsymmetric problems." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:f9ac1d47-6d6a-41a9-99b9-2559981a9ba3.

Full text
Abstract:
When analyzing symmetric problems and the methods for solving them, multigrid and algebraic multigrid in particular, one of the primary tools at the analyst's disposal is the energy norm associated with the problem. The lack of this tool is one of the many reasons analysis of nonsymmetric problems and methods for solving them is substantially more difficult than in the symmetric case. We show that there is an analog to the energy norm for a nonsymmetric matrix A, associated with a new absolute value we term the "form" absolute value. This new absolute value can be described as a symmetric positive definite solution to the matrix equation A*|A|⁻¹A = |A|; it exists and is unique in particular whenever A has positive symmetric part. We then develop a novel convergence theory for a general two-level multigrid iteration for any such A, making use of the form absolute value. In particular, we derive a convergence bound in terms of a smoothing property and separate approximation properties for the interpolation and restriction (a novel feature). Finally, we present new algebraic multigrid heuristics designed specifically targeting this new theory, which we evaluate with numerical tests.
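The defining matrix equation can be explored numerically. The averaging iteration below is a heuristic sketch for computing a candidate form absolute value (it is not an algorithm from the thesis, and its convergence is only verified here on the example shown):

```python
import numpy as np

def form_abs(A, iters=60):
    """Heuristic fixed-point iteration for an SPD solution X of
    A* X^{-1} A = X (the 'form' absolute value).  Start from the
    symmetric part of A (assumed positive definite) and average,
    Babylonian-square-root style.  Convergence is not guaranteed
    in general; it is checked numerically below."""
    X = 0.5 * (A + A.conj().T)
    for _ in range(iters):
        X = 0.5 * (X + A.conj().T @ np.linalg.inv(X) @ A)
    return X

A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # positive symmetric part (2*I)
X = form_abs(A)
# Verify the defining equation A* X^{-1} A = X:
print(np.allclose(A.T @ np.linalg.inv(X) @ A, X))  # True
```

For this normal test matrix the iteration converges to sqrt(5) times the identity, which also equals the usual polar absolute value (A*A)^{1/2}; for non-normal A the two absolute values differ in general.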
APA, Harvard, Vancouver, ISO, and other styles
30

Solano, Charris Elyn Lizeth. "Optimization methods for the robust vehicle routing problem." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0026/document.

Full text
Abstract:
This thesis addresses the Vehicle Routing Problem (VRP) under uncertainty via robust optimization, giving the Robust VRP (RVRP). First, uncertainties are incorporated into the travel times. Then, a bi-objective version of the RVRP (bi-RVRP) is considered, taking into account uncertainties in both travel times and demands. To solve the RVRP and the bi-RVRP, different methods are proposed for determining robust solutions that minimize the worst case. A Mixed Integer Linear Program (MILP), six constructive heuristics, a genetic algorithm (GA), a local search procedure and four multi-start iterative strategies are proposed: a Greedy Randomized Adaptive Search Procedure (GRASP), an Iterated Local Search (ILS), a Multi-Start ILS (MS-ILS), and an MS-ILS based on giant tours (MS-ILS-GT) converted into feasible routes by a lexicographic splitting procedure. Concerning the bi-RVRP, the total cost of the traversed arcs and the total unmet demand are minimized over all scenarios. To solve this problem, different versions of multi-objective evolutionary metaheuristics are proposed and coupled with a local search procedure: the Multiobjective Evolutionary Algorithm (MOEA) and the Non-dominated Sorting Genetic Algorithm version 2 (NSGAII). Different metrics are used to measure the efficiency, the convergence and the diversity of the solutions of all these algorithms.
This work extends the Vehicle Routing Problem (VRP) for addressing uncertainties via robust optimization, giving the Robust VRP (RVRP). First, uncertainties are handled on travel times/costs. Then, a bi-objective version (bi-RVRP) is introduced to handle uncertainty in both, travel times and demands. For solving the RVRP and the bi-RVRP different models and methods are proposed to determine robust solutions minimizing the worst case. A Mixed Integer Linear Program (MILP), several greedy heuristics, a Genetic Algorithm (GA), a local search procedure and four local search based algorithms are proposed: a Greedy Randomized Adaptive Search Procedure (GRASP), an Iterated Local Search (ILS), a Multi-Start ILS (MS-ILS), and a MS-ILS based on Giant Tours (MS-ILS-GT) converted into feasible routes via a lexicographic splitting procedure. Concerning the bi-RVRP, the total cost of traversed arcs and the total unmet demand are minimized over all scenarios. To solve the problem, different variations of multiobjective evolutionary metaheuristics are proposed and coupled with a local search procedure: the Multiobjective Evolutionary Algorithm (MOEA) and the Non-dominated Sorting Genetic Algorithm version 2 (NSGAII). Different metrics are used to measure the efficiency, the convergence as well as the diversity of solutions for all these algorithms
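The min-max objective at the core of the RVRP can be sketched in a few lines (toy data and names, purely illustrative, not one of the thesis's algorithms): each candidate solution is scored by its total cost under the worst travel-time scenario, and the robust solution minimizes that score.

```python
def route_cost(route, times):
    """Cost of one route under a scenario: depot (node 0) -> ... -> depot."""
    path = [0] + route + [0]
    return sum(times[a][b] for a, b in zip(path, path[1:]))

def worst_case_cost(routes, scenarios):
    """Robust (min-max) objective: cost of the solution in its worst scenario."""
    return max(sum(route_cost(r, t) for r in routes) for t in scenarios)

# Two travel-time scenarios over 3 nodes (node 0 is the depot), toy numbers.
nominal = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]
delayed = [[0, 9, 6], [9, 0, 3], [6, 3, 0]]  # arc 0-1 congested
sol_a = [[1, 2]]          # single route 0-1-2-0
sol_b = [[1], [2]]        # two routes 0-1-0 and 0-2-0
best = min([sol_a, sol_b], key=lambda s: worst_case_cost(s, [nominal, delayed]))
print(best)  # [[1, 2]]
```

The metaheuristics listed in the abstract (GRASP, ILS, MS-ILS, MS-ILS-GT) differ in how they explore the space of candidate solutions, but each evaluates candidates against a worst-case scenario objective of this kind.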
APA, Harvard, Vancouver, ISO, and other styles
31

Pindza, Edson. "Robust Spectral Methods for Solving Option Pricing Problems." University of the Western Cape, 2012. http://hdl.handle.net/11394/4092.

Full text
Abstract:
Doctor Scientiae - DSc
Robust Spectral Methods for Solving Option Pricing Problems by Edson Pindza. PhD thesis, Department of Mathematics and Applied Mathematics, Faculty of Natural Sciences, University of the Western Cape. Ever since the invention of the classical Black-Scholes formula to price financial derivatives, a number of mathematical models have been proposed by numerous researchers in this direction. Many of these models are in general very complex, and thus closed-form analytical solutions are rarely obtainable. In view of this, we present a class of efficient spectral methods to numerically solve several mathematical models of pricing options. We begin by solving European options. Then we move to their American counterparts, which involve a free boundary and are therefore normally difficult to price by other conventional numerical methods. We obtain very promising results for the above two types of options and therefore extend this approach to some more difficult option pricing problems, viz., jump-diffusion models and local volatility models. The numerical methods involve solving partial differential equations, partial integro-differential equations and the associated complementarity problems which are used to model the financial derivatives. In order to retain their exponential accuracy, we discuss the necessary modification of the spectral methods. Finally, we present several comparative numerical results showing the superiority of our spectral methods.
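The exponential (spectral) accuracy referred to above comes from differentiation matrices built on Chebyshev points. The sketch below shows the standard construction (Trefethen's cheb), not code from the thesis:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x, so that D @ f(x)
    approximates f'(x) with spectral accuracy (exact for polynomials of
    degree <= N, up to rounding)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # fix diagonal so each row sums to zero
    return D, x

D, x = cheb(8)
# Exact for polynomials of degree <= N: d/dx of x^3 is 3x^2.
print(np.allclose(D @ x**3, 3 * x**2))  # True
```

In an option pricing context, D (and D @ D for second derivatives) replaces the spatial derivatives of the Black-Scholes operator, turning the pricing PDE into a small dense system whose error decays exponentially in N for smooth solutions.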
APA, Harvard, Vancouver, ISO, and other styles
32

Bruffaerts, Christopher. "Contributions to robust methods in nonparametric frontier models." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209244.

Full text
Abstract:
Frontier models are currently widely used by economists, managers, and decision-makers of all kinds. In these frontier models, the researcher's goal is to assign to production units (firms, hospitals, or universities, for example) a measure of their productive efficiency. Do these units (called DMUs, Decision-Making Units) make good use of their inputs and outputs? Do they exploit their full potential in the production process?

The production set is the set of all combinations of inputs and outputs that are physically feasible in an economy. From this set, containing p inputs and q outputs, the notion of efficiency of a production unit can be defined, namely as a distance separating the DMU from the frontier of the production set. Given a sample of DMUs, the aim is to reconstruct this production frontier in order to evaluate the efficiency of the DMUs. To this end, researchers very often use so-called "classical" methods such as Data Envelopment Analysis (DEA).

Nowadays, statisticians have access to ever more data, which also means that they cannot inspect every observation in their database. Outliers may well creep into the data without attracting particular attention. Frontier models, in particular, are extremely sensitive to outliers, which can strongly influence the subsequent inference. To prevent some observations from compromising a correct analysis, robust methods are used.

Bringing robustness to the efficiency-evaluation problem is the general objective of this thesis. The first chapter sets the scene by reviewing the existing literature in this field. The four following chapters are organized as scientific articles.

Chapter 2 studies the robustness properties of a particular efficiency estimator. This estimator measures the distance between the analyzed DMU and the production frontier along a hyperbolic path through the unit. This very specific type of distance proves useful for defining directional efficiency.

Chapter 3 extends the first article to the case of directional efficiency. This type of distance generalizes all linear distances used to evaluate the efficiency of a DMU. Besides studying the robustness properties of the directional efficiency estimator, a method for detecting outliers is presented. It proves very useful for identifying influential production units in this multidimensional space (of dimension p+q).

Chapter 4 presents inference methods for efficiencies in nonparametric frontier models. In particular, resampling methods such as the bootstrap or subsampling prove very useful. First, this article shows how to improve inference on efficiencies using subsampling, and proves that using a robust efficiency estimator within resampling methods is not sufficient for reliable inference. For this reason, it then proposes a robust resampling method adapted to the efficiency-evaluation problem.

Finally, the last chapter is an empirical application. More precisely, this analysis examines the research efficiency of American public and private universities. Classical and robust methods are used to show how all the tools studied previously can be applied in practice. In particular, this study makes it possible to assess the impact on the efficiency of American institutions of variables such as teaching, internationalization, or collaboration with industry.


Doctorate in Sciences, Statistics orientation
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
33

Motamedian, Hamid Reza. "Robust Formulations for Beam-to-Beam Contact." Licentiate thesis, KTH, Hållfasthetslära (Avd.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183980.

Full text
Abstract:
Contact between beam elements is a specific category of contact problems which was introduced by Wriggers and Zavarise in 1997 for normal contact and later extended by Zavarise and Wriggers to include tangential and frictional contact. In these works, beam elements are assumed to have rigid circular cross-sections and each pair of elements cannot have more than one contact point. The method proposed in the early papers is based on introducing a gap function and calculating the incremental change of that gap function and its variation in terms of the incremental change of the nodal displacement vector and its variation. Due to the complexity of the derivations, especially for tangential contact, it is assumed that beam elements have linear shape functions. Furthermore, moments at the contact point are ignored. In the work presented in this licentiate thesis, we mostly address the questions of simplicity and robustness of implementations, which become critical once the number of contacts is large. In the first paper, we have proposed a robust formulation for normal and tangential contact of beams in 3D space to be used with a penalty stiffness method. This formulation is based on the assumption that the contact normal, tangents, and location are constant (independent of displacements) in each iteration, while they are updated between iterations. On the other hand, we have no restrictions on the shape functions of the underlying beam elements. This leads to a mathematically simpler derivation and simpler equations, as the linearization of the variation of the gap function vanishes. The results from this formulation are verified and benchmarked through comparison with the results from the previous algorithms. The proposed method shows better convergence rates, allowing for larger load steps or broader ranges for the penalty stiffness. The performance and robustness of the formulation are demonstrated through numerical examples.
In the second paper, we have suggested two alternative methods to handle in-plane rotational contact between beam elements. The first method follows the method of linearizing the variation of the gap function, originally proposed by Wriggers and Zavarise. To be able to do the calculations, we have assumed linear shape functions for the underlying beam elements. This method can be used with both penalty stiffness and Lagrange multiplier methods. In the second method, we have followed the same approach as in our first paper, that is, assuming that the contact normal is independent of the nodal displacements in each iteration, while it is updated between iterations. This method yields simpler equations and has no limitations on the shape functions to be used for the beam elements; however, it is limited to penalty stiffness methods. Both methods show comparable convergence rates, performance and stability, as demonstrated through numerical examples.

QC 20160408

APA, Harvard, Vancouver, ISO, and other styles
34

Elago, David. "Robust computational methods for two-parameter singular perturbation problems." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1693_1308039217.

Full text
Abstract:

This thesis is concerned with singularly perturbed two-parameter problems. We study a fitted finite difference method as applied on two different meshes, namely a piecewise-uniform mesh (of Shishkin type) and a graded mesh (of Bakhvalov type), as well as a fitted operator finite difference method. We notice that results on the Bakhvalov mesh are better than those on the Shishkin mesh. However, piecewise-uniform meshes provide a simpler platform for analysis and computations. Fitted operator methods are even simpler in these regards due to the ease of operating on uniform meshes. Richardson extrapolation is applied on one of the fitted mesh finite difference methods (those based on the Shishkin mesh) as well as on the fitted operator finite difference method in order to improve the accuracy and/or the order of convergence. This is our main contribution to this field, and in fact we have achieved very good results after extrapolation on the fitted operator finite difference method. Extensive numerical computations are carried out to confirm the theoretical results.
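Richardson extrapolation, the main tool mentioned above, combines two approximations computed at step sizes h and h/2 to cancel the leading O(h^p) error term. A minimal sketch on a first-order forward difference (an illustrative toy, not the thesis's singularly perturbed solver):

```python
def forward_diff(f, x, h):
    # first-order accurate forward-difference derivative approximation
    return (f(x + h) - f(x)) / h

def richardson(f, x, h, p=1):
    # combine estimates at h and h/2 to cancel the O(h^p) error term,
    # raising the order of convergence by (at least) one
    coarse = forward_diff(f, x, h)
    fine = forward_diff(f, x, h / 2)
    return (2 ** p * fine - coarse) / (2 ** p - 1)
```

In practice p is taken equal to the method's proven order of convergence, which is exactly what the error analysis on the Shishkin mesh supplies.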

APA, Harvard, Vancouver, ISO, and other styles
35

Galler, Michael. "Methods for more efficient, effective and robust speech recognition." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0032/NQ64560.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kelly, John W. "Robust, Automated Methods for Filtering and Processing Neural Signals." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/256.

Full text
Abstract:
This dissertation presents novel tools for robust filtering and processing of neural signals. These tools improve upon existing methods and were shown to be effective under a variety of conditions. They are also simple to use, allowing researchers and clinicians to focus more time on the analysis of neural data and making many tasks accessible to non-expert personnel. The main contributions of this research were the creation of a generalized software framework for neural signal processing, the development of novel algorithms to filter common sources of noise, and an implementation of a brain-computer interface (BCI) decoder as an example application. The framework has a modular structure and provides simple methods to incorporate neural signal processing tasks and applications. The software was found to maintain precise timing and reliable communication between components. A simple user interface allowed real-time control of all system parameters, and data was efficiently streamed to disk to allow for offline analysis. One common source of contamination in neural signals is line noise. A method was developed for filtering this noise with a variable bandwidth filter capable of tracking a sinusoid’s frequency. The method is based on the adaptive noise canceling (ANC) technique and is referred to here as the adaptive sinusoid canceler (ASC). This filter effectively eliminates sinusoidal contamination by tracking its frequency and achieving a narrow bandwidth. The ASC was found to outperform comparative methods including standard notch filters and an adaptive line enhancer (ALE). Ocular artifacts (OAs) caused by eye movement can also present a large problem in neural recordings. Here, a wavelet-based technique was developed for efficiently removing these artifacts. The technique uses a discrete wavelet transform with an automatically selected decomposition level to localize artifacts in both time and frequency before removing them with thresholding. 
This method was shown to produce superior reduction of OAs when compared to regression, principal component analysis (PCA), and independent component analysis (ICA). Finally, the removal of spatially correlated broadband noise such as electromyographic (EMG) artifacts was addressed. A method termed the adaptive common average reference (ACAR) was developed as an effective method for removing this noise. The ACAR is based on a combination of the common average reference (CAR) and an ANC filter. In a convergent process, a weighted CAR provides a reference to an ANC filter, which in turn provides feedback to enhance the reference. This method outperformed the standard CAR and ICA under most circumstances. As an example application for the methods developed in this dissertation, a BCI decoder was implemented using linear regression with an elastic net penalty. This decoder provides automatic feature selection and a robust feature set. The software framework was found to provide reliable data for the decoder, and the filtering algorithms increased the availability of neural features that were usable for decoding.
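The adaptive noise canceling (ANC) idea underlying the ASC can be illustrated with a plain LMS filter: a reference input correlated with the noise is adaptively filtered to predict, and subtract, the noise in the primary channel. This is a generic sketch only; the ASC described above adds sinusoid frequency tracking and a variable bandwidth not shown here.

```python
def lms_cancel(primary, reference, mu=0.02, order=4):
    # adaptive noise canceler: an LMS filter estimates the noise in
    # `primary` from the correlated `reference`; the error signal that
    # drives adaptation is also the cleaned output
    w = [0.0] * order
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        noise_est = sum(wi * xi for wi, xi in zip(w, x))
        e = primary[n] - noise_est
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
        cleaned.append(e)
    return cleaned
```

With a sinusoidal reference at the line frequency, the converged filter leaves a narrow notch at that frequency, which is the behavior the ASC makes adaptive.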
APA, Harvard, Vancouver, ISO, and other styles
37

Chung, How James T. H. "Robust video coding methods for next generation communication networks." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Xiong, Hong. "Robust adaptive methods and their applications in quadrupole resonance." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chau, Loo Kung Gustavo Ramón. "Robust Minimmun Variance Beamformer using Phase Aberration Correction Methods." Master's thesis, Pontificia Universidad Católica del Perú, 2017. http://tesis.pucp.edu.pe/repositorio/handle/123456789/8498.

Full text
Abstract:
The minimum variance (MV) beamformer is an adaptive beamforming method that has the potential to enhance the resolution and contrast of ultrasound images. Although the sensitivity of the MV beamformer to steering vector errors and array calibration errors is well-documented in other fields, in ultrasound it has been tested only under gross sound speed errors. Several robust MV beamformers have been proposed, but have mainly reported robustness only in the presence of sound speed mismatches. Additionally, the impact of phase aberration correction (PAC) methods in mitigating the effects of phase aberration in MV beamformed images has not been observed. Accordingly, this thesis report consists of two parts. In the first part, a more complete analysis of the effects of different types of aberrators on conventional MV beamforming and on a robust MV beamformer from the literature (the Eigenspace-based Minimum Variance (ESMV) beamformer) is carried out, and the effects of three PAC algorithms and their impact on the performance of the MV beamformer are analyzed (MV-PC). The comparison is carried out on Field II simulations and phantom experiments with electronic aberration and tissue aberrators. We conclude that the sensitivity to speed of sound errors and aberration limits the use of the MV beamformer in clinical applications, and that the effect of aberration is stronger than previously reported in the literature. Additionally, it is shown that under moderate and strong aberrating conditions, MV-PC is a preferable option to ESMV. In the second part, we propose a new, locally-adaptive phase aberration correction method (LAPAC), able to improve both DAS and MV beamformers, which integrates aberration correction for each point in the image domain into the formulation of the MV beamformer. The new method is tested using fullwave simulations of models of human abdominal wall, experiments with tissue aberrators, and in vivo carotid images.
The LAPAC method is compared with conventional phase aberration correction with delay-and-sum beamforming (DAS-PC) and MV-PC. The proposed method showed between 1-4 dB higher contrast than DAS-PC and MV-PC in all cases, and LAPAC-MV showed better performance than LAPAC-DAS. We conclude that LAPAC may be a viable option to enhance ultrasound image quality of both DAS and MV in the presence of clinically-relevant aberrating conditions.
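For readers unfamiliar with the method, the MV (Capon) beamformer weights minimize output power subject to unit gain in the steering direction, w = R⁻¹a / (aᴴR⁻¹a). A minimal sketch with diagonal loading as a generic robustification against covariance estimation errors (the loading is an illustrative assumption, not the PAC approach of the thesis):

```python
import numpy as np

def mv_weights(R, a, load=1e-3):
    # minimum variance (Capon) weights w = R^-1 a / (a^H R^-1 a),
    # with diagonal loading proportional to the average channel power
    n = R.shape[0]
    R_loaded = R + load * np.trace(R).real / n * np.eye(n)
    Ri_a = np.linalg.solve(R_loaded, a)
    return Ri_a / (a.conj() @ Ri_a)
```

The construction guarantees the distortionless response wᴴa = 1 regardless of R, which is what preserves the signal from the steered direction while suppressing off-axis energy.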
Thesis
APA, Harvard, Vancouver, ISO, and other styles
40

Oh, Myungho. "Robust pole assignment by output feedback using optimization methods." Thesis, University of Leicester, 1993. http://hdl.handle.net/2381/34809.

Full text
Abstract:
A robust output feedback pole assignment method, which seeks to achieve a robust solution in the sense that the assigned poles are as insensitive as possible to perturbations in the system parameters, is studied. In particular, this work is concerned with pole assignment in a specified region rather than assignment to exact positions, whereby the freedom to obtain a robust solution may be realized. The robust output feedback pole assignment problem is formulated as an optimization problem with a special structure in matrix form. Efficient optimization methods and numerical algorithms for solving such a problem are proposed by introducing a concept of the derivative of a matrix valued function. The homotopy method, which is known as a globally convergent method, is applied to solve the robust output feedback pole assignment problem to overcome possible difficulties with the choice of feasible starting point. A new algorithm based on the homotopy approach for solving the pole assignment problem is proposed. Numerical examples of the robust pole assignment problem demonstrate how the homotopy algorithm globally converges to optimal solutions regardless of initial starting points with an appropriately defined homotopy mapping. The proposed algorithms are illustrated using an aircraft case study. It is seen that the controllers obtained using robust pole assignment methods yield the robust flight control and maintain the closed-loop system properties closer to the nominal ones. They are shown to be more robust than those obtained by an alternative direct pole assignment method which is frequently used to develop aircraft control strategies without attempting to optimize any robustness criterion. Indeed, the robust output feedback pole assignment method proposed in this study is a method which can be applied for control system design to achieve one important design objective, robustness.
APA, Harvard, Vancouver, ISO, and other styles
41

Mays, James Edward. "Model robust regression: combining parametric, nonparametric, and semiparametric methods." Diss., Virginia Polytechnic Institute and State University, 1995. http://hdl.handle.net/10919/49937.

Full text
Abstract:
In obtaining a regression fit to a set of data, ordinary least squares regression depends directly on the parametric model formulated by the researcher. If this model is incorrect, a least squares analysis may be misleading. Alternatively, nonparametric regression (kernel or local polynomial regression, for example) has no dependence on an underlying parametric model, but instead depends entirely on the distances between regressor coordinates and the prediction point of interest. This procedure avoids the necessity of a reliable model, but in using no information from the researcher, may fit to irregular patterns in the data. The proper combination of these two regression procedures can overcome their respective problems. Considered is the situation where the researcher has an idea of which model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two methods of model robust regression are developed and compared in this context. These methods involve parametric fits to the data and nonparametric fits to either the data or residuals. The two fits are then combined in the most efficient proportions via a mixing parameter. Performance is based on bias and variance considerations.
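The mixing idea described above can be sketched in a few lines: an OLS fit and a Nadaraya-Watson kernel fit blended through a mixing parameter. This is a hedged illustration; the estimators studied in the dissertation, including the residual-based variant and the data-driven choice of the mixing parameter, are more refined.

```python
import numpy as np

def kernel_smooth(x, y, x0, h):
    # Nadaraya-Watson (kernel) estimate at x0 with a Gaussian kernel
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def model_robust_fit(x, y, lam, h):
    # parametric part: simple linear OLS fit
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    parametric = X @ beta
    # nonparametric part: kernel fit to the same data
    nonparametric = np.array([kernel_smooth(x, y, x0, h) for x0 in x])
    # mixing parameter lam in [0, 1] blends the two fits
    return (1.0 - lam) * parametric + lam * nonparametric
```

With lam = 0 the fit is pure parametric regression; with lam = 1 it is pure kernel regression; intermediate values trade bias against variance as the abstract describes.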
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Anderson, Joseph T. "Geometric Methods for Robust Data Analysis in High Dimension." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488372786126891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Owadally, Muhammud Asaad. "Robust electronic circuit design using evolutionary and Taguchi methods." Master's thesis, University of Cape Town, 1997. http://hdl.handle.net/11427/21761.

Full text
Abstract:
Bibliography: pages 80-81.
In engineering, there is a wide range of applications where genetic optimizers are used. The two genetic optimizers used in this thesis, namely Population Based Incremental Learning (PBIL) and Cross-generational selection, Heterogeneous crossover, Cataclysmic mutation (CHC), are tested on a series of circuit problems to find out whether robust electronic circuits can be built using evolutionary methods. The evolutionary algorithms were used to search the space of discrete component values from a range of manufactured preferred values to obtain robust electronic circuits. Parasitic effects were also modelled in the simulation to provide for a more realistic circuit.
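PBIL, one of the two optimizers mentioned, maintains a probability vector over bit values and nudges it toward the best sample of each generation. A minimal sketch on a toy bit-counting objective (illustrative only; the thesis applies the optimizer to discrete component values, not raw bits):

```python
import random

def pbil(fitness, n_bits, pop_size=20, lr=0.1, iters=200, seed=0):
    # Population Based Incremental Learning: sample bit strings from a
    # probability vector, then move the vector toward the best sample
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop_size)]
        best = max(samples, key=fitness)
        p = [pi + lr * (bi - pi) for pi, bi in zip(p, best)]
    return [round(pi) for pi in p]
```

For circuit design, each bit string would index preferred component values and the fitness would come from a circuit simulation including the parasitics mentioned above.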
APA, Harvard, Vancouver, ISO, and other styles
44

Hashem, Hussein Abdulahman. "Regularized and robust regression methods for high dimensional data." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9197.

Full text
Abstract:
Recently, variable selection in high-dimensional data has attracted much research interest. Classical stepwise subset selection methods are widely used in practice, but when the number of predictors is large these methods are difficult to implement. In these cases, modern regularization methods have become a popular choice as they perform variable selection and parameter estimation simultaneously. However, the estimation procedure becomes more difficult and challenging when the data suffer from outliers or when the assumption of normality is violated, such as in the case of heavy-tailed errors. In these cases, quantile regression is the most appropriate method to use. In this thesis we combine these two classical approaches to produce regularized quantile regression methods. Chapter 2 presents a comparative simulation study of regularized and robust regression methods when the response variable is continuous. In chapter 3, we develop a quantile regression model with a group lasso penalty for binary response data when the predictors have a grouped structure and when the data suffer from outliers. In chapter 4, we extend this method to the case of censored response variables. Numerical examples on simulated and real data are used to evaluate the performance of the proposed methods in comparison with other existing methods.
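Quantile regression, the building block of the proposed methods, replaces squared error with the asymmetric check (pinball) loss. A minimal sketch for an intercept-only model, showing that a tau-th sample quantile minimizes the loss (illustrative; the thesis adds lasso and group-lasso penalties on the regression coefficients):

```python
def pinball_loss(residuals, tau):
    # check (pinball) loss: tau * r if r >= 0 else (tau - 1) * r
    return sum(tau * r if r >= 0 else (tau - 1.0) * r for r in residuals)

def best_constant(y, tau):
    # among the observed values, the minimizer of the total pinball
    # loss is a tau-th sample quantile
    return min(y, key=lambda c: pinball_loss([yi - c for yi in y], tau))
```

The median case (tau = 0.5) illustrates the robustness argument: unlike the mean under squared loss, it is unmoved by a single heavy-tailed observation.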
APA, Harvard, Vancouver, ISO, and other styles
45

Anderson, Cynthia 1962. "A Comparison of Five Robust Regression Methods with Ordinary Least Squares: Relative Efficiency, Bias and Test of the Null Hypothesis." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc5808/.

Full text
Abstract:
A Monte Carlo simulation was used to generate data for a comparison of five robust regression estimation methods with ordinary least squares (OLS) under 36 different outlier data configurations. Two of the robust estimators, Least Absolute Value (LAV) estimation and MM estimation, are commercially available. Three author-modified variations on MM were also included (MM1, MM2, and MM3). Design parameters that were varied include sample size (n=60 and n=180), number of independent predictor variables (2, 3 and 6), outlier density (0%, 5% and 15%) and outlier location (2sx,2sy; 8sx,8sy; 4sx,8sy; and 8sx,4sy). Criteria on which the regression methods were measured are relative efficiency, bias, and a test of the null hypothesis. Results indicated that MM2 was the best performing robust estimator on relative efficiency. The best performing estimator on bias was MM1. The best performing regression method on the test of the null hypothesis was MM2. Overall, the MM-type robust regression methods outperformed OLS and LAV on relative efficiency, bias, and the test of the null hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
46

Sharda, Bikram. "Robust manufacturing system design using petri nets and bayesian methods." Texas A&M University, 2008. http://hdl.handle.net/1969.1/85935.

Full text
Abstract:
Manufacturing system design decisions are costly and involve significant investment in terms of allocation of resources. These decisions are complex, due to uncertainties related to uncontrollable factors such as processing times and part demands. Designers often need to find a robust manufacturing system design that meets certain objectives under these uncertainties. Failure to find a robust design can lead to expensive consequences in terms of lost sales and high production costs. In order to find a robust design configuration, designers need accurate methods to model various uncertainties and efficient ways to search for feasible configurations. The dissertation work uses a multi-objective Genetic Algorithm (GA) and Petri net based modeling framework for a robust manufacturing system design. The Petri nets are coupled with Bayesian Model Averaging (BMA) to capture uncertainties associated with uncontrollable factors. BMA provides a unified framework to capture model, parameter and stochastic uncertainties associated with representation of various manufacturing activities. The BMA based approach overcomes limitations associated with uncertainty representation using classical methods presented in literature. Petri net based modeling is used to capture interactions among various subsystems, operation precedence and to identify bottleneck or conflicting situations. When coupled with Bayesian methods, Petri nets provide accurate assessment of manufacturing system dynamics and performance in presence of uncertainties. A multi-objective Genetic Algorithm (GA) is used to search manufacturing system designs, allowing designers to consider multiple objectives. The dissertation work provides algorithms for integrating Bayesian methods with Petri nets. Two manufacturing system design examples are presented to demonstrate the proposed approach. 
The results obtained using Bayesian methods are compared with classical methods and the effect of choosing different types of priors is evaluated. In summary, the dissertation provides a new, integrated Petri net based modeling framework coupled with BMA based approach for modeling and performance analysis of manufacturing system designs. The dissertation work allows designers to obtain accurate performance estimates of design configurations by considering model, parameter and stochastic uncertainties associated with representation of uncontrollable factors. Multi-objective GA coupled with Petri nets provide a flexible and time saving approach for searching and evaluating alternative manufacturing system designs.
APA, Harvard, Vancouver, ISO, and other styles
47

Uysal, Selver Derya [Verfasser]. "Three Essays on Doubly Robust Estimation Methods / Selver Derya Uysal." Konstanz : Bibliothek der Universität Konstanz, 2012. http://d-nb.info/1033059943/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bängtsson, Erik. "Robust preconditioned iterative solution methods for large-scale nonsymmetric problems /." Uppsala : Department of Information Technology, Uppsala University, 2005. http://www.it.uu.se/research/reports/lic/2005-006/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Choo, Wei-Chong. "Volatility forecasting with exponential weighting, smooth transition and robust methods." Thesis, University of Oxford, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489421.

Full text
Abstract:
This thesis focuses on the forecasting of volatility in financial returns. Our first main contribution is the introduction of two new approaches for combining volatility forecasts. One approach involves the use of discounted weighted least squares. The second proposed approach is smooth transition (ST) combining, which allows the combining weights to change gradually and smoothly over time in response to changes in suitably chosen transition variables.
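One simple way to realize discounted (exponentially weighted) forecast combining is to weight each model inversely to its discounted squared forecast error, so that recent performance counts more than old performance. This sketch is a generic illustration under that assumption, not the thesis's discounted weighted least squares estimator:

```python
def discounted_weights(error_series, delta=0.95):
    # weight each forecaster inversely to its discounted squared error;
    # the discount factor delta < 1 makes recent errors count more
    scores = []
    for errs in error_series:
        horizon = len(errs)
        scores.append(sum(delta ** (horizon - 1 - t) * e * e
                          for t, e in enumerate(errs)))
    inverse = [1.0 / (s + 1e-12) for s in scores]
    total = sum(inverse)
    return [v / total for v in inverse]
```

Smooth transition combining would instead make these weights an explicit smooth function of a transition variable, as the abstract describes.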
APA, Harvard, Vancouver, ISO, and other styles
50

Konya, Iuliu Vasile [Verfasser]. "Adaptive Methods for Robust Document Image Understanding / Iuliu Vasile Konya." Bonn : Universitäts- und Landesbibliothek Bonn, 2013. http://d-nb.info/1044869917/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles