Dissertations / Theses on the topic 'Linear estimation problems'

Consult the top 42 dissertations / theses for your research on the topic 'Linear estimation problems.'

1

Edlund, Ove. "Solution of linear programming and non-linear regression problems using linear M-estimation methods." Luleå, 1999. http://epubl.luth.se/1402-1544/1999/17/index.html.

2

Pieropan, Mirko. "Expectation Propagation Methods for Approximate Inference in Linear Estimation Problems." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2918002.

3

Kaperick, Bryan James. "Diagonal Estimation with Probing Methods." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90402.

Abstract:
Probing methods for trace estimation of large, sparse matrices have been studied for several decades. In recent years, work has begun to extend these techniques to estimate the diagonal entries of such systems directly. We extend existing analysis of trace estimators to the corresponding diagonal estimators, propose a new class of deterministic diagonal estimators well-suited to parallel architectures (with heuristic arguments for the design choices in their construction), and conclude with numerical results on diagonal estimation and ordering problems, demonstrating the strengths of our newly developed methods alongside existing ones.
Master of Science
In the past several decades, as computational resources have increased, a recurring problem has been estimating certain properties of very large linear systems (matrices containing real or complex entries). One particularly important quantity is the trace of a matrix, defined as the sum of the entries along its diagonal. In this thesis, we explore a problem that has only recently been studied: estimating the diagonal entries of a particular matrix explicitly. For these methods to be computationally more efficient than existing approaches, with favorable convergence properties, we require the matrix in question to be very large and sparse (a majority of its entries are zero), with its largest-magnitude entries clustered on and near the diagonal. In fact, this thesis focuses on a class of methods called probing methods, which are particularly efficient when the matrix is not known explicitly but can only be accessed through matrix-vector multiplications with arbitrary vectors. Our contributions are new analysis of these diagonal probing methods, extending the heavily studied trace estimation problem; new applications for which probing methods are a natural choice for diagonal estimation; and a new class of deterministic probing methods with favorable properties for the large parallel computing architectures that become ever more necessary as problem sizes grow beyond the scope of single-processor architectures.
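The matvec-only access model described in this abstract can be illustrated with the classical Hutchinson-style stochastic estimator, a simpler randomized relative of the probing methods the thesis studies. The sketch below is not from the thesis itself; the function name and toy matrix are illustrative.

```python
import numpy as np

def estimate_diagonal(matvec, n, num_probes=200, rng=None):
    """Estimate diag(A) from matrix-vector products alone.

    For Rademacher probes v, E[v * (A @ v)] equals the diagonal of A,
    because the off-diagonal contributions v_i * A_ij * v_j average to zero.
    """
    rng = np.random.default_rng(rng)
    acc = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        acc += v * matvec(v)                 # elementwise v .* (A v)
    return acc / num_probes

# Toy target: a diagonally dominant matrix, accessed only through matvecs.
A = np.diag(np.arange(1.0, 6.0)) + 0.01 * np.ones((5, 5))
d = estimate_diagonal(lambda v: A @ v, n=5, num_probes=5000, rng=0)
```

The estimator never forms A; its accuracy improves as the off-diagonal mass shrinks, which is exactly the regime (large, sparse, diagonally clustered) the abstract describes.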
4

Schülke, Christophe. "Statistical physics of linear and bilinear inference problems." Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC058.

Abstract:
The recent development of compressed sensing has led to spectacular advances in the understanding of sparse linear estimation problems, as well as in algorithms to solve them. It has also triggered a new wave of developments in the related fields of generalized linear and bilinear inference problems. These problems have in common that they combine a linear mixing step with a nonlinear, probabilistic sensing step, producing indirect measurements of a signal of interest. Such a setting arises in problems such as medical or astronomical imaging. The aim of this thesis is to propose efficient algorithms for this class of problems and to perform their theoretical analysis. To this end, it uses belief propagation, thanks to which high-dimensional distributions can be sampled efficiently, making a Bayesian approach to inference tractable. The resulting algorithms undergo phase transitions that can be analyzed using the replica method, initially developed in the statistical physics of disordered systems. The analysis reveals phases in which inference is easy, hard, or impossible, corresponding to different energy landscapes of the problem. The main contributions of this thesis fall into three categories: first, the application of known algorithms to concrete problems, namely community detection, superposition codes, and an innovative imaging system; second, a new, efficient message-passing algorithm for blind sensor calibration, which could be used in signal processing for a large class of measurement systems; and third, a theoretical analysis of achievable performance in matrix compressed sensing and of instabilities in Bayesian bilinear inference algorithms.
5

Mattavelli, Marco. "Motion analysis and estimation : from ill-posed discrete linear inverse problems to MPEG-2 coding." Lausanne, 1997. http://library.epfl.ch/theses/?nr=1596.

Abstract:
Thesis, Engineering Sciences, EPF Lausanne, No. 1596, 1997, Communication Systems Section. Examiner: M. Kunt; co-examiners: B. Macq, D. Mlynek, F. Pellandini.
6

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Abstract:
This thesis applies methods from the statistical physics of disordered systems, and from inference, to problems in signal processing and coding theory, more precisely to sparse linear estimation problems. The main tools are graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as state evolution analysis in the signal processing context) for its theoretical analysis. We also use the replica method of the statistical physics of disordered systems, which associates with the studied problems a cost function referred to as the potential or free entropy in physics. It allows one to predict the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements available about the signal: inference can be typically easy, hard, or impossible. We will see that the hard phase corresponds to a regime where the actual solution coexists with another, unwanted solution of the message-passing equations. In this phase, the unwanted solution is a metastable state, not the true equilibrium. This phenomenon can be linked to supercooled water, blocked in the liquid state below its freezing temperature. Thanks to this understanding of the algorithm's blocking phenomenon, we use a method that overcomes the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation is enough to create a crystal nucleus that then propagates through the whole medium thanks to the physical couplings between nearby atoms. The same process helps the algorithm find the signal, thanks to the introduction of a nucleus containing local information about the signal, which then spreads as a "reconstruction wave" similar to the crystal in the water.
After an introduction to statistical inference and sparse linear estimation, we introduce the necessary tools and then move to applications, divided into two parts. The signal processing part focuses on the compressed sensing problem, where we seek to infer a sparse signal from a small number of possibly noisy linear projections. We study in detail the influence of structured operators in place of the purely random matrices originally used in compressed sensing. These allow a substantial gain in computational complexity and memory allocation, necessary conditions for working with very large signals. We will see that the combined use of such operators with spatial coupling allows the implementation of a highly optimized algorithm reaching near-optimal performance. We also study the algorithm's behavior when reconstructing approximately sparse signals, a fundamental question for the application of compressed sensing to real-life problems. A direct application is studied via the reconstruction of images measured by fluorescence microscopy, and the reconstruction of "natural" images is considered as well. In coding theory, we study message-passing decoding performance for two distinct noisy channel models. A first scheme, where the signal to infer is the noise itself, is presented. The second, sparse superposition codes for the additive white Gaussian noise channel, is the first example of an error-correction scheme directly interpretable as a structured compressed sensing problem. Here we apply all the tools developed in this thesis, finally obtaining a very promising decoder that operates at transmission rates very close to the fundamental channel limit.
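The sparse linear estimation problem at the heart of this abstract (recover a sparse x from few linear projections y = Ax) can be illustrated with iterative soft-thresholding (ISTA), a simpler deterministic relative of the approximate message-passing algorithm the thesis actually uses. This is a generic sketch under assumed toy dimensions, not the thesis's algorithm.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=500):
    """ISTA for the lasso: min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = x + step * (A.T @ (y - A @ x))                        # gradient step
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
idx = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[idx] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
y = A @ x_true                             # noiseless linear projections
x_hat = ista(A, y, lam=0.01, iters=3000)
```

With m well below n, the sparsity prior is what makes the problem solvable; message-passing algorithms improve on this baseline in speed and in the phase boundary at which recovery succeeds.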
7

Krishnan, Rajet. "Problems in distributed signal processing in wireless sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1351.

8

Kontak, Max. "Novel algorithms of greedy-type for probability density estimation as well as linear and nonlinear inverse problems." Siegen: Universitätsbibliothek der Universität Siegen, 2018. http://d-nb.info/1157094554/34.

9

Pester, Cornelia. "A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities." Doctoral thesis, Berlin: Logos-Verlag, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2806614&prov=M&dok_var=1&dok_ext=htm.

10

Pester, Cornelia. "A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities." Doctoral thesis, Logos Verlag Berlin, 2005. https://monarch.qucosa.de/id/qucosa%3A18520.

Abstract:
This thesis is concerned with the finite element analysis and the a posteriori error estimation for eigenvalue problems for general operator pencils on two-dimensional manifolds. A specific application of the presented theory is the computation of corner singularities. Engineers use the knowledge of the so-called singularity exponents to predict the onset and the propagation of cracks. All results of this thesis are explained for two model problems, the Laplace and the linear elasticity problem, and verified by numerous numerical results.
11

Xiong, Jun. "Set-membership state estimation and application on fault detection." PhD thesis, Institut National Polytechnique de Toulouse - INPT, 2013. http://tel.archives-ouvertes.fr/tel-01068054.

Abstract:
Modeling dynamical systems requires accounting for uncertainties arising from the unavoidable presence of noise (measurement noise, process noise), from imperfect knowledge of certain disturbing phenomena, and from uncertainty in parameter values (tolerance specifications, aging). While some of these uncertainties lend themselves well to a statistical description, such as measurement noise, others are better characterized by bounds, with no further attributes. Motivated by these observations, this thesis addresses the problem of integrating statistical and bounded-error uncertainties for discrete-time linear systems. Starting from the Interval Kalman Filter (IKF) developed in [Chen 1997], we propose significant improvements based on recent constraint-propagation and set-inversion techniques which, unlike the mechanisms used by the IKF, yield a guaranteed result while controlling the pessimism of interval analysis. The resulting algorithm is denoted iIKF. The iIKF has the same recursive structure as the classical Kalman filter and delivers an enclosure of all the optimal estimates and all possible covariance matrices. The earlier IKF algorithm sidesteps the inversion of interval matrices, at the cost of losing possible solutions. For the iIKF, we propose an original guaranteed method for interval matrix inversion that couples the SIVIA algorithm (Set Inversion Via Interval Analysis) with a set of constraint-propagation problems. In addition, several constraint-propagation mechanisms are implemented to limit the overestimation caused by interval propagation within the recursive structure of the filter.
A fault-detection algorithm based on the iIKF is proposed, implementing a semi-closed-loop strategy that stops feeding the filter with measurements corrupted by a fault as soon as the fault is detected. Through various examples, the advantages of the iIKF are illustrated and the effectiveness of the fault-detection algorithm is demonstrated.
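As a point of reference, the classical point-valued Kalman recursion that the iIKF encloses with intervals can be sketched as follows. This is the standard textbook filter, not the interval algorithm itself; the toy constant-signal model below is illustrative.

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the classical (point-valued) Kalman filter.

    The iIKF keeps this recursive structure but replaces every quantity
    with a guaranteed interval enclosure.
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: noisy repeated measurements of a constant scalar equal to 5.
x, P = np.zeros(1), np.eye(1)
F = H = np.eye(1)
Q, R = np.zeros((1, 1)), np.eye(1)
for _ in range(100):
    x, P = kalman_step(x, P, np.array([5.0]), F, H, Q, R)
```

After many updates the estimate converges toward the true constant and the covariance shrinks; the interval version instead propagates bounds on x and P through the same equations.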
12

Havet, Antoine. "Estimation de la loi du milieu d'une marche aléatoire en milieu aléatoire." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX033/document.

Abstract:
Introduced in the 1960s, the model of the random walk in an i.i.d. environment on the integers (or RWRE) has only recently attracted interest in the statistical community. Various works have focused in particular on estimating the environment distribution from a single trajectory of the RWRE. This thesis extends those advances and offers new approaches to the problem. First, we consider the estimation problem from a frequentist point of view. When the RWRE is transient to the right or recurrent, we build the first nonparametric estimator of the density of the environment distribution and obtain an upper bound on the associated risk in the infinite norm. Then, we consider the estimation problem from a Bayesian perspective. When the RWRE is transient to the right, we prove the posterior consistency of the Bayesian estimator of the environment distribution. The main mathematical difficulty of the thesis was developing the tools necessary for the proof of Bayesian consistency. For this purpose, we demonstrate a quantitative version of a McDiarmid-type concentration inequality for Markov chains. We also study the return time to 0 of a branching process with immigration in a random environment (BPIRE). We show the existence of a finite exponential moment uniformly valid over a class of BPIRE. Since a BPIRE is a Markov chain, this result then makes explicit the dependence of the constants in the concentration inequality on the characteristics of the BPIRE.
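The model itself is easy to simulate: draw an i.i.d. environment, then walk in it. The sketch below is illustrative only; the Beta environment law and the parameter values are assumptions, not the thesis's setting. It produces the kind of single trajectory from which the estimators discussed above are built.

```python
import numpy as np

def simulate_rwre(n_steps, alpha=3.0, beta=1.0, rng=None):
    """Simulate a nearest-neighbour random walk in an i.i.d. random environment.

    Each site x carries an independent right-step probability
    omega[x] ~ Beta(alpha, beta); conditionally on the environment, the
    walker steps +1 with probability omega[x] and -1 otherwise.  The
    statistical problem is to recover this law from one observed trajectory.
    """
    rng = np.random.default_rng(rng)
    # Environment on sites -n_steps..n_steps: the walk cannot leave this range.
    omega = rng.beta(alpha, beta, size=2 * n_steps + 1)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < omega[x + n_steps] else -1
        path.append(x)
    return np.array(path), omega

path, omega = simulate_rwre(2000, rng=0)
```

With alpha > beta the mean right-step probability exceeds 1/2, so this instance is transient to the right, the regime in which both the frequentist and the Bayesian results above apply.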
13

Omar, Oumayma. "Sur la résolution des problèmes inverses pour les systèmes dynamiques non linéaires. Application à l’électrolocation, à l’estimation d’état et au diagnostic des éoliennes." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENT083/document.

Abstract:
This thesis mainly concerns the resolution of dynamic inverse problems involving nonlinear dynamical systems. A set of techniques based on trains of past measurements saved on a sliding window was developed. First, the measurements are used to generate a family of graphical signatures, which serve as a classification tool to discriminate between different values of the variables to be estimated for a given nonlinear system. This technique was applied to solve two problems: the electrolocation problem of a robot endowed with an electric sense, and the problem of state estimation in nonlinear dynamical systems. Besides these two applications, receding-horizon inversion techniques dedicated to the fault-diagnosis problem of a wind turbine, proposed as an international benchmark, were developed. These techniques are based on the minimization of quadratic criteria derived from knowledge-based models.
14

Cioaca, Alexandru George. "A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.

Abstract:
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
Ph. D.
15

Gaspar, Jonathan. "Fluxmétrie et caractérisation thermiques instationnaires des dépôts des composants face au plasma du Tokamak JET par techniques inverses." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4739/document.

Abstract:
This work deals with the successive resolution of two inverse heat transfer problems: the estimation of the surface heat flux on a material, and then of the equivalent thermal conductivity of a layer deposited on that material's surface. The direct formulation is two-dimensional, orthotropic (real geometry of a composite material), unsteady, and nonlinear, and its equations are solved by finite elements. The studied materials are plasma-facing components (carbon-carbon composite tiles) from the JET tokamak. The sought heat flux density varies with time and one spatial dimension. The surface layer's conductivity varies spatially and can also vary with time during the experiment (all other thermophysical properties are temperature-dependent). The two inverse problems are solved by the conjugate gradient method, with the adjoint-state method used for the exact gradient calculation. The experimental data used for the first inverse problem (surface heat flux estimation) is the thermogram provided by an embedded thermocouple. The second inverse problem uses the space and time variations of the surface temperature of the unknown layer (infrared thermography) to identify its conductivity. Confidence measures for the estimated quantities are computed by a Monte Carlo approach. The methods developed during this work aid understanding of plasma-wall interaction dynamics and of the kinetics of carbon layer formation on the components, and will assist the design of components for future machines (WEST, ITER).
APA, Harvard, Vancouver, ISO, and other styles
16

Martins, André Christóvão Pio. "Estimativa do conjunto atrator e da área de atração para o problema de Lure estendido utilizando LMI." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-03022016-161753/.

Full text
Abstract:
The stability analysis of nonlinear systems arises in several engineering fields. Usually it consists in determining stable attractor sets and their respective regions of attraction, and methods based on Lyapunov's method provide estimates of these sets; however, they involve a non-systematic search for auxiliary functions called Lyapunov functions. This work presents a systematic procedure, based on Lyapunov's method, to estimate attractor sets and their associated regions of attraction for a class of nonlinear systems, called here the extended Lure problem: systems that can be written in the form of the Lure problem but whose nonlinear function may violate the sector condition around the origin. The procedure is based on an extension of LaSalle's invariance principle and uses the generic Lyapunov functions of the Lure problem to estimate the attractor set and its region of attraction. The parameters of the Lyapunov functions are obtained by solving an optimization problem that can be cast as a set of linear matrix inequalities (LMIs).
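The LMI step above searches for Lyapunov-function parameters by convex optimisation. As a hedged simplification (not the thesis procedure, which handles nonlinear Lure-type systems): for a stable *linear* system the feasibility problem A'P + PA < 0, P > 0 collapses to a Lyapunov matrix equation, solvable directly with SciPy.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable linear system x' = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q for P; for a Hurwitz A the unique solution is
# symmetric positive definite, so V(x) = x^T P x is a Lyapunov function
# certifying asymptotic stability of the origin.
P = solve_continuous_lyapunov(A.T, -Q)

assert np.allclose(A.T @ P + P @ A, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite
```

In the LMI setting the equation is relaxed to an inequality and extra structure (the Lure nonlinearity and sector bounds) enters as additional matrix constraints, which is what makes the search systematic.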
APA, Harvard, Vancouver, ISO, and other styles
17

Wikström, Gunilla. "Computation of Parameters in some Mathematical Models." Doctoral thesis, Umeå University, Computing Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-565.

Full text
Abstract:

In computational science it is common to describe dynamic systems by mathematical models in the form of differential or integral equations. These models may contain parameters that have to be computed for the model to be complete. For the special type of ordinary differential equations studied in this thesis, the resulting parameter estimation problem is a separable nonlinear least squares problem with equality constraints. This problem can be solved by iteration, but due to complicated computations of derivatives and the existence of several local minima, so-called short-cut methods may be an alternative. These methods are based on simplified versions of the original problem. An algorithm, called the modified Kaufman algorithm, is proposed that takes the separability into account. Moreover, different kinds of discretizations and formulations of the optimization problem are discussed, as well as the effect of ill-conditioning.

Computation of parameters often includes, as a subproblem, the solution of a linear system of equations Ax = b. The corresponding pseudoinverse solution depends on the properties of the matrix A and the vector b. The singular value decomposition of A can be used to construct error propagation matrices, and with these it is possible to investigate how changes in the input data affect the solution x. Theoretical error bounds based on condition numbers indicate the worst case, but experimental error analysis also provides information about the effect of a more limited set of perturbations and is in that sense more realistic. It is shown how the effect of perturbations can be analyzed by a semi-experimental analysis that combines the theory of the error propagation matrices with an experimental error analysis based on randomly generated perturbations taking the structure of A into account.
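The SVD-based machinery described above can be illustrated concretely (with hypothetical small data, not taken from the thesis): the pseudoinverse solution of Ax = b is assembled from the SVD factors, and the condition number sigma_max/sigma_min bounds the worst-case amplification of a perturbation in b.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# Pseudoinverse solution x = A^+ b via the SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

# Condition number bounds the relative error: ||dx||/||x|| <= cond * ||db||/||b||
cond = s[0] / s[-1]
db = 1e-6 * np.random.default_rng(0).standard_normal(3)
x_pert = Vt.T @ ((U.T @ (b + db)) / s)
rel_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
bound = cond * np.linalg.norm(db) / np.linalg.norm(b)
assert rel_err <= bound * (1 + 1e-9)
```

A semi-experimental analysis, as in the thesis, replaces the single random `db` above with many structured perturbations and observes the empirical spread of `rel_err` against the theoretical worst case.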

APA, Harvard, Vancouver, ISO, and other styles
18

Schinzinger, Edo [Verfasser]. "Credibility estimation in insurance data: generalized linear models and evolutionary modeling / Edo Schinzinger." Ulm : Universität Ulm. Fakultät für Mathematik und Wirtschaftswissenschaften, 2015. http://d-nb.info/1074196236/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Iossaqui, Juliano Gonçalves 1982. "Técnicas não lineares de controle e filtragem aplicadas ao problema de rastreamento de trajetórias de robôs móveis com deslizamento longitudinal das rodas." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/262930.

Full text
Abstract:
Advisor: Juan Francisco Camino dos Santos
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
This thesis deals with the trajectory-tracking control problem of nonholonomic mobile robots with longitudinal slip of the wheels. The proposed control strategies are designed using a kinematic model and a dynamic model that treat the longitudinal slip of the wheels as unknown parameters. The first strategy is an adaptive controller based on a kinematic model that uses the wheel angular velocities as control input; these velocities are provided by a kinematic control law that uses the unknown slip parameters estimated by an adaptation rule. The second strategy is an adaptive controller based on a simplified dynamic model that uses the thrust forces applied at the center of the wheels as control input; the control law providing these forces is designed by applying the backstepping technique to a reduced dynamic model obtained with the inverse-dynamics method. The unknown longitudinal slip parameters required by the inverse-dynamics method are estimated by an adaptation rule. The unscented Kalman filter is also used to estimate the unknown slip parameters, and these estimates are used, in the same way as those obtained by the adaptation rule, in the laws that provide the wheel angular velocities and thrust forces. The main difference between the proposed strategies, based on adaptive control theory and on filtering theory, lies in the technique used to estimate the slip parameters. In the case of the adaptive strategies, the stability of the closed-loop system is guaranteed by Lyapunov theory. Numerical simulations show the performance of the proposed control strategies in terms of the posture error of the robot for different wheel-slip profiles.
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
20

EUVRARD, GUILLAUME. "Estimation d'une regression non lineaire par reseaux de neurones; application a un probleme de robotique mobile." Paris 6, 1993. http://www.theses.fr/1993PA066087.

Full text
Abstract:
This thesis links connectionist (neural-network) learning to the problem of estimating a regression function from examples. First, connectionist learning is described, and we show that it tends to estimate the regression function. The remainder of the thesis is devoted to the problem of estimating such a function: we give the theoretical limits inherent to this problem and the main solution algorithms, and we compare some of these algorithms on data from a mobile-robot simulator.
APA, Harvard, Vancouver, ISO, and other styles
21

Iwaza, Lana. "Joint Source-Network Coding & Decoding." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112048/document.

Full text
Abstract:
While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Coding operations are performed in a Galois field of fixed size q, and decoding only involves a Gaussian elimination with the received network-coded packets. However, in practical wireless environments, NC is susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources: at best, some packets can be recovered, while in the worst case the receiver is unable to recover any of them.
In this thesis, we propose joint source-network coding and decoding schemes with the purpose of providing an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation comes from the fact that source redundancy can be exploited at the decoder in order to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e. already existing, or artificial, i.e. externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating the descriptions. The first generates multiple descriptions via a real-valued frame expansion applied at the source before quantization; data recovery is then achieved via the solution of a mixed-integer linear problem. The second uses a correlating transform in some Galois field to generate the descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes is multicasting or broadcasting data towards mobile terminals experiencing different channel conditions; modeling the channel as a binary symmetric channel (BSC), we study the effect on the decoding quality for both proposed schemes and compare their performance with a traditional NC scheme. Concerning natural redundancy, a typical scenario is a wireless sensor network in which geographically distributed sources capture spatially correlated measures. We propose a scheme that exploits this spatial redundancy to provide an estimate of the transmitted measurement samples via the solution of an integer quadratic problem, and the obtained reconstruction quality is compared with that of a classical NC scheme.
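Decoding by Gaussian elimination over a Galois field, as described above, can be sketched in the simplest case GF(2). This is a toy illustration with made-up packets and coefficients, not the thesis implementation:

```python
import numpy as np

def gf2_solve(C, Y):
    """Solve C X = Y over GF(2) by Gauss-Jordan elimination."""
    C = C.copy() % 2
    Y = Y.copy() % 2
    n = C.shape[0]
    for col in range(n):
        pivot = next(r for r in range(col, n) if C[r, col])  # find a 1 in this column
        C[[col, pivot]] = C[[pivot, col]]                    # row swap
        Y[[col, pivot]] = Y[[pivot, col]]
        for r in range(n):
            if r != col and C[r, col]:                       # XOR = add/subtract in GF(2)
                C[r] ^= C[col]
                Y[r] ^= Y[col]
    return Y

rng = np.random.default_rng(1)
packets = rng.integers(0, 2, size=(3, 8))   # 3 source packets of 8 bits each

# Coefficient matrix applied by the network nodes (full rank over GF(2)).
coeffs = np.array([[1, 1, 0],
                   [0, 1, 1],
                   [1, 1, 1]])
coded = coeffs @ packets % 2                # network-coded packets

decoded = gf2_solve(coeffs, coded)
assert np.array_equal(decoded, packets)
```

When fewer independent coded packets arrive than sources, this elimination stalls with a rank-deficient system, which is precisely the regime where the joint source-network decoding schemes above exploit source redundancy instead.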
APA, Harvard, Vancouver, ISO, and other styles
22

SOUHAIL, Hicham. "Schema volumes finis : Estimation d'erreur a posteriori hierarchique par elements finis mixtes. Resolution de problemes d'elasticite non-linearie." Phd thesis, Ecole Centrale de Lyon, 2004. http://tel.archives-ouvertes.fr/tel-00005418.

Full text
Abstract:
Part 1 concerns numerical analysis. Starting from the mixed-finite-element interpretation of classical finite-volume schemes, a posteriori error estimation is analysed within the hierarchy of Raviart-Thomas elements, and a computable estimator is made explicit for these finite-volume schemes.
Part 2 introduces, first on a rectangular mesh and then on a structured mesh, a family of finite-volume schemes of finite-difference type. Numerical tests on model problems show that the order predicted by the analysis can be attained.
Part 3 applies these finite-volume schemes to the numerical simulation of the behaviour of a rubber block in the presence of a finite crack: a compressible hyperelastic material under large deformations, with different stress tensors, quasi-incompressible tests and damage simulations.
APA, Harvard, Vancouver, ISO, and other styles
23

Marnitz, Philipp [Verfasser], Axel [Akademischer Betreuer] Munk, Russell [Akademischer Betreuer] Luke, and Thorsten [Akademischer Betreuer] Hohage. "Statistical multiresolution estimators in linear inverse problems - foundations and algorithmic aspects / Philipp Marnitz. Gutachter: Axel Munk ; Russell Luke ; Thorsten Hohage. Betreuer: Axel Munk." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2010. http://d-nb.info/1043515356/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cherfils, Laurence. "Méthode de cheminement adaptative pour les problèmes semi-linéaires dépendant d'un paramètre." Université Joseph Fourier (Grenoble), 1996. http://www.theses.fr/1996GRE10105.

Full text
Abstract:
This work develops a path-following (continuation) method for the bifurcation problems that arise when a parameter-dependent semilinear PDE is discretised with P1 finite elements. To obtain a method robust enough to handle problems whose solutions exhibit boundary layers or singularities, we combine the continuation method introduced by E. L. Allgower and K. Georg with adaptive finite-element techniques and with a parallel implementation on a network of workstations, so as to reduce the memory and computing time needed to evaluate an entire branch of accurate solutions. For one-dimensional problems, experiments are carried out to improve the solution quality by moving the mesh nodes, thereby avoiding refinement. Finally, on an example where the singularity of the solutions is due to the geometry of the domain on which the PDE is posed, we show how a mesh "adapted to the singularity" compensates for the lack of regularity: optimal convergence in the H1 norm of the approximate solution branches to the exact ones is obtained.
APA, Harvard, Vancouver, ISO, and other styles
26

Shikhar. "COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194741.

Full text
Abstract:
Use of imaging systems for performing various situational-awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements, without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and in the estimation of temporal object-scene changes utilizing the compressive-imaging paradigm. We develop these two ideas in parallel. In the first case we show an FS imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present an FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements. In scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are non-adaptive. Second, we develop closed-form and iterative techniques for estimating the difference images. We present results to show the efficacy of these techniques and discuss the advantages of each.
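A toy illustration of difference-image estimation from compressive measurements (hypothetical numbers, not the dissertation's method; the oracle-known change support below is a deliberate simplification, where a practical compressive sensing pipeline would recover the support with a sparse solver such as OMP or l1 minimisation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 20                        # scene pixels, compressive measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix

x1 = rng.standard_normal(n)          # scene at time 1
delta = np.zeros(n)                  # sparse temporal change
support = [5, 17, 40]
delta[support] = [1.0, -2.0, 0.5]
x2 = x1 + delta                      # scene at time 2

# Difference of the compressive measurements observes Phi @ delta directly,
# without reconstructing either full scene.
y_diff = Phi @ x2 - Phi @ x1

# Closed-form least-squares estimate restricted to the (assumed known) support.
est = np.zeros(n)
est[support] = np.linalg.lstsq(Phi[:, support], y_diff, rcond=None)[0]
assert np.allclose(est, delta)
```

With m much smaller than n, the difference image is recoverable only because it is sparse; this is the sense in which the sensing matrix design and the estimation techniques in the abstract interact.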
APA, Harvard, Vancouver, ISO, and other styles
27

Marushkevych, Dmytro. "Asymptotic study of covariance operator of fractional processes : analytic approach with applications." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1010/document.

Full text
Abstract:
Eigenproblems frequently arise in the theory and applications of stochastic processes, but only a few admit explicit solutions; those that do are usually solved by reduction to the generalized Sturm-Liouville theory for differential operators. More general eigenproblems are not solvable in closed form, and the subject of this thesis is the asymptotic spectral analysis of fractional Gaussian processes and its applications. In the first part, we develop a methodology for the spectral analysis of fractional-type covariance operators, corresponding to an important family of processes that includes the fractional Ornstein-Uhlenbeck process, the integrated fractional Brownian motion and the mixed fractional Brownian motion; we obtain accurate second-order asymptotic approximations for both the eigenvalues and the eigenfunctions. In Chapter 2 we consider the covariance eigenproblem for Gaussian bridges and show how the spectral asymptotics of a bridge can be derived from that of its base process, taking the fractional Brownian bridge as an example. In the final part we consider three representative applications of the developed theory: the filtering problem of fractional Gaussian signals in white noise, large-deviation properties of the maximum-likelihood drift-parameter estimator for the Ornstein-Uhlenbeck process driven by mixed fractional Brownian motion, and small-ball probabilities for fractional Gaussian processes.
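The covariance eigenproblem int_0^1 K(s,t) phi(t) dt = lambda * phi(s) is explicitly solvable only in special cases. As a hedged numerical check (standard Brownian motion, the H = 1/2 member of the fractional family, where the Sturm-Liouville reduction gives lambda_n = ((n - 1/2) * pi)^(-2)), a simple Nystrom discretisation reproduces these known values:

```python
import numpy as np

N = 1000
t = (np.arange(N) + 0.5) / N            # midpoint quadrature nodes on [0, 1]
K = np.minimum.outer(t, t)              # Brownian-motion covariance K(s,t) = min(s,t)
lam = np.linalg.eigvalsh(K / N)[::-1]   # quadrature weight 1/N; sort descending

exact = ((np.arange(1, 4) - 0.5) * np.pi) ** -2.0
assert np.allclose(lam[:3], exact, rtol=1e-2)
```

For fractional processes no such closed form exists, which is exactly why the thesis develops second-order asymptotic approximations instead of relying on discretisation alone.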
APA, Harvard, Vancouver, ISO, and other styles
28

Corrêa, Eduardo Dias. "Estimativas a priori para problemas não lineares de condução de calor, com o uso da transformada de Kirchhoff." Universidade do Estado do Rio de Janeiro, 2014. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7693.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents an a priori upper-bound estimate for the steady-state temperature distribution in a body with a temperature-dependent thermal conductivity. The discussion is carried out assuming linear boundary conditions (Newton's law of cooling) and a piecewise-constant thermal conductivity (when regarded as a function of the temperature). These estimates provide a powerful tool that may circumvent an expensive numerical simulation of a nonlinear heat-transfer problem whenever it suffices to know the highest temperature value. In such cases, the methodology proposed in this work is more effective than the usual approximations that assume the thermal conductivity and the heat sources to be constant.
APA, Harvard, Vancouver, ISO, and other styles
29

Iwaza, Lana. "Joint Source-Network Coding & Decoding." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00855787.

Full text
Abstract:
While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Network operations are performed in a specific Galois field of fixed size q. Decoding only involves a Gaussian elimination with the received network-coded packets. However, in practical wireless environments, NC might be susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case, the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes with the purpose of providing an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation comes from the fact that source redundancy can be exploited at the decoder in order to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e. already existing, or artificial, i.e. externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first technique consists in generating multiple descriptions via a real-valued frame expansion applied at the source before quantization. Data recovery is then achieved via the solution of a mixed-integer linear problem.
The second technique uses a correlating transform in some Galois field in order to generate descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes would be multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC), and we study the effect on the decoding quality for both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario would be a wireless sensor network, where geographically distributed sources capture spatially correlated measures. We propose a scheme that aims at exploiting this spatial redundancy, and provide an estimation of the transmitted measurement samples via the solution of an integer quadratic problem. The obtained reconstruction quality is compared with the one provided by a classical NC scheme.
APA, Harvard, Vancouver, ISO, and other styles
30

Přibyl, Bronislav. "Odhad pózy kamery z přímek pomocí přímé lineární transformace." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-412595.

Full text
Abstract:
This dissertation addresses the estimation of camera pose from correspondences between 3D and 2D lines, i.e. the Perspective-n-Line (PnL) problem. Attention is focused on scenarios with a large number of lines, which can be solved efficiently by methods exploiting a linear formulation of PnL. Until now, only methods working with correspondences between 3D points and 2D lines were known. Based on this observation, two new methods built on the Direct Linear Transformation (DLT) algorithm were proposed: DLT-Plücker-Lines, which works with correspondences between 3D and 2D lines, and DLT-Combined-Lines, which works with both 3D point to 2D line and 3D line to 2D line correspondences. In the latter case, the redundant 3D information is exploited to reduce the minimum number of required line correspondences to 5 and to improve the accuracy of the method. The proposed methods were thoroughly tested under various conditions, including simulated and real data, and compared with the best existing PnL methods. DLT-Combined-Lines achieves results better than or comparable to the best existing methods while being considerably fast. The dissertation also introduces a unified framework for describing DLT-based camera pose estimation methods; both proposed methods are defined within this framework.
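The linear-algebra core shared by DLT-style methods can be sketched as follows (an illustrative point-based camera resection, not the line-based algorithms of the thesis): each correspondence contributes rows to a homogeneous system M·p = 0, and the estimate is the right singular vector of M associated with the smallest singular value.

```python
# Sketch (illustrative, not the thesis code): classical point-based DLT
# camera resection. Line-based variants stack analogous rows built from
# 2D line equations; the SVD step is unchanged.
import numpy as np

def dlt_resection(X, x):
    """Estimate a 3x4 projection P from 3D points X (n,3) and images x (n,2)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        rows.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
        rows.append([0, 0, 0, 0] + Xh + [-v * c for c in Xh])
    # null vector of the stacked system = last right singular vector
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

# Toy check with a known camera and 6 points in general position
P_true = np.array([[800.0, 0, 320, 10], [0, 800.0, 240, -5], [0, 0, 1, 2]])
X = np.array([[0, 0, 5], [1, 0, 6], [0, 1, 7],
              [1, 1, 5], [2, 1, 8], [1, 2, 9]], float)
xh = (P_true @ np.c_[X, np.ones(6)].T).T
x = xh[:, :2] / xh[:, 2:]
P_est = dlt_resection(X, x)
P_est *= P_true[2, 3] / P_est[2, 3]   # remove the scale (and sign) ambiguity
print(np.allclose(P_est, P_true, atol=1e-4))  # True
```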
APA, Harvard, Vancouver, ISO, and other styles
31

Braga, José Ederson Melo. "Problemas variacionais de fronteira livre com duas fases e resultados do tipo Phragmén-Lindelöf regidos por equações elípticas não lineares singulares/degeneradas." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15623.

Full text
Abstract:
In this thesis we discuss recent results on the regularity and geometric properties of variational solutions of two-phase free boundary problems governed by singular/degenerate nonlinear elliptic equations. We also discuss Phragmén-Lindelöf type results for such equations, classifying those solutions in half spaces.
APA, Harvard, Vancouver, ISO, and other styles
32

Schorsch, Julien. "Contributions à l'estimation paramétrique des modèles décrits par les équations aux dérivées partielles." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00913579.

Full text
Abstract:
Systems described by partial differential equations belong to the class of dynamic systems involving functions of several variables, such as time and space. Already widely used for the mathematical modelling of physical and environmental phenomena, these systems play a growing role in control engineering. This expansion, driven by technological advances in sensors that ease data acquisition and by new environmental challenges, motivates the development of new research directions. One of these is the study of inverse problems, and in particular the parametric identification of partial differential equations. First, a detailed description of the different classes of systems described by these equations is presented, and the identification problems associated with them are raised. Emphasis is placed on the parametric estimation of linear equations, homogeneous or not, and on linear parameter-varying equations. A common feature of these identification problems is that the output measurements are noisy and sampled. To address this, two main types of tools were developed: spatio-temporal discretisation techniques to cope with the sampled nature of the data, and instrumental variable methods to handle measurement noise. The performance of these methods was assessed through numerical simulation protocols reproducing realistic physical and environmental phenomena, such as pollutant diffusion in a river.
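As a toy illustration of identifying a PDE parameter from sampled data (a plain least-squares fit on finite differences; the instrumental-variable machinery for measurement noise is beyond this sketch):

```python
# Sketch (assumptions, not the thesis method): estimating the diffusion
# coefficient theta in u_t = theta * u_xx from sampled snapshots by
# least squares on finite-difference approximations of the derivatives.
import numpy as np

theta_true, dx, dt = 0.1, 0.05, 0.001
x = np.arange(0, 1 + dx, dx)
u = np.sin(np.pi * x)                       # initial condition, zero at boundaries
snaps = [u.copy()]
for _ in range(200):                        # explicit-Euler forward simulation
    u[1:-1] += theta_true * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    snaps.append(u.copy())
U = np.array(snaps)

# regression of u_t on u_xx over all interior grid points and time steps
ut = (U[1:, 1:-1] - U[:-1, 1:-1]) / dt
uxx = (U[:-1, 2:] - 2 * U[:-1, 1:-1] + U[:-1, :-2]) / dx**2
theta_hat = (uxx.ravel() @ ut.ravel()) / (uxx.ravel() @ uxx.ravel())
print(round(theta_hat, 3))  # ≈ 0.1
```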
APA, Harvard, Vancouver, ISO, and other styles
33

Bringmann, Philipp. "Adaptive least-squares finite element method with optimal convergence rates." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22350.

Full text
Abstract:
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional consisting of the squared norms of the residuals of first-order systems of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh-refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh-size, which seemingly prevents a reduction under mesh-refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. Accordingly, a novel explicit residual-based a posteriori error estimator accomplishes the reduction property. Furthermore, a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis presents the generalisation of these techniques to three linear model problems, namely, the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
APA, Harvard, Vancouver, ISO, and other styles
34

Richardson, Alice. "Some problems in estimation in mixed linear models." Phd thesis, 1995. http://hdl.handle.net/1885/133644.

Full text
Abstract:
This thesis is concerned with the properties of classical estimators of the parameters in mixed linear models, the development of robust estimators, and the properties and uses of these estimators. The first chapter contains a review of estimation in mixed linear models, and a description of four data sets that are used to illustrate the methods discussed. In the second chapter, some results about the asymptotic distribution of the restricted maximum likelihood (REML) estimator of variance components are stated and proven. Some asymptotic results are also stated and proven for the associated weighted least squares estimator of fixed effects. Central limit theorems are obtained using elementary arguments with only mild conditions on the covariates in the fixed part of the model and without having to assume that the data are either normally or spherically symmetrically distributed. It is also shown that the REML and maximum likelihood estimators of variance components are asymptotically equivalent. Robust estimators are proposed in the third chapter. These estimators are M-estimators constructed by applying weight functions to the log-likelihood, the restricted log-likelihood or the associated estimating equations. These functions reduce the influence of outlying observations on the parameter estimates. Other suggestions for robust estimators are also discussed, including Fellner's method. It is shown that Fellner's method is a direct robustification of the REML estimating equations, as well as being a robust version of Harville's algorithm, which in turn is equivalent to the expectation-maximisation (EM) algorithm of Dempster, Laird and Rubin. The robust estimators are then modified in the fourth chapter to define bounded influence estimators, also known as generalised M or GM estimators in the linear regression model.
Outlying values of both the dependent variable and continuous independent variables are downweighted, creating estimators which are analogous to the GM estimators of Mallows and Schweppe. Some general results on the asymptotic properties of bounded influence estimators (of which maximum likelihood, REML and the robust methods of Chapter 3 are all special cases) are stated and proven. The method of proof is similar to that employed for the classical estimators in Chapter 2. Chapter 5 is concerned with the practical problem of selecting covariates in mixed linear models. In particular, a change of deviance statistic is proposed which provides an alternative to likelihood ratio test methodology and which can be applied in situations where the components of variance are estimated by REML. The deviance is specified by the procedure used to estimate the fixed effects and the estimated covariance matrix is held fixed across different models for the fixed effects. The distribution of the change of deviance is derived, and a robustification of the change of deviance is given. Finally, in Chapter 6 a simulation study is undertaken to investigate the asymptotic properties of the proposed estimators in samples of moderate size. The empirical influence function of some of the estimators is studied, as is the distribution of the change of deviance statistic. Issues surrounding bounded influence estimation when there are outliers in the independent variables are also discussed.
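The downweighting idea behind such M-estimators can be sketched for the simplest setting (ordinary linear regression with a Huber weight function, not the mixed-model estimators of the thesis; bounded-influence GM variants additionally downweight outlying covariate values):

```python
# Sketch (generic textbook method): a Huber-type M-estimator for linear
# regression, computed by iteratively reweighted least squares (IRLS).
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from plain LS
    for _ in range(iters):
        r = y - X @ beta
        # robust scale via the median absolute deviation
        s = np.median(np.abs(r - np.median(r))) / 0.6745 or 1.0
        # Huber weights: 1 for small residuals, c*s/|r| beyond the cutoff
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X,
                               np.sqrt(w) * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.normal(size=100)]
y = X @ [2.0, 3.0] + rng.normal(scale=0.1, size=100)
y[:5] += 50                        # gross outliers in the response
print(np.round(huber_irls(X, y), 1))   # close to [2. 3.]
```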
APA, Harvard, Vancouver, ISO, and other styles
35

Suliman, Mohamed Abdalla Elhag. "Regularization Techniques for Linear Least-Squares Problems." Thesis, 2016. http://hdl.handle.net/10754/609160.

Full text
Abstract:
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries.
Furthermore, the second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very fast to a significantly small value. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these functions and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second proposed COPRA method is applied to a large set of different real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
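The estimator that all such methods build on can be sketched in a few lines (textbook Tikhonov regularization, not the COPRA algorithms themselves, whose contribution is the data-driven choice of the regularization parameter):

```python
# Sketch (textbook method): the regularized least-squares estimate
# solves (A^T A + lam * I) x = A^T b. Algorithms such as COPRA differ
# in how the parameter lam is selected, not in this linear solve.
import numpy as np

def rls(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 20))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=50)   # near-collinear: ill-conditioned
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=50)

err_ls = np.linalg.norm(rls(A, b, 0.0) - x_true)     # plain least squares
err_rls = np.linalg.norm(rls(A, b, 1e-2) - x_true)   # regularized
print(err_rls < err_ls)   # regularization tames the noise amplification
```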
APA, Harvard, Vancouver, ISO, and other styles
36

BALLERINI, VERONICA. "Fisher's noncentral hypergeometric distribution and population size estimation problems." Doctoral thesis, 2021. http://hdl.handle.net/11573/1563206.

Full text
Abstract:
Fisher's noncentral hypergeometric distribution (FNCH) describes a biased urn experiment with independent draws of differently coloured balls, where each colour is associated with a different weight (Fisher (1935), Fog (2008a)). FNCH potentially suits many official statistics problems. However, the distribution has been underemployed in the statistical literature, mainly because of the computational burden of evaluating its probability mass function. Indeed, as the number of draws and the number of different categories in the population increase, any method involving evaluation of the likelihood becomes practically infeasible. In the first part of this work, we present a methodology to estimate the posterior distribution of the population size, exploiting both the possibility of including extra-experimental information and the computational efficiency of MCMC and ABC methods. The second part devotes particular attention to overcoverage, i.e., the possibility that one or more data sources erroneously include some out-of-scope units. After a critical review of the most recent literature, we present an alternative model of the latent erroneous counts in a capture-recapture framework, simultaneously addressing overcoverage and undercoverage problems. We show the utility of FNCH in this context, both in the posterior sampling process and in the elicitation of prior distributions. We rely on the PCI assumption of Zhang (2019) to include non-negligible prior information. Finally, we address model selection, which is not trivial in the framework of log-linear models when there are few (or even zero) degrees of freedom.
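For the simplest two-colour case the FNCH probability mass function is small enough to write down directly (a sketch of the definition, not the thesis methodology; the computational burden the abstract mentions arises with many colours and large counts):

```python
# Sketch: two-colour Fisher's noncentral hypergeometric pmf. With m1
# red and m2 white balls, n independent biased draws, and odds ratio w
# for red, P(X = x) is proportional to C(m1, x) * C(m2, n - x) * w**x.
from math import comb

def fnch_pmf(m1, m2, n, w):
    lo, hi = max(0, n - m2), min(n, m1)          # support of X
    raw = {x: comb(m1, x) * comb(m2, n - x) * w**x for x in range(lo, hi + 1)}
    z = sum(raw.values())                        # normalising constant
    return {x: p / z for x, p in raw.items()}

# w = 1 recovers the central hypergeometric distribution
pmf = fnch_pmf(m1=5, m2=5, n=4, w=1.0)
print(round(pmf[2], 4))  # C(5,2)*C(5,2)/C(10,4) = 100/210 ≈ 0.4762
```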
APA, Harvard, Vancouver, ISO, and other styles
37

Matić, Rada. "Estimation Problems Related to Random Matrix Ensembles." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-B406-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Abarin, Taraneh. "Second-order least squares estimation in regression models with application to measurement error problems." 2009. http://hdl.handle.net/1993/3126.

Full text
Abstract:
This thesis studies the Second-order Least Squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined and strong consistency and asymptotic normality are established. To overcome the numerical difficulties of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. Finite sample performances of the estimators in all of the studied models are investigated through simulation studies.
February 2009
APA, Harvard, Vancouver, ISO, and other styles
39

Sawlan, Zaid A. "Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties." Diss., 2018. http://hdl.handle.net/10754/629731.

Full text
Abstract:
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. For this purpose, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inferential approaches considered here. Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens. The second is related to inverse problems in linear PDEs. Both problems require the inference of unknown parameters given certain measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated given uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress that is computed from the solution of the linear elasticity equations.
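The calibration step can be illustrated with the simplest probabilistic S-N model (a hypothetical Basquin-type log-linear model on synthetic data, not the thesis's models or experiments): with Gaussian noise in the log domain, maximum likelihood reduces to ordinary least squares.

```python
# Sketch (hypothetical data, generic method): ML calibration of a
# simple probabilistic S-N model, log N = a + b * log S + eps, with
# Gaussian eps. The fit is ordinary least squares in log-log space.
import numpy as np

rng = np.random.default_rng(3)
S = np.array([200.0, 250, 300, 350, 400])        # stress amplitudes (MPa)
logN = 20.0 - 3.0 * np.log(S) + rng.normal(scale=0.1, size=5)  # synthetic lives

X = np.c_[np.ones(5), np.log(S)]
(a_hat, b_hat), res, *_ = np.linalg.lstsq(X, logN, rcond=None)
sigma2_hat = res[0] / 5 if res.size else 0.0     # ML estimate of the noise variance
print(round(a_hat, 1), round(b_hat, 1))
```

Competing models (different link functions, non-constant variance) would each be fitted this way and then ranked by information criteria, as the abstract describes.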
APA, Harvard, Vancouver, ISO, and other styles
40

Pester, Cornelia [Verfasser]. "A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities / vorgelegt von Cornelia Pester." 2006. http://d-nb.info/980933056/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Scoleri, Tony. "Fundamental numerical schemes for parameter estimation in computer vision." 2008. http://hdl.handle.net/2440/50726.

Full text
Abstract:
An important research area in computer vision is parameter estimation. Given a mathematical model and a sample of image measurement data, key parameters are sought to encapsulate geometric properties of a relevant entity. An optimisation problem is often formulated in order to find these parameters. This thesis presents an elaboration of fundamental numerical algorithms for estimating parameters of multi-objective models of importance in computer vision applications. The work examines ways to solve unconstrained and constrained minimisation problems from the viewpoints of theory, computational methods, and numerical performance. The research starts by considering a particular form of multi-equation constraint function that characterises a wide class of unconstrained optimisation tasks. Increasingly sophisticated cost functions are developed within a consistent framework, ultimately resulting in the creation of a new iterative estimation method. The scheme operates in a maximum likelihood setting and yields a near-optimal estimate of the parameters. Salient features of the method are that it has simple update rules and exhibits fast convergence. Then, to accommodate models with functional dependencies, two variants of this initial algorithm are proposed. These methods are improved again by reshaping the objective function in a way that presents the original estimation problem in a reduced form. This procedure leads to a novel algorithm with enhanced stability and convergence properties. To extend the capacity of these schemes to deal with constrained optimisation problems, several a posteriori correction techniques are proposed to impose the so-called ancillary constraints. This work culminates by giving two methods which can tackle ill-conditioned constrained functions. The combination of the previous unconstrained methods with these post-hoc correction schemes provides an array of powerful constrained algorithms.
The practicality and performance of the methods are evaluated on two specific applications. One is planar homography matrix computation and the other is trifocal tensor estimation. In the case of fitting a homography to image data, only the unconstrained algorithms are necessary. For the problem of estimating a trifocal tensor, significant work is done first on expressing sets of usable constraints, especially the ancillary constraints which are critical to ensure that the computed object conforms to the underlying geometry. Evidently, the post-correction schemes must be incorporated in the computational mechanism here. For both of these example problems, the performance of the unconstrained and constrained algorithms is compared to existing methods. Experiments reveal that the new methods match a state-of-the-art technique in accuracy but surpass it in execution speed.
Thesis (Ph.D.) - University of Adelaide, School of Mathematical Sciences, Discipline of Pure Mathematics, 2008
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Te-Wei, and 劉得偉. "An Inverse Vibration Problem of Estimating Coefficients of Stiffness and Damping for the Multiple Degrees of Freedom System with Non-linear External Forces." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/13939892173889638651.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Mold and Die Engineering
Academic year 99 (ROC calendar)
In this study, Beck's nonlinear estimation procedure coupled with the Runge-Kutta method is applied to solve inverse vibration problems for multiple-degree-of-freedom systems with nonlinear external forces. The sensitivity coefficients of Beck's procedure are computed with the fourth-order Runge-Kutta numerical method, which improves the accuracy of the scheme, accelerates the iteration convergence rate, and allows multiple unknown stiffness and damping coefficients to be estimated inversely. The original Beck procedure has first-order Taylor-series accuracy; this research amends it to fourth-order Taylor-series accuracy. Coupling Beck's nonlinear estimation procedure with the Runge-Kutta method to inversely solve for sets of unknown spring and damping coefficients is the main innovation of this study, and the resulting convergence rate, better than that of the original Beck procedure, is its main focus of development. The results show that, for three test cases with nonlinear forces, the inverse algorithm effectively improves on the original Beck procedure, with the convergence rate increased by up to 33.3%. Comparing the computed stiffness and damping coefficients with the exact solutions under assumed measurement errors of 1%, 5% and 10%, the actual percentage errors are below 0.9%, 4.5% and 9.2% respectively, and the actual error percentage is proportional to the measurement error.
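The forward solver underlying such an inverse scheme is classical fourth-order Runge-Kutta integration of the equations of motion; a single-degree-of-freedom sketch (illustrative parameter values, not the thesis code) looks like this:

```python
# Sketch (illustrative): RK4 integration of a spring-damper system
# m x'' + c x' + k x = F(t), the forward model whose repeated solution
# drives the inverse estimation of k and c.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

m, c, k = 1.0, 0.2, 4.0                          # assumed mass, damping, stiffness
force = lambda t: np.sin(t) ** 3                 # a nonlinear external force
def f(t, y):                                     # state y = [displacement, velocity]
    return np.array([y[1], (force(t) - c * y[1] - k * y[0]) / m])

y, h = np.array([1.0, 0.0]), 0.01
for i in range(1000):                            # integrate to t = 10
    y = rk4_step(f, i * h, y, h)
print(np.round(y, 3))                            # damped, bounded response
```

In the inverse problem, k and c are iteratively adjusted (via sensitivity coefficients) until the simulated response matches the measured one.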
APA, Harvard, Vancouver, ISO, and other styles