Dissertations / Theses on the topic 'Error approximation'

To see the other types of publications on this topic, follow the link: Error approximation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Error approximation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Liao, Qifeng. "Error estimation and stabilization for low order finite elements." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/error-estimation-and-stabilization-for-low-order-finite-elements(ba7fc33b-b154-404b-b608-fc8eeabd9e58).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Qi. "Multilevel adaptive radial basis function approximation using error indicators." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/38284.

Full text
Abstract:
In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where the approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed. In this thesis, the author describes a new adaptive algorithm for Radial Basis Function (RBF) interpolation which aims to assess the local approximation quality and adds or removes points as required to improve the error in the specified region. For multiquadric and Gaussian approximations, one has the flexibility of a shape parameter, which can be used to keep the condition number of the interpolation matrix to a moderate size. In this adaptive error indicator (AEI) method, an adaptive shape parameter is applied. Numerical results for test functions which appear in the literature are given in one, two, and three dimensions, to show that this method performs well. A turbine blade design problem from GE Power (Rugby, UK) is considered and the AEI method is applied to this problem. Moreover, a new multilevel approximation scheme is introduced in this thesis by coupling it with the adaptive error indicator. Preliminary numerical results from this Multilevel Adaptive Error Indicator (MAEI) approximation method are shown. These indicate that the MAEI is able to express the target function well. Moreover, it provides highly efficient sampling.
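The adaptive idea in this abstract can be illustrated with a small numpy sketch (our own construction, not the thesis's AEI algorithm): a 1-D multiquadric RBF interpolant, refined where a crude error indicator (the disagreement between interpolants built on the full and a thinned point set) is largest. The shape parameter is held fixed here, whereas the thesis adapts it.

```python
import numpy as np

def rbf_interpolant(centers, values, shape=1.0):
    """Multiquadric RBF interpolant phi(r) = sqrt(r^2 + c^2) in 1-D."""
    A = np.sqrt((centers[:, None] - centers[None, :]) ** 2 + shape**2)
    weights = np.linalg.solve(A, values)
    return lambda x: np.sqrt(
        (np.asarray(x)[:, None] - centers[None, :]) ** 2 + shape**2
    ) @ weights

f = lambda x: np.tanh(5 * x)          # target with a steep transition region
x = np.linspace(-1.0, 1.0, 7)
probe = np.linspace(-1.0, 1.0, 201)

# Crude indicator: disagreement between interpolants built on the full
# and a thinned point set; add a point where the disagreement peaks.
for _ in range(10):
    full = rbf_interpolant(x, f(x))
    thin = rbf_interpolant(x[::2], f(x[::2]))
    indicator = np.abs(full(probe) - thin(probe))
    # keep new points away from existing centres (conditioning guard)
    indicator[np.min(np.abs(probe[:, None] - x[None, :]), axis=1) < 0.02] = 0
    x = np.sort(np.append(x, probe[np.argmax(indicator)]))

final = rbf_interpolant(x, f(x))
max_err = np.max(np.abs(final(probe) - f(probe)))
```

The refinement concentrates new points around the steep region near the origin, where a uniform grid wastes samples.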
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Fang-Lun. "Error analysis and tractability for multivariate integration and approximation." HKBU Institutional Repository, 2004. http://repository.hkbu.edu.hk/etd_ra/515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jain, Aashish. "Error Visualization in Comparison of B-Spline Surfaces." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/35319.

Full text
Abstract:
Geometric trimming of surfaces results in a new mathematical description of the matching surface. This matching surface is required to closely resemble the remaining portion of the original surface. Typically, the approximation error in such cases is measured with a view to minimizing it. The data associated with the error between two matching surfaces is large and needs to be filtered into meaningful information. This research looks at suitable norms for achieving this data reduction or abstraction with a view to providing quantitative feedback about the approximation error. Also, the differences between geometric shapes are easily discerned by the human eye but are difficult to characterize or describe. Error visualization tools have been developed to provide effective visual inputs that the designer can interpret into meaningful information.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
5

Dziegielewski, Andreas von [Verfasser]. "High precision swept volume approximation with conservative error bounds / Andreas von Dziegielewski." Mainz : Universitätsbibliothek Mainz, 2012. http://d-nb.info/1029217343/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Grepl, Martin A. (Martin Alexander) 1974. "Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32387.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
Includes bibliographical references (p. 243-251).
Modern engineering problems often require accurate, reliable, and efficient evaluation of quantities of interest, evaluation of which demands the solution of a partial differential equation. We present in this thesis a technique for the prediction of outputs of interest of parabolic partial differential equations. The essential ingredients are: (i) rapidly convergent reduced-basis approximations - Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N selected points in parameter-time space; (ii) a posteriori error estimation - relaxations of the error-residual equation that provide rigorous and sharp bounds for the error in specific outputs of interest: the error estimates serve a priori to construct our samples and a posteriori to confirm fidelity; and (iii) offline-online computational procedures - in the offline stage the reduced-basis approximation is generated; in the online stage, given a new parameter value, we calculate the reduced-basis output and associated error bound. The operation count for the online stage depends only on N (typically small) and the parametric complexity of the problem; the method is thus ideally suited for repeated, rapid, reliable evaluation of input-output relationships in the many-query or real-time contexts. We first consider parabolic problems with affine parameter dependence and subsequently extend these results to nonaffine and certain classes of nonlinear parabolic problems. To this end, we introduce a collateral reduced-basis expansion for the nonaffine and nonlinear terms and employ an inexpensive interpolation procedure to calculate the coefficients for the function approximation - the approach permits an efficient offline-online computational decomposition even in the presence of nonaffine and highly nonlinear terms. Under certain restrictions on the function approximation, we also introduce rigorous a posteriori error estimators for nonaffine and nonlinear problems. Finally, we apply our methods to the solution of inverse and optimal control problems. While the efficient evaluation of the input-output relationship is essential for the real-time solution of these problems, the a posteriori error bounds let us pursue a robust parameter estimation procedure which takes into account the uncertainty due to measurement and reduced-basis modeling errors explicitly (and rigorously). We consider several examples: the nondestructive evaluation of delamination in fiber-reinforced concrete, the dispersion of pollutants in a rectangular domain, the self-ignition of a coal stockpile, and the control of welding quality. Numerical results illustrate the applicability of our methods in the many-query contexts of optimization, characterization, and control.
by Martin A. Grepl.
Ph.D.
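The offline-online split described in this abstract can be sketched for a toy affinely parametrized linear system (our simplified stand-in; the thesis treats parabolic problems in time and adds rigorous error bounds, which this sketch omits):

```python
import numpy as np

# Toy affine system A(mu) u = f with A(mu) = A0 + mu * A1, standing in
# for a discretized parametrized PDE; all names here are illustrative.
rng = np.random.default_rng(3)
n = 200
M = rng.normal(size=(n, n))
A0 = M @ M.T + n * np.eye(n)            # SPD "background" operator
A1 = np.diag(np.linspace(1.0, 2.0, n))  # affine parametric part
f = np.ones(n)

# Offline (expensive, done once): snapshots at selected parameter
# values span the reduced-basis space W_N.
snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 1.0, 10.0)]
W, _ = np.linalg.qr(np.array(snapshots).T)

# Online (cheap, per query): Galerkin projection onto W_N gives a
# small N x N system; the residual is what a posteriori bounds build on.
mu = 3.0
A = A0 + mu * A1
u_rb = W @ np.linalg.solve(W.T @ A @ W, W.T @ f)
u_full = np.linalg.solve(A, f)          # reference, for comparison only
rel_err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
residual = np.linalg.norm(f - A @ u_rb)
```

The online solve involves only a 3 x 3 system here, which is the source of the "many-query" speed-up the abstract describes.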
APA, Harvard, Vancouver, ISO, and other styles
7

White, Staci A. "Quantifying Model Error in Bayesian Parameter Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433771825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Parker, William David. "Speeding Up and Quantifying Approximation Error in Continuum Quantum Monte Carlo Solid-State Calculations." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1284495775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vail, Michelle Louise. "Error estimates for spaces arising from approximation by translates of a basic function." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30519.

Full text
Abstract:
We look at aspects of error analysis for interpolation by translates of a basic function. In particular, we consider ideas of localisation and how they can be used to obtain improved error estimates. We shall consider certain seminorms and associated spaces of functions which arise in the study of such interpolation methods. These seminorms are naturally given in an indirect form, that is in terms of the Fourier Transform of the function rather than the function itself. Thus, they do not lend themselves to localisation. However, work by Levesley and Light [17] rewrites these seminorms in a direct form and thus gives a natural way of defining a local seminorm. Using this form of local seminorm we construct associated local spaces. We develop bounded, linear extension operators for these spaces and demonstrate how such extension operators can be used in developing improved error estimates. Specifically, we obtain improved L2 estimates for these spaces in terms of the spacing of the interpolation points. Finally, we begin a discussion of how this approach to localisation compares with alternatives.
APA, Harvard, Vancouver, ISO, and other styles
10

Rankin, Richard Andrew Robert. "Fully computable a posteriori error bounds for nonconforming and discontinuous Galerkin finite element approximation." Thesis, University of Strathclyde, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501776.

Full text
Abstract:
We obtain fully computable, constant-free a posteriori error bounds on the broken energy seminorm of the error in nonconforming and discontinuous Galerkin finite element approximations of a linear second order elliptic problem on meshes comprised of triangular elements. We do this for nonconforming finite element approximations of uniform arbitrary order as well as for non-uniform order symmetric interior penalty Galerkin, non-symmetric interior penalty Galerkin and incomplete interior penalty Galerkin finite element approximations.
APA, Harvard, Vancouver, ISO, and other styles
11

Sen, Sugata 1977. "Reduced basis approximation and a posteriori error estimation for non-coercive elliptic problems : applications to acoustics." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39355.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2007.
Includes bibliographical references (p. 251-261).
Modern engineering problems often require accurate, reliable, and efficient evaluation of quantities of interest, evaluation of which demands the solution of a partial differential equation. We present in this thesis a general methodology for the prediction of outputs of interest of non-coercive elliptic partial differential equations. The essential ingredients are: (i) rapidly convergent reduced basis approximations - Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N selected points in parameter-time space; (ii) a posteriori error estimation - relaxations of the error-residual equation that provide rigorous and sharp bounds for the error in specific outputs of interest; and (iii) offline-online computational procedures - in the offline stage the reduced basis approximation is generated; in the online stage, given a new parameter value, we calculate the reduced basis output and associated error bound. The operation count for the online stage depends only on N (typically small) and the parametric complexity of the problem; the method is thus ideally suited for repeated, rapid, reliable evaluation of input-output relationships in the many-query or real-time contexts. We consider the crucial ingredients for the treatment of acoustics problems: simultaneous treatment of non-coercive (and near-resonant), non-Hermitian elliptic operators, complex-valued fields, often unbounded domains, and quadratic outputs of interest. We introduce the successive constraint approach to approximate lower bounds to the inf-sup stability constant, a key ingredient of our rigorous a posteriori output error estimator. We develop a novel expanded formulation that enables treatment of quadratic outputs as linear compliant outputs. We also build on existing ideas in domain truncation to develop a radiation boundary condition to truncate unbounded domains. We integrate the different theoretical contributions and apply our methods as proof of concept to some representative applications in acoustic filter design and characterization. In the online stage, we achieve O(10) computational economies of cost while demonstrating both the rapid convergence of the reduced basis approximation, and the sharpness of our error estimators ([approx.] O(20)). The obtained computational economies are expected to be significantly greater for problems of larger size. We thus emphasize the feasibility of our methods in the many-query contexts of optimization, characterization, and control.
by Sugata Sen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
12

Hernandez, Moreno Andres Felipe. "A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43690.

Full text
Abstract:
This thesis describes the implementation of metamodeling approaches as a solution to approximate multivariate, stochastic and dynamic simulations. In statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built based on simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. Therefore, it is possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale in creating such an approximate dynamic representation is that the empirical models / metamodels are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error. The successful implementation of metamodeling approaches, as approximations of complex dynamic simulations, requires understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into future predictions of the model. The propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, a Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it can illustrate relevant issues to be considered if other metamodeling approaches were used for this purpose.
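A minimal sketch of this iterative metamodeling loop (ours, under simplifying assumptions: a scalar state and a noise-free toy simulator): a Gaussian-process regressor is trained on the one-step map of a "simulation" and then rolled forward by iterating its own predictions, which is exactly where prediction errors propagate.

```python
import numpy as np

def gp_fit(X, y, length=1.0, noise=1e-6):
    """Gaussian-process regression with a squared-exponential kernel (1-D)."""
    kernel = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)
    alpha = np.linalg.solve(kernel(X, X) + noise * np.eye(len(X)), y)
    return lambda Xs: kernel(np.atleast_1d(Xs), X) @ alpha

g = lambda x: 0.9 * x + 0.1 * np.sin(x)   # toy "expensive" one-step simulator
X_train = np.linspace(-2.0, 2.0, 25)
step = gp_fit(X_train, g(X_train))        # metamodel of the one-step map

# Roll both forward; the metamodel feeds on its own (slightly wrong)
# predictions, so its error accumulates over the iteration.
x_true = x_meta = 1.5
for _ in range(20):
    x_true = g(x_true)
    x_meta = float(step(x_meta)[0])
```

Comparing `x_meta` with `x_true` after many steps gives a direct measure of how one-step prediction error propagates through the rollout.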
APA, Harvard, Vancouver, ISO, and other styles
13

Ahlkrona, Josefin. "Computational Ice Sheet Dynamics : Error control and efficiency." Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-283442.

Full text
Abstract:
Ice sheets, such as the Greenland Ice Sheet or Antarctic Ice Sheet, have a fundamental impact on landscape formation, the global climate system, and on sea level rise. The slow, creeping flow of ice can be represented by a non-linear version of the Stokes equations, which treat ice as a non-Newtonian, viscous fluid. Large spatial domains combined with long time spans and complexities such as a non-linear rheology make ice sheet simulations computationally challenging. The topic of this thesis is the efficiency and error control of large simulations, both in the sense of mathematical modelling and numerical algorithms. In the first part of the thesis, approximative models based on perturbation expansions are studied. Due to a thick boundary layer near the ice surface, some classical assumptions are inaccurate and the higher order model called the Second Order Shallow Ice Approximation (SOSIA) yields large errors. In the second part of the thesis, the Ice Sheet Coupled Approximation Level (ISCAL) method is developed and implemented into the finite element ice sheet model Elmer/Ice. The ISCAL method combines the Shallow Ice Approximation (SIA) and Shelfy Stream Approximation (SSA) with the full Stokes model, such that the Stokes equations are only solved in areas where both the SIA and SSA are inaccurate. Where and when the SIA and SSA are applicable is decided automatically and dynamically based on estimates of the modeling error. The ISCAL method provides a significant speed-up compared to the Stokes model. The third contribution of this thesis is the introduction of Radial Basis Function (RBF) methods in glaciology. Advantages of RBF methods in comparison to finite element methods or finite difference methods are demonstrated.
eSSENCE
APA, Harvard, Vancouver, ISO, and other styles
14

Lux, Thomas Christian Hansen. "Interpolants, Error Bounds, and Mathematical Software for Modeling and Predicting Variability in Computer Systems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/100059.

Full text
Abstract:
Function approximation is an important problem. This work presents applications of interpolants to modeling random variables. Specifically, this work studies the prediction of distributions of random variables applied to computer system throughput variability. Existing approximation methods including multivariate adaptive regression splines, support vector regressors, multilayer perceptrons, Shepard variants, and the Delaunay mesh are investigated in the context of computer variability modeling. New methods of approximation using Box splines, Voronoi cells, and the Delaunay mesh for interpolating distributions of data with moderately high dimension are presented and compared with existing approaches. Novel theoretical error bounds are constructed for piecewise linear interpolants over functions with a Lipschitz continuous gradient. Finally, a mathematical software package that constructs monotone quintic spline interpolants for distribution approximation from data samples is proposed.
Doctor of Philosophy
It is common for scientists to collect data on something they are studying. Often scientists want to create a (predictive) model of that phenomenon based on the data, but the choice of how to model the data is a difficult one to answer. This work proposes methods for modeling data that operate under very few assumptions that are broadly applicable across science. Finally, a software package is proposed that would allow scientists to better understand the true distribution of their data given relatively few observations.
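The distribution-modeling theme can be illustrated with a hedged sketch (a piecewise-linear empirical CDF of our own; the thesis itself proposes monotone quintic splines and other interpolants):

```python
import numpy as np

# Approximate the distribution of a random quantity from samples via a
# piecewise-linear interpolant of the empirical CDF.
rng = np.random.default_rng(0)
samples = np.sort(rng.normal(size=500))       # observed data
probs = (np.arange(1, 501) - 0.5) / 500       # plotting positions

def cdf(x):
    """Piecewise-linear CDF estimate, clamped to [0, 1] outside the data."""
    return np.interp(x, samples, probs, left=0.0, right=1.0)

# Quantiles come from inverting the same interpolant.
median = np.interp(0.5, probs, samples)
```

A smoother (e.g. monotone spline) interpolant would replace `np.interp` while preserving monotonicity, which is the property the thesis's software is built around.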
APA, Harvard, Vancouver, ISO, and other styles
15

Van Langenhove, Jan Willem. "Adaptive control of deterministic and stochastic approximation errors in simulations of compressible flow." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066357/document.

Full text
Abstract:
The simulation of complex nonlinear engineering systems such as compressible fluid flows may be targeted to make more efficient and accurate the approximation of a specific (scalar) quantity of interest of the system. Putting aside modeling error and parametric uncertainty, this may be achieved by combining goal-oriented error estimates and adaptive anisotropic spatial mesh refinements. To this end, an elegant and efficient framework is that of (Riemannian) metric-based adaptation, where a goal-based a priori error estimation is used as an indicator for adaptivity. This thesis proposes a novel extension of this approach to the case of the aforementioned system approximations bearing a stochastic component. In this case, an optimisation problem leading to the best control of the distinct sources of errors is formulated in the continuous framework of the Riemannian metric space. Algorithmic developments are also presented in order to quantify and adaptively adjust the error components in the deterministic and stochastic approximation spaces. The capability of the proposed method is tested on various problems including a supersonic scramjet inlet subject to geometrical and operational parametric uncertainties. It is demonstrated to accurately capture discontinuous features of stochastic compressible flows impacting pressure-related quantities of interest, while balancing computational budget and refinements in both spaces.
APA, Harvard, Vancouver, ISO, and other styles
16

Richardson, Omar. "Mathematical analysis and approximation of a multiscale elliptic-parabolic system." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-68686.

Full text
Abstract:
We study a two-scale coupled system consisting of a macroscopic elliptic equation and a microscopic parabolic equation. This system models the interplay between a gas and liquid close to equilibrium within a porous medium with distributed microstructures. We use formal homogenization arguments to derive the target system. We start by proving well-posedness and inverse estimates for the two-scale system. We follow up by proposing a Galerkin scheme which is continuous in time and discrete in space, for which we obtain well-posedness, a priori error estimates and convergence rates. Finally, we propose a numerical error reduction strategy by refining the grid based on residual error estimators.
APA, Harvard, Vancouver, ISO, and other styles
17

Schweitzer, Marcel [Verfasser]. "Restarting and error estimation in polynomial and extended Krylov subspace methods for the approximation of matrix functions / Marcel Schweitzer." Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1093601442/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Zaim, Yassine. "Approximation par éléments finis conformes et non conformes enrichis." Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3001/document.

Full text
Abstract:
The enrichment of standard finite elements is a powerful tool to improve the quality of approximation. The main idea of this approach is to incorporate some additional functions into the set of basis functions, intended to improve the accuracy of the approximate solution. Their choice is crucial and is based on the knowledge of some a priori information, such as the characteristics of the solution, the geometry of the problem to be solved, etc. The efficiency of such an approach for finding numerical solutions of partial differential equations using a fixed mesh, without recourse to refinement, has been proved in numerous applications in the literature. However, the key to its success lies mainly in the choice of the basis functions, and more particularly of the enrichment functions. An important question then arises: how should they be chosen so that they generate a well-defined finite element? In this thesis, we present a general approach that enables an enrichment of the finite element approximation. This was the subject of our first contribution, which was devoted to the enrichment of the classical Q_1 element, as a first step. As a second step, in our second contribution, we developed a more general framework for enriching any finite element, whether P_k, Q_k or other, conforming or nonconforming. As an illustration of how to use this framework to build new enriched finite elements, we introduced extensions of some well-known nonconforming finite elements, notably the Han element, the Rannacher-Turek element and the Wilson element, which are now part of the main finite element codes. To establish these extensions, we introduced a new family of multivariate versions of the classical trapezoidal, midpoint and Simpson rules. These, in addition to their numerical tests under MATLAB, version R2016a, were the subject of our third contribution. They may be viewed as extensions of the well-known one-dimensional trapezoidal, midpoint and Simpson rules to higher dimensions. We pay particular attention to the explicit expressions of the best possible constants appearing in the error estimates for these cubature formulas. Finally, in the fourth contribution we apply our approach to numerically solving the linear elasticity problem on a rectangular mesh. We carry out the numerical analysis of the approximation error as well as of the consistency error, and show how the latter can be established to any order, generalizing some work already done in the field. In addition to our theoretical results, we also present numerical tests, carried out using the GetFEM++ library, version 5.0. The aim of this contribution is not only to confirm our theoretical predictions, but also to show how the newly developed framework allows us to expand the range of choices of enrichment functions. Furthermore, we show how this wide range of choices can help to improve some approximation properties and to obtain optimal solutions for the particular problem of elasticity.
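The multivariate quadrature theme can be illustrated with a plain tensor-product construction (our example; the thesis derives its own multidimensional generalizations with explicit best constants): the composite 1-D Simpson rule extended to a rectangle.

```python
import numpy as np

def simpson_weights(a, b, n):
    """Nodes and weights of the composite 1-D Simpson rule (n even)."""
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2], w[2:-1:2] = 4.0, 2.0   # 1, 4, 2, 4, ..., 2, 4, 1 pattern
    return x, w * (b - a) / (3 * n)

def simpson_2d(f, a, b, c, d, n=8):
    """Tensor-product Simpson rule for f over the rectangle [a,b] x [c,d]."""
    x, wx = simpson_weights(a, b, n)
    y, wy = simpson_weights(c, d, n)
    return wx @ f(x[:, None], y[None, :]) @ wy

# Exact for polynomials of degree <= 3 in each variable:
val = simpson_2d(lambda x, y: x**2 * y, 0.0, 1.0, 0.0, 1.0)   # 1/6
```

Because the rule is a tensor product, the 1-D degree-3 exactness carries over variable by variable, which is what the check above exercises.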
APA, Harvard, Vancouver, ISO, and other styles
19

Mierswa, Alina [Verfasser], and Klaus [Gutachter] Deckelnick. "Error estimates for a finite difference approximation of mean curvature flow for surfaces of torus type / Alina Mierswa ; Gutachter: Klaus Deckelnick." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1222670747/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Maftei, Radu. "Analyse stochastique pour la simulation de particules lagrangiennes : application aux collisions de particules colloïdes." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4130/document.

Full text
Abstract:
This thesis broadly concerns colloidal particle simulation, which plays an important role in understanding two-phase flows. More specifically, we track the particles inside a turbulent flow and model their dynamics as a stochastic process, and their interactions as perfectly elastic collisions, where the influence of the flow is modelled by a drift on the velocity component. By coupling the particles pairwise and considering their relative position and velocity, the perfectly elastic collision becomes a specular reflection condition. We put forward a time discretisation scheme for the resulting Lagrangian system with specular boundary conditions and prove that the weak error decreases at most linearly in the time discretisation step. The proof is based on regularity results for the Feynman-Kac PDE and requires some regularity on the drift. We numerically investigate a series of conjectures, among them that the weak error still decreases linearly for drifts that do not satisfy the conditions of the theorem. We test the weak error convergence rate for a Richardson-Romberg extrapolation. We finally deal with Lagrangian/Brownian approximations by considering a Lagrangian system where the velocity component behaves as a fast process. We control the weak error between the position component of the Lagrangian system and an appropriately chosen uniformly elliptic diffusion process, and subsequently prove a similar control by introducing a specular reflecting boundary on the Lagrangian system and an appropriate reflection on the elliptic diffusion
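The specular-reflection discretisation and Richardson-Romberg extrapolation described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's actual scheme: the friction-type drift, boundary at x = 0, and all parameter values are assumptions chosen for the example.

```python
import math
import random

def simulate(drift, T=1.0, n_steps=100, x0=0.5, v0=0.0, seed=0):
    """Euler scheme for a position-velocity (Lagrangian) pair
    dX = V dt, dV = b(X, V) dt + dW, with specular reflection
    (flip the position overshoot and the velocity) at x = 0."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, v = x0, v0
    for _ in range(n_steps):
        x += v * dt
        v += drift(x, v) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x < 0.0:              # specular reflection condition
            x, v = -x, -v
    return x

def weak_estimate(drift, phi, n_paths=2000, **kw):
    """Monte Carlo estimate of the weak quantity E[phi(X_T)]."""
    return sum(phi(simulate(drift, seed=s, **kw)) for s in range(n_paths)) / n_paths

# Richardson-Romberg idea: if the weak error is O(dt), the combination
# 2*A(dt/2) - A(dt) cancels the leading error term.
drift = lambda x, v: -v            # simple friction drift (an assumption)
phi = lambda x: x * x
coarse = weak_estimate(drift, phi, n_steps=50)
fine = weak_estimate(drift, phi, n_steps=100)
extrapolated = 2.0 * fine - coarse
```

The extrapolated value combines two step sizes exactly as in the Romberg experiments mentioned above, without requiring knowledge of the leading error constant.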
APA, Harvard, Vancouver, ISO, and other styles
22

Tempone, Olariaga Raul. "Numerical Complexity Analysis of Weak Approximation of Stochastic Differential Equations." Doctoral thesis, KTH, Numerisk analys och datalogi, NADA, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3413.

Full text
Abstract:
The thesis consists of four papers on numerical complexity analysis of weak approximation of ordinary and partial stochastic differential equations, including illustrative numerical examples. Here by numerical complexity we mean the computational work needed by a numerical method to solve a problem with a given accuracy. This notion offers a way to understand the efficiency of different numerical methods. The first paper develops new expansions of the weak computational error for Itô stochastic differential equations using Malliavin calculus. These expansions have a computable leading order term in a posteriori form, and are based on stochastic flows and discrete dual backward problems. Besides this, these expansions lead to efficient and accurate computation of error estimates and give the basis for adaptive algorithms with either deterministic or stochastic time steps. The second paper proves convergence rates of adaptive algorithms for Itô stochastic differential equations. Two algorithms based either on stochastic or deterministic time steps are studied. The analysis of their numerical complexity combines the error expansions from the first paper and an extension of the convergence results for adaptive algorithms approximating deterministic ordinary differential equations. Both adaptive algorithms are proven to stop with an optimal number of time steps up to a problem independent factor defined in the algorithm. The third paper extends the techniques to the framework of Itô stochastic differential equations in infinite dimensional spaces, arising in the Heath-Jarrow-Morton term structure model for financial applications in bond markets. Error expansions are derived to identify different error contributions arising from time and maturity discretization, as well as the classical statistical error due to finite sampling. The last paper studies the approximation of linear elliptic stochastic partial differential equations, describing and analyzing two numerical methods.
The first method generates iid Monte Carlo approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The second method is based on a finite dimensional Karhunen-Loève approximation of the stochastic coefficients, turning the original stochastic problem into a high dimensional deterministic parametric elliptic problem. Then, a deterministic Galerkin finite element method, of either h or p version, approximates the stochastic partial differential equation. The paper concludes by comparing the numerical complexity of the Monte Carlo method with the parametric finite element method, suggesting intuitive conditions for an optimal selection of these methods. 2000 Mathematics Subject Classification: Primary 65C05, 60H10, 60H35, 65C30, 65C20; Secondary 91B28, 91B70.
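The weak-error notion that runs through these papers can be illustrated with a small Monte Carlo experiment. The sketch below is a generic Euler-Maruyama computation for geometric Brownian motion, where E[X_T] is known in closed form, so the weak error (time-discretisation bias plus statistical error) is directly measurable; it is not one of the adaptive algorithms of the thesis, and all parameter values are illustrative.

```python
import math
import random

def euler_mean(mu, sigma, x0, T, n_steps, n_paths, seed=0):
    """Monte Carlo / Euler-Maruyama estimate of E[X_T] for the SDE
    dX = mu X dt + sigma X dW (geometric Brownian motion)."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += mu * x * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += x
    return total / n_paths

# For GBM the weak quantity is known exactly: E[X_T] = x0 * exp(mu * T).
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
exact = x0 * math.exp(mu * T)
approx = euler_mean(mu, sigma, x0, T, n_steps=64, n_paths=10000)
weak_error = abs(approx - exact)
```

Halving the number of time steps while tracking this error is exactly the kind of accuracy-versus-work trade-off that the thesis's complexity analysis quantifies.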
QC 20100825
APA, Harvard, Vancouver, ISO, and other styles
23

Chan, Ka Yan. "Applying the "split-ADC" architecture to a 16 bit, 1 MS/s differential successive approximation analog-to-digital converter." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-043008-164352/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Horvath, Matthew Steven. "Extension of Polar Format Scene Size Limits to Squinted Geometries." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1334013246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sandmark, David. "Navigation Strategies for Improved Positioning of Autonomous Vehicles." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159830.

Full text
Abstract:
This report proposes three algorithms using model predictive control (MPC) in order to improve the positioning accuracy of an unmanned vehicle. The developed algorithms succeed in reducing the uncertainty in position by allowing the vehicle to deviate from a planned path, and can also handle the presence of occluding objects. To achieve this improvement, a compromise is made between following a predefined trajectory and maintaining good positioning accuracy. Due to the recent development of threats to systems using global navigation satellite systems to localise themselves, there is an increased need for methods of localisation that can function without relying on receiving signals from distant satellites. One example of such a system is a vehicle using a range-bearing sensor in combination with a map to localise itself. However, a system relying only on these measurements to estimate its position during a mission may get lost or gain an unacceptable level of uncertainty in its position estimates. Therefore, this thesis proposes a selection of algorithms that have been developed with the purpose of improving the positioning accuracy of such an autonomous vehicle without changing the available measurement equipment. These algorithms are: (i) a nonlinear MPC solving the full optimisation problem; (ii) a linear MPC using a linear approximation of the positioning uncertainty to reduce the computational complexity; and (iii) a nonlinear MPC (henceforth called the approximate MPC) using a linear approximation of an underlying component of the positioning uncertainty, in order to reduce computational complexity while still having good performance. The algorithms were evaluated in two different types of simulated scenarios in MATLAB. In these simulations, the nonlinear, linear and approximate MPC algorithms reduced the root mean squared positioning error by 20-25 %, 14-18 %, and 23-27 % respectively, compared to a reference path.
The approximate MPC was found to have the best performance of the three algorithms in the examined scenarios, while the linear MPC may be used if the approximate MPC is too computationally costly. The nonlinear MPC solving the full problem is a reasonable choice only when computing power is not limited, or when the approximation used in the approximate MPC is too inaccurate for the application.
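A linear MPC of the kind compared here can be sketched compactly. The block below is a generic unconstrained receding-horizon controller for a double-integrator vehicle model: the state cost, the horizon, and the system matrices are all assumptions made for illustration, not the thesis's vehicle or uncertainty model.

```python
import numpy as np

# Receding-horizon (linear MPC) regulation of a double integrator
# x_{k+1} = A x_k + B u_k, minimising over a horizon N the cost
# sum_k ||x_k - x_ref||^2 + rho * u_k^2.  Unconstrained, so the optimal
# input sequence is a linear least-squares solution in closed form.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # position/velocity model, dt = 0.1 s
B = np.array([[0.005],
              [0.1]])

def mpc_step(x, x_ref, N=20, rho=1e-3):
    """Return the first input of the optimal open-loop sequence."""
    # Stacked prediction over the horizon: X = F x + G U.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B)[:, 0]
    r = np.tile(x_ref, N)
    H = G.T @ G + rho * np.eye(N)
    U = np.linalg.solve(H, G.T @ (r - F @ x))
    return U[0]           # receding horizon: apply only the first input

# Closed loop: drive the position from 0 to 1 with zero final velocity.
x = np.zeros(2)
x_ref = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_step(x, x_ref)
    x = A @ x + B[:, 0] * u
```

The thesis's algorithms add a positioning-uncertainty term to this kind of tracking cost; the trade-off between path deviation and accuracy then appears directly in the objective.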
APA, Harvard, Vancouver, ISO, and other styles
26

Boiger, Wolfgang Josef. "Stabilised finite element approximation for degenerate convex minimisation problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16790.

Full text
Abstract:
Infimalfolgen nichtkonvexer Variationsprobleme haben aufgrund feiner Oszillationen häufig keinen starken Grenzwert in Sobolevräumen. Diese Oszillationen haben eine physikalische Bedeutung; Finite-Element-Approximationen können sie jedoch im Allgemeinen nicht auflösen. Relaxationsmethoden ersetzen die nichtkonvexe Energie durch ihre (semi)konvexe Hülle. Das entstehende makroskopische Modell ist degeneriert: es ist nicht strikt konvex und hat eventuell mehrere Minimalstellen. Die fehlende Kontrolle der primalen Variablen führt zu Schwierigkeiten bei der a priori und a posteriori Fehlerschätzung, wie der Zuverlässigkeits- Effizienz-Lücke und fehlender starker Konvergenz. Zur Überwindung dieser Schwierigkeiten erweitern Stabilisierungstechniken die relaxierte Energie um einen diskreten, positiv definiten Term. Bartels et al. (IFB, 2004) wenden Stabilisierung auf zweidimensionale Probleme an und beweisen dabei starke Konvergenz der Gradienten. Dieses Ergebnis ist auf glatte Lösungen und quasi-uniforme Netze beschränkt, was adaptive Netzverfeinerungen ausschließt. Die vorliegende Arbeit behandelt einen modifizierten Stabilisierungsterm und beweist auf unstrukturierten Netzen sowohl Konvergenz der Spannungstensoren, als auch starke Konvergenz der Gradienten für glatte Lösungen. Ferner wird der sogenannte Fluss-Fehlerschätzer hergeleitet und dessen Zuverlässigkeit und Effizienz gezeigt. Für Interface-Probleme mit stückweise glatter Lösung wird eine Verfeinerung des Fehlerschätzers entwickelt, die den Fehler der primalen Variablen und ihres Gradienten beschränkt und so starke Konvergenz der Gradienten sichert. Der verfeinerte Fehlerschätzer konvergiert schneller als der Fluss- Fehlerschätzer, und verringert so die Zuverlässigkeits-Effizienz-Lücke. Numerische Experimente mit fünf Benchmark-Tests der Mikrostruktursimulation und Topologieoptimierung ergänzen und bestätigen die theoretischen Ergebnisse.
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and no strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibit adaptive mesh refinements. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
27

Paditz, Ludwig. "On the error-bound in the nonuniform version of Esseen's inequality in the Lp-metric." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-112888.

Full text
Abstract:
The aim of this paper is to investigate the known nonuniform version of Esseen's inequality in the Lp-metric, to get a numerical bound for the appearing constant L. For a long time the results given by several authors suggested the impossibility of a nonuniform estimate in the most interesting case δ=1, because the effect L=L(δ)=O(1/(1-δ)), δ->1-0, was observed, where 2+δ, 0<δ<1, is the order of the assumed moments of the considered independent random variables X_k, k=1,2,...,n. Again making use of the method of conjugated distributions, we improve the well-known technique to show, in the most interesting case δ=1, the finiteness of the absolute constant L and to prove L=L(1)<=127.74*7.31^(1/p), p>1. In the case 0<δ<1 we only give the analytical structure of L but omit numerical calculations. Finally an example on normal approximation of sums of l_2-valued random elements demonstrates the application of the nonuniform mean central limit bounds obtained here
Das Anliegen dieses Artikels besteht in der Untersuchung einer bekannten Variante der Esseen'schen Ungleichung in Form einer ungleichmäßigen Fehlerabschätzung in der Lp-Metrik mit dem Ziel, eine numerische Abschätzung für die auftretende absolute Konstante L zu erhalten. Längere Zeit erweckten die Ergebnisse, die von verschiedenen Autoren angegeben wurden, den Eindruck, dass die ungleichmäßige Fehlerabschätzung im interessantesten Fall δ=1 nicht möglich wäre, weil auf Grund der geführten Beweisschritte der Einfluss von δ auf L in der Form L=L(δ)=O(1/(1-δ)), δ->1-0, beobachtet wurde, wobei 2+δ, 0<δ<1, die Ordnung der vorausgesetzten Momente der betrachteten unabhängigen Zufallsgrößen X_k, k=1,2,...,n, angibt. Erneut wird die Methode der konjugierten Verteilungen angewendet und die gut bekannte Beweistechnik verbessert, um im interessantesten Fall δ=1 die Endlichkeit der absoluten Konstanten L nachzuweisen und um zu zeigen, dass L=L(1)=<127,74*7,31^(1/p), p>1, gilt. Im Fall 0<δ<1 wird nur die analytische Struktur von L herausgearbeitet, jedoch ohne numerische Berechnungen. Schließlich wird mit einem Beispiel zur Normalapproximation von Summen l_2-wertigen Zufallselementen die Anwendung der gewichteten Fehlerabschätzung im globalen zentralen Grenzwertsatz demonstriert
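The flavour of such nonuniform bounds can be checked numerically. The sketch below compares an empirical weighted CLT error, sup_x (1+|x|)^3 |F_n(x) - Φ(x)|, with the bound C·β_n using the sup-norm nonuniform Berry-Esseen constant 31.935 as a scale, for sums of uniform variables; this is an illustrative benchmark only, not the Lp-metric constant L(1) <= 127.74*7.31^(1/p) established in the paper, and the choice of distribution, sample size, and grid are assumptions.

```python
import bisect
import math
import random

def weighted_sup_error(n, n_samples=20000, seed=0):
    """Empirical nonuniform CLT error sup_x (1+|x|)^3 |F_n(x) - Phi(x)|
    for standardised sums of n uniform(-1, 1) random variables."""
    rng = random.Random(seed)
    sigma = math.sqrt(n / 3.0)               # Var of uniform(-1,1) is 1/3
    sums = sorted(sum(rng.uniform(-1.0, 1.0) for _ in range(n)) / sigma
                  for _ in range(n_samples))
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    worst = 0.0
    for i in range(-100, 101):               # grid on [-5, 5]
        x = 0.05 * i
        emp = bisect.bisect_right(sums, x) / n_samples
        worst = max(worst, (1.0 + abs(x)) ** 3 * abs(emp - phi(x)))
    return worst

# Lyapunov ratio for uniform(-1,1): E|X|^3 = 1/4, hence
# beta_n = n * E|X|^3 / sigma_n^3 = (3**1.5 / 4) / sqrt(n).
n = 4
beta_n = (3.0 ** 1.5 / 4.0) / math.sqrt(n)
bound = 31.935 * beta_n      # nonuniform constant used as an illustrative scale
err = weighted_sup_error(n)
```

Even for very small n the empirical weighted error sits far below the bound, which is the behaviour the nonuniform estimates formalise.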
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Yufeng. "Multicategory psi-learning and support vector machine." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1085424065.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains x, 71 p.; also includes graphics. Includes bibliographical references (p. 69-71). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
29

Herrmann, Felix J. "Recent developments in curvelet-based seismic processing." European Association of Geoscientists & Engineers, 2007. http://hdl.handle.net/2429/581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

El-Otmany, Hammou. "Approximation par la méthode NXFEM des problèmes d'interface et d'interphase dans la mécanique des fluides." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3024/document.

Full text
Abstract:
La modélisation et la simulation numérique des interfaces sont au coeur de nombreuses applications en mécanique des fluides et des solides, telles que la biologie cellulaire (déformation des globules rouges dans le sang), l'ingénierie pétrolière et la sismique (modélisation de réservoirs, présence de failles, propagation des ondes), l'aérospatiale (problème de rupture, de chocs) ou encore le génie civil. Cette thèse porte sur l'approximation des problèmes d'interface et d'interphase en mécanique des fluides par la méthode NXFEM, qui permet de prendre en compte de façon précise une discontinuité non alignée avec le maillage. Nous nous sommes d'abord intéressés au développement de la méthode NXFEM pour des éléments finis non-conformes pour prendre en compte une interface séparant deux milieux. Nous avons proposé deux approches pour les équations de Darcy et de Stokes. La première consiste à modifier les fonctions de base de Crouzeix-Raviart sur les cellules coupées et la deuxième consiste à rajouter des termes de stabilisation sur les arêtes coupées. Les résultats théoriques obtenus ont été ensuite validés numériquement. Par la suite, nous avons étudié la modélisation asymptotique et l'approximation numérique des problèmes d'interphase, faisant apparaître une couche mince. Nous avons considéré d'abord les équations de Darcy en présence d'une faille et, en passant à la limite dans la formulation faible, nous avons obtenu un modèle asymptotique où la faille est décrite par une interface, avec des conditions de transmission adéquates. Pour ce problème limite, nous avons développé une méthode numérique basée sur NXFEM avec éléments finis conformes, consistante et stable. Des tests numériques, incluant une comparaison avec la littérature, ont été réalisés. La modélisation asymptotique a été étendue aux équations de Stokes, pour lesquelles nous avons justifié le modèle limite obtenu. 
Enfin, nous nous sommes intéressés à la modélisation de la membrane d'un globule rouge par un fluide non-newtonien viscoélastique de Giesekus, afin d'appréhender la rhéologie du sang. Pour un problème d'interphase composé de deux fluides newtoniens (l'extérieur et l'intérieur du globule) et d'un liquide de Giesekus (la membrane du globule), nous avons dérivé formellement le problème limite, dans lequel les équations dans la membrane sont remplacées par des conditions de transmission sur une interface
Numerical modelling and simulation of interfaces in fluid and solid mechanics are at the heart of many applications, such as cell biology (deformation of red blood cells), petroleum engineering and seismic (reservoir modelling, presence of faults, wave propagation), aerospace and civil engineering etc. This thesis focuses on the approximation of interface and interphase problems in fluid mechanics by means of the NXFEM method, which takes into account discontinuities on non-aligned meshes.We have first focused on the development of NXFEM for nonconforming finite elements in order to take into account the interface between two media. Two approaches have been proposed, for Darcy and Stokes equations. The first approach consists in modifying the basis functions of Crouzeix-Raviart on the cut cells and the second approach consists in adding some stabilization terms on each part of a cut edge. We have studied them from a theoretical and a numerical point of view. Then we have studied the asymptotic modelling and numerical approximation of interphase problems, involving a thin layer between two media. We have first considered the Darcy equations in the presence of a highly permeable fracture. By passing to the limit in the weak formulation, we have obtained an asymptotic model where the 2D fracture is described by an interface with adequate transmission conditions. A numerical method based on NXFEM with conforming finite elements has been developed for this limit problem, and its consistency and uniform stability have been proved. Numerical tests including a comparison with the literature have been presented. The asymptotic modelling has been finally extended to Stokes equations, for which we have justified the limit problem. Finally, we have considered the mechanical behaviour of red blood cells in order to better understand blood rheology. 
The last part of the thesis is devoted to the modelling of the membrane of a red blood cell by a non-Newtonian viscoelastic liquid, described by the Giesekus model. For an interphase problem composed of two Newtonian fluids (the exterior and the interior of the red blood cell) and a Giesekus liquid (the membrane), we formally derived the limit problem where the equations in the membrane are replaced by transmission conditions on an interface
APA, Harvard, Vancouver, ISO, and other styles
31

Abudawia, Amel. "Analyse numérique d'une approximation élément fini pour un modèle d'intrusion saline dans les aquifères côtiers." Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0390/document.

Full text
Abstract:
Dans ce travail, nous étudions un schéma élément fini que nous appliquons à un modèle décrivant l'intrusion saline dans les aquifères côtiers confinés et libres. Le modèle est basé sur l'approche hydraulique qui consiste à moyenner verticalement le problème initial 3D, cette approximation repose sur une hypothèse d'écoulement quasi-hydrostatique qui, loin des épontes et des sources, est vérifiée. Pour modéliser les interfaces entre l'eau douce et l'eau salée (respectivement entre la zone saturée et la zone sèche), nous combinons l'approche 'interface nette' à l'approche avec 'interface diffuse' ; cette approche est déduite de la théorie de champ de phase, introduite par Allen-Cahn, pour décrire les phénomènes de transition entre deux zones. Compte tenu de ces approximations, le problème consiste en un système fortement couplé d'edps quasi-linéaires de type parabolique dans le cas des aquifères libres décrivant l'évolution des profondeurs des 2 surfaces libres et de type elliptique-prabolique dans le cas des aquifères confinés, les inconnues étant alors la profondeur de l'interface eau salée/eau douce et la charge hydraulique de l'eau douce. Dans la première partie de la thèse, nous donnons dans le cas d'un aquifère confiné, des résultats d'estimation d'erreur d'un schéma semi-implicite en temps combiné à une discrétisation en espace de type élément fini Pk Lagrange. Ce résultat utilise entre autre un résultat de régularité du gradient de la solution exacte dans l'espace Lr(ΩT), r > 2, ce qui permet de traiter la non-linéarité et d'établir l'estimation d'erreur sous des hypothèses de régularité raisonnables de la solution exacte. Dans la seconde partie de la thèse, nous généralisons l'étude précédente au cas de l'aquifère libre. 
La difficulté principale est liée à la complexité du système d'edps paraboliques mais à nouveau, grâce au résultat de régularité Lr(ΩT), r > 2 établi pour les gradients des surfaces libres, nous montrons que le schéma est d'ordre 1 en temps et k en espace pour des solutions suffisamment régulières. Nous concluons ce travail par des simulations numériques dans différents contextes (impact de la porosité et de la conductivité hydraulique sur l'évolution de l'interface, pompage et injection d'eau douce, effet des marées) validant ainsi le modèle et le schéma. Puis nous comparons les résultats à ceux obtenus avec un schéma volume fini construit à partir d'un maillage structuré
In this work, we study a finite element scheme applied to a model describing saltwater intrusion into confined and unconfined coastal aquifers. The model is based on the hydraulic approach of vertically averaging the original 3D problem; this approximation relies on a quasi-hydrostatic flow hypothesis which is valid far from the walls and sources. To model the interfaces between fresh water and salt water (respectively between the saturated zone and the dry zone), we combine the 'sharp interface' approach with the 'diffuse interface' approach; the latter is derived from the phase field theory introduced by Allen-Cahn to describe transition phenomena between two zones. Given these approximations, the problem consists of a strongly coupled system of quasi-linear PDEs, of parabolic type in the case of unconfined aquifers, describing the evolution of the depths of the two free surfaces, and of elliptic-parabolic type in the case of confined aquifers, the unknowns then being the depth of the salt water/fresh water interface and the hydraulic head of the fresh water. In the first part of the thesis, we give, in the case of a confined aquifer, error estimates for a semi-implicit scheme in time combined with a Pk Lagrange finite element discretization in space. This result uses, among other things, a regularity result for the gradient of the exact solution in the space Lr(ΩT), r > 2, which makes it possible to handle the non-linearity and to establish the error estimate under reasonable regularity assumptions on the exact solution. In the second part of the thesis, we generalize the previous study to the case of the unconfined aquifer. The main difficulty is related to the complexity of the system of parabolic PDEs but again, thanks to the Lr(ΩT), r > 2, regularity result established for the gradients of the free surfaces, we show that the scheme is of order 1 in time and k in space for sufficiently regular solutions.
We conclude this work with numerical simulations in different contexts (impact of porosity and hydraulic conductivity on the evolution of the interface, pumping and injection of fresh water, tidal effects), thus validating the model and the scheme. We then compare the results with those obtained using a finite volume scheme built on a structured mesh
APA, Harvard, Vancouver, ISO, and other styles
32

Fu, Wenjun. "From the conventional MIMO to massive MIMO systems : performance analysis and energy efficiency optimization." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25672.

Full text
Abstract:
The main topic of this thesis is multiple-input multiple-output (MIMO) wireless communications, a technology that has attracted great interest over the last twenty years. Conventional MIMO systems using up to eight antennas play a vital role in the urban cellular network, where the deployment of multiple antennas significantly enhances throughput without taking extra spectrum or power resources. Massive MIMO systems "scale up" the benefits offered by conventional MIMO systems. Using sixty-four or more antennas at the base station not only improves spectral efficiency significantly, but also provides additional link robustness. It is considered a key technology in fifth-generation mobile networks, and the design of new algorithms for these two systems is the basis of the research in this thesis. Firstly, at the receiver side of conventional MIMO systems, a general framework of bit error rate (BER) approximation for detection algorithms is proposed, which aims to support an adaptive modulation scheme. The main idea is to utilize a simplified BER approximation scheme, based on the union bound of the maximum-likelihood detector (MLD), whereby the BER performance of the detector for varying channel qualities can be efficiently predicted. The K-best detector is utilized in the thesis because of its quasi-MLD performance and its parallel computational structure. The simulation results clearly show that the adaptive K-best algorithm, by applying the simplified approximation method, has much reduced computational complexity while still maintaining a promising BER performance. Secondly, in terms of uplink channel estimation for massive MIMO systems with time-division-duplex operation, the performance of the Grassmannian line packing (GLP) based uplink pilot codebook design is investigated.
It aims to eliminate the pilot contamination effect in order to increase the downlink achievable rate. In the case of a limited channel coherence interval, the uplink codebook design can be treated as a line packing problem in a Grassmannian manifold. Closed-form analytical expressions of the downlink achievable rate for both single-cell and multi-cell systems are proposed, intended for performance analysis and optimization. The numerical results validate the proposed analytical expressions and the rate gains obtained by using the GLP-based uplink codebook design. Finally, the study is extended to the energy efficiency (EE) of massive MIMO systems, as reducing carbon emissions from information and communication technology is a long-term goal for researchers. An effective framework for maximizing the EE of massive MIMO systems is proposed in this thesis. The optimization starts from the maximization of the minimum user rate, which aims to improve the quality of service and provides a feasible constraint for the EE maximization problem. Secondly, the EE problem is non-concave and cannot be solved directly, so a combination of fractional programming and a successive concave approximation based algorithm is proposed to find a good suboptimal solution. It is shown that the proposed optimization algorithm provides a significant EE improvement compared to a baseline case.
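The union-bound idea behind the BER prediction can be illustrated with the standard nearest-neighbour approximation for square M-QAM over an AWGN channel. This is the generic textbook Gray-mapping formula, not the thesis's detector-specific framework, and the target BER and SNR values below are assumptions for the example.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_mqam_union(snr_linear, M):
    """Nearest-neighbour union-bound approximation of the BER of square
    M-QAM over AWGN with Gray mapping (standard textbook form)."""
    k = math.log2(M)
    return (4.0 / k) * (1.0 - 1.0 / math.sqrt(M)) * q_func(
        math.sqrt(3.0 * snr_linear / (M - 1.0)))

def adapt_modulation(snr_db, target_ber=1e-3, choices=(4, 16, 64, 256)):
    """Pick the highest-order constellation whose predicted BER meets the
    target -- the role a BER approximation plays in adaptive modulation."""
    snr = 10.0 ** (snr_db / 10.0)
    best = None
    for M in choices:
        if ber_mqam_union(snr, M) <= target_ber:
            best = M
    return best
```

Evaluating the closed-form approximation instead of simulating the detector is what makes the per-channel-quality modulation decision cheap, which is the point of the simplified BER framework described above.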
APA, Harvard, Vancouver, ISO, and other styles
33

Sanchez, Mohamed Riad. "Application des techniques de bases réduites à la simulation des écoulements en milieux poreux." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC079.

Full text
Abstract:
En géosciences, les applications associées au calage de modèles d'écoulement nécessitent d'appeler plusieurs fois un simulateur au cours d'un processus d'optimisation. Or, une seule simulation peut durer plusieurs heures et l'exécution d'une boucle complète de calage peut s'étendre sur plusieurs jours. Diminuer le temps de calcul global à l'aide des techniques de bases réduites (RB) constitue l’objectif de la thèse.Il s'agit plus précisément dans ce travail d'appliquer ces techniques aux écoulements incompressibles diphasiques eau-huile en milieu poreux. Ce modèle, bien que simplifié par rapport aux modèles utilisés dans l'industrie pétrolière, constitue déjà un défi du point de vue de la pertinence de la méthode RB du fait du couplage entre les différentes équations, de la forte hétérogénéité des données physiques, ainsi que du choix des schémas numériques de référence.Nous présentons d'abord le modèle considéré, le schéma volumes finis (VF) retenu pour l'approximation numérique, ainsi que différentes paramétrisations pertinentes en simulation de réservoir. Ensuite, après un bref rappel de la méthode RB, nous mettons en oeuvre la réduction du problème en pression à un instant donné en suivant deux démarches distinctes. La première consiste à interpréter la discrétisation VF comme une approximation de Ritz-Galerkine, ce qui permet de se ramener au cadre standard de la méthode RB mais n'est possible que sous certaines hypothèses restrictives. La seconde démarche lève ces restrictions en construisant le modèle réduit directement au niveau discret.Enfin, nous testons deux stratégies de réduction pour la collection en temps de pressions paramétrées par les variations de la saturation. La première considère le temps juste comme un paramètre supplémentaire. La seconde tente de mieux capturer la causalité temporelle en introduisant les trajectoires en temps paramétrées
In geosciences, applications involving model calibration require a simulator to be called several times within an optimization process. However, a single simulation can take several hours and a complete calibration loop can extend over several days. The objective of this thesis is to reduce the overall simulation time using reduced basis (RB) techniques. More specifically, this work is devoted to applying such techniques to incompressible two-phase water-oil flows in porous media. Despite its relative simplicity in comparison to other models used in the petroleum industry, this model is already a challenge from the standpoint of reduced order modeling. This is due to the coupling between its equations, the highly heterogeneous physical data, as well as the choice of reference numerical schemes. We first present the two-phase flow model, along with the finite volume (FV) scheme used for the discretization and relevant parameterizations in reservoir simulation. Then, after having recalled the RB method, we perform a reduction of the pressure equation at a fixed time step by two different approaches. In the first approach, we interpret the FV discretization as a Ritz-Galerkin approximation, which takes us back to the standard RB framework but which is possible only under severe assumptions. The second approach frees us of these restrictions by building the RB method directly at the discrete level. Finally, we deploy two strategies for reducing the collection in time of pressures parameterized by the variations of the saturation. The first one simply considers time as an additional parameter. The second one attempts to better capture temporal causality by introducing parameterized time-trajectories
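The offline/online structure of a reduced-basis method can be sketched with a proper-orthogonal-decomposition (POD) compression of parameterized snapshots. This is illustrative only: the thesis builds its RB space from finite volume pressure solutions, whereas the analytic exponential family, the parameter range, and the basis size below are assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Offline stage: collect snapshots of a parameterised field u(x; mu)
# and compress them with an SVD to obtain a small reduced basis.
x = np.linspace(0.0, 1.0, 200)
train_mu = np.linspace(0.5, 3.0, 20)
snapshots = np.column_stack([np.exp(-mu * x) for mu in train_mu])

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                       # reduced dimension (20 snapshots -> 5 modes)
basis = U[:, :r]

# Online stage: approximate an unseen parameter value by projection
# onto the reduced space (here by orthogonal projection of the truth;
# a real RB solver would instead solve a small r-by-r system).
u_new = np.exp(-1.7 * x)
u_rb = basis @ (basis.T @ u_new)
rel_err = np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)
```

The rapid singular-value decay is what makes the online stage cheap: each new parameter query touches only r degrees of freedom instead of the full discretization.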
APA, Harvard, Vancouver, ISO, and other styles
34

Lynch, Kevin. "A Limit Theorem in Cryptography." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1042.

Full text
Abstract:
Cryptography is the study of encrypting and decrypting messages, and of deciphering encrypted messages when the code is unknown. We consider Λπ(Δx, Δy), a count of how many ways a permutation satisfies a certain property. According to Hawkes and O'Connor, the distribution of Λπ(Δx, Δy) tends to a Poisson distribution with parameter ½ as m → ∞ for all Δx, Δy ∈ (Z/qZ)^m − 0. We give a proof of this theorem using the Stein-Chen method: as q^m approaches infinity, the distribution of Λπ(Δx, Δy) is approximately Poisson with parameter ½. Error bounds for this approximation are provided.
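For reference, the limiting Poisson(½) law in the theorem assigns probability e^(-1/2) (1/2)^k / k! to each count k; a few lines of Python make the limit distribution concrete (this only tabulates the limit law, not the permutation count itself):

```python
from math import exp, factorial

# The limiting Poisson(1/2) law from the theorem: P(K = k) = e^{-1/2} (1/2)^k / k!
lam = 0.5
pmf = [exp(-lam) * lam**k / factorial(k) for k in range(5)]
print([round(p, 4) for p in pmf])   # mass concentrates on k = 0 and k = 1
```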
APA, Harvard, Vancouver, ISO, and other styles
35

Hafiene, Yosra. "Continuum limits of evolution and variational problems on graphs." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC254/document.

Full text
Abstract:
The non-local p-Laplacian operator, the associated evolution equation and variational regularization, governed by a given kernel, have applications in various areas of science and engineering. In particular, they are modern tools for massive data processing (including signals, images, geometry), and machine learning tasks such as classification. In practice, however, these models are implemented in discrete form (in space and time, or in space for variational regularization) as a numerical approximation to a continuous problem, where the kernel is replaced by an adjacency matrix of a graph. Yet, few results on the consistency of these discretizations are available. In particular, it is largely open to determine when the solutions of either the evolution equation or the variational problem of graph-based tasks converge (in an appropriate sense), as the number of vertices increases, to a well-defined object in the continuum setting, and if so, at which rate. In this manuscript, we lay the foundations to address these questions. Combining tools from graph theory, convex analysis, nonlinear semigroup theory and evolution equations, we give a rigorous interpretation to the continuous limit of the discrete nonlocal p-Laplacian evolution and variational problems on graphs. More specifically, we consider a sequence of (deterministic) graphs converging to a so-called limit object known as the graphon. If the continuous p-Laplacian evolution and variational problems are properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems converge to the solution of the continuous problem governed by the graphon, as the number of graph vertices grows to infinity. Along the way, we provide consistency/error bounds. In turn, this allows us to establish convergence rates for different graph models. In particular, we highlight the role of the graphon geometry/regularity. For random graph sequences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in probability and exhibit the different regimes depending on p, the regularity of the graphon and the initial data.
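A toy discretization (ours, not the manuscript's) shows the objects involved: a graph built from a graphon-like kernel, the discrete nonlocal p-Laplacian, and an explicit Euler evolution of the kind whose continuum limit is analysed. The kernel K(x, y) = xy, the exponent p, and all step sizes are illustrative choices.

```python
import numpy as np

# Vertices are grid points x_i in (0, 1); edge weights come from the kernel
# K(x, y) = x * y (an illustrative graphon, not one from the manuscript).
p = 1.5
n = 50
x = (np.arange(n) + 0.5) / n
W = np.outer(x, x)                       # graph weights W_ij = K(x_i, x_j)
u = np.sin(2 * np.pi * x)                # initial data

# Explicit Euler for u_i' = (1/n) * sum_j W_ij |u_j - u_i|^{p-2} (u_j - u_i)
dt, n_steps = 1e-3, 200
for _ in range(n_steps):
    diff = u[None, :] - u[:, None]       # diff[i, j] = u_j - u_i
    flux = W * np.abs(diff) ** (p - 1) * np.sign(diff)
    u = u + dt * flux.mean(axis=1)

print(float(np.var(u)))                  # the nonlocal diffusion shrinks the variance
```

The consistency question studied in the thesis is, informally, how fast this vector `u` approaches the graphon-governed continuum solution as `n` grows.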
APA, Harvard, Vancouver, ISO, and other styles
36

Tavares, Dina dos Santos. "Fractional calculus of variations." Doctoral thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/22184.

Full text
Abstract:
Doctorate in Mathematics and Applications
The calculus of non-integer order, usually known as fractional calculus, is a generalization of integer-order integral and differential calculus. This thesis is devoted to the study of fractional operators with variable order and of specific variational problems involving variable order operators. We present a new numerical tool to solve differential equations involving Caputo derivatives of fractional variable order. Three Caputo-type fractional operators are considered, and for each one of them an approximation formula is obtained in terms of standard (integer-order) derivatives only. Estimates for the error of the approximations are also provided. Furthermore, we consider variational problems, subject or not to one or more constraints, where the functional depends on a combined Caputo derivative of variable fractional order. In particular, we establish Euler-Lagrange necessary optimality conditions. Since the terminal point of the cost integral, as well as the terminal state, are free, transversality conditions for the fractional problem are also obtained.
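To make the idea of approximating a Caputo derivative numerically concrete, here is a sketch of the classical L1 scheme for a constant order α in (0, 1). The thesis treats the harder variable-order case and derives integer-order approximation formulas with error estimates; this fixed-order scheme is only an illustration, checked against the known closed form for f(t) = t².

```python
import numpy as np
from math import gamma

# L1 scheme for the Caputo derivative of order alpha (0 < alpha < 1):
# approximate f' by piecewise constants inside the memory integral.
def caputo_l1(f_vals, dt, alpha):
    n = len(f_vals) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
    df = np.diff(f_vals)                            # f(t_{j+1}) - f(t_j)
    # b_k multiplies the increment ending at t_{n-k}, hence the reversal
    return float((b * df[::-1]).sum() / (gamma(2 - alpha) * dt ** alpha))

alpha, T, N = 0.5, 1.0, 2000
t = np.linspace(0.0, T, N + 1)
approx = caputo_l1(t ** 2, T / N, alpha)
exact = 2.0 / gamma(3 - alpha) * T ** (2 - alpha)   # Caputo derivative of t^2 at T
print(approx, exact)
```

The L1 scheme is known to converge at rate O(dt^(2-α)), which is consistent with the small discrepancy observed here.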
APA, Harvard, Vancouver, ISO, and other styles
37

Küther, Marc. "Error estimates for numerical approximations to scalar conservation laws." [S.l. : s.n.], 2001. http://www.freidok.uni-freiburg.de/volltexte/337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Dvořáček, Petr. "Evoluční návrh pro aproximaci obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234958.

Full text
Abstract:
In recent years, there has been a strong need for the design of integrated circuits showing low power consumption. It is possible to create intentionally approximate circuits which do not fully implement the specified logic behaviour, but exhibit improvements in terms of area, delay and power consumption. These circuits can be used in many error-resilient applications, especially in signal and image processing, computer graphics, computer vision and machine learning. This work describes an evolutionary approach to the approximate design of arithmetic circuits and other more complex systems, and presents a parallel calculation of the fitness function. The proposed method accelerated the evaluation of an 8-bit approximate multiplier 170 times in comparison with the common version. The evolved approximate circuits were used in different types of edge detectors.
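As an illustration of the error metrics used as fitness in approximate-circuit evolution, the following sketch (ours, not the thesis's evolved circuits) evaluates a simple truncation-based approximate 8-bit multiplier exhaustively over all 65,536 input pairs, in the same vectorized, exhaustive style of fitness evaluation that makes large speed-ups possible:

```python
import numpy as np

# Truncation-based approximate multiplier: zero the two least significant bits
# of each operand before multiplying (a hand-made stand-in for an evolved circuit).
a = np.arange(256, dtype=np.int64)
exact = np.multiply.outer(a, a)                   # all 65536 exact products
approx = np.multiply.outer(a & ~0x3, a & ~0x3)    # all 65536 approximate products
err = np.abs(exact - approx)

mean_abs_error = float(err.mean())    # a typical fitness ingredient
worst_case_error = int(err.max())     # another common error metric
print(mean_abs_error, worst_case_error)
```

In an evolutionary loop, candidate circuits would be scored by such metrics and selected under an error constraint.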
APA, Harvard, Vancouver, ISO, and other styles
39

Lei, Lei. "Markov Approximations: The Characterization of Undermodeling Errors." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1371.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ghazali, Saadia. "The global error in weak approximations of stochastic differential equations." Thesis, Imperial College London, 2007. http://hdl.handle.net/10044/1/1260.

Full text
Abstract:
In this thesis, the convergence analysis of a class of weak approximations of solutions of stochastic differential equations is presented. This class includes recent approximations such as Kusuoka's moment similar families method and the Lyons-Victoir cubature on Wiener space approach. It is shown that the rate of convergence depends intrinsically on the smoothness of the chosen test function. For smooth functions (the required degree of smoothness depends on the order of the approximation), an equidistant partition of the time interval on which the approximation is sought is optimal. For functions that are less smooth, for example Lipschitz functions, the rate of convergence decays and the optimal partition is no longer equidistant. An asymptotic rate of convergence is also established for the Lyons-Victoir method. The analysis rests upon Kusuoka-Stroock's results on the smoothness of the distribution of the solution of a stochastic differential equation. Finally the results are applied to the numerical solution of the filtering problem and the pricing of Asian options.
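The notion of a weak convergence rate can be illustrated on a case where the weak error is available in closed form. For geometric Brownian motion and the smooth test function f(x) = x, the Euler scheme satisfies E[f(X_N)] = x0 (1 + μh)^N exactly, so the first-order weak rate can be observed without Monte Carlo. This is our illustrative example, unrelated to the cubature methods analysed in the thesis.

```python
import numpy as np

# Euler-Maruyama for dX = mu*X dt + sigma*X dW, test function f(x) = x.
# E[f(X_N)] = x0 * (1 + mu*h)^N holds exactly (the Brownian increments are
# independent with mean zero), so the weak error needs no simulation.
mu, x0, T = 0.8, 1.0, 1.0
exact = x0 * np.exp(mu * T)        # E[X_T] for the true solution

errors = []
for N in (10, 20, 40, 80):
    h = T / N
    errors.append(abs(x0 * (1 + mu * h) ** N - exact))

rates = [float(np.log2(errors[i] / errors[i + 1])) for i in range(3)]
print(rates)                       # observed order tends to 1 (first-order weak)
```

For less smooth test functions, as the abstract explains, the observed order degrades and equidistant time grids are no longer optimal.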
APA, Harvard, Vancouver, ISO, and other styles
41

Joldes, Mioara Maria. "Approximations polynomiales rigoureuses et applications." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00657843.

Full text
Abstract:
When evaluating or manipulating a mathematical function f, it is common to replace it with a polynomial approximation p. This is done, for example, to implement elementary functions in machine arithmetic, for quadrature, or for the solution of ordinary differential equations (ODEs). Many numerical methods exist for all of these questions, and we address them in the setting of rigorous computing, in which guarantees are required on the accuracy of the results, for both the approximation error and the rounding error. A rigorous polynomial approximation (RPA) for a function f defined over an interval [a,b] is a pair (P, Delta) consisting of a polynomial P and an interval Delta, such that f(x)-P(x) belongs to Delta for all x in [a,b]. In this work, we analyse and introduce several processes for computing RPAs in the case of univariate functions. We analyse and refine an existing approach based on Taylor expansions, and then replace these with finer approximants such as minimax polynomials, truncated Chebyshev series, or Chebyshev interpolants. We also present several applications: one concerning the implementation of standard functions in a mathematical library (libm), one on computing truncated Chebyshev series expansions of solutions of linear ODEs with polynomial coefficients, and finally an automatic process for function evaluation with guaranteed accuracy on a reconfigurable chip.
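The (P, Delta) pairing at the heart of an RPA can be sketched numerically. A rigorous implementation would bound the remainder with interval arithmetic; here Delta is merely estimated by dense sampling, so this is an illustration of the concept, not a rigorous RPA. The function, degree, and interval are illustrative choices.

```python
import numpy as np

# Build a Chebyshev interpolant P of f on [a, b] and estimate a remainder
# bound delta such that |f(x) - P(x)| <= delta on [a, b].
f = np.exp
a, b, deg = -1.0, 1.0, 10

# Chebyshev nodes of the first kind, mapped to [a, b]
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
nodes = 0.5 * (b - a) * nodes + 0.5 * (a + b)
P = np.polynomial.chebyshev.Chebyshev.fit(nodes, f(nodes), deg, domain=[a, b])

xs = np.linspace(a, b, 10001)
delta = float(np.max(np.abs(f(xs) - P(xs))))   # estimated (not certified) bound
print(delta)
```

For exp on [-1, 1], the degree-10 Chebyshev interpolant is accurate to roughly 1e-10, which is why such interpolants are attractive building blocks for RPAs.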
APA, Harvard, Vancouver, ISO, and other styles
42

Weyerman, Whitney Samuel. "Approximations with Improving Error Bounds for Makespan Minimization in Batch Manufacturing." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2300.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Pettersson, Klas. "Error estimates for finite element approximations of effective elastic properties of periodic structures." Thesis, Uppsala University, Division of Scientific Computing, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-125632.

Full text
Abstract:

Techniques for a posteriori error estimation for finite element approximations of an elliptic partial differential equation are studied. This extends previous work on localized error control in finite element methods for linear elasticity. The methods are then applied to the problem of homogenization of periodic structures. In particular, error estimates for the effective elastic properties are obtained. The usefulness of these estimates is twofold. First, adaptive methods using mesh refinements based on the estimates can be constructed. Secondly, one of the estimates can give a reasonable measure of the magnitude of the error. Numerical examples of this are given.

APA, Harvard, Vancouver, ISO, and other styles
44

Carlsson, Jesper. "Pontryagin approximations for optimal design." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Seeger, Matthias. "Bayesian Gaussian process models : PAC-Bayesian generalisation error bounds and sparse approximations." Thesis, University of Edinburgh, 2003. http://hdl.handle.net/1842/321.

Full text
Abstract:
Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, and among these Bayesian inference for Gaussian process (GP) models has recently received significant attention. We feel that GP priors should be part of the standard toolbox for constructing models relevant to machine learning in the same way as parametric linear models are, and the results in this thesis help to remove some obstacles on the way towards this goal. In the first main chapter, we provide a distribution-free finite sample bound on the difference between generalisation and empirical (training) error for GP classification methods. While the general theorem (the PAC-Bayesian bound) is not new, we give a much simplified and somewhat generalised derivation and point out the underlying core technique (convex duality) explicitly. Furthermore, the application to GP models is novel (to our knowledge). A central feature of this bound is that its quality depends crucially on task knowledge being encoded faithfully in the model and prior distributions, so there is a mutual benefit between a sharp theoretical guarantee and empirically well-established statistical practices. Extensive simulations on real-world classification tasks indicate an impressive tightness of the bound, in spite of the fact that many previous bounds for related kernel machines fail to give non-trivial guarantees in this practically relevant regime. In the second main chapter, sparse approximations are developed to address the problem of the unfavourable scaling of most GP techniques with large training sets. Due to its high importance in practice, this problem has received a lot of attention recently. We demonstrate the tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning (or sequential design) and develop generic schemes for automatic model selection with many (hyper)parameters. 
We suggest two new generic schemes and evaluate some of their variants on large real-world classification and regression tasks. These schemes and their underlying principles (which are clearly stated and analysed) can be applied to obtain sparse approximations for a wide range of GP models far beyond the special cases we studied here.
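The scaling problem that sparse approximations address can be seen in a minimal subset-of-data sketch (ours; the thesis develops far more refined greedy, information-theoretic selection schemes): exact GP regression requires solving an n×n linear system, while a subset of m << n points already gives a close posterior mean at a fraction of the cost. All data and kernel parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 40
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(X) + 0.1 * rng.standard_normal(n)     # noisy observations of sin

def k(a, b):                                     # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

noise = 0.1 ** 2
Xs = np.linspace(-3, 3, 200)                     # test inputs

# Exact GP posterior mean: one O(n^3) solve
mean_full = k(Xs, X) @ np.linalg.solve(k(X, X) + noise * np.eye(n), y)

# Subset-of-data approximation: the same formula on m randomly kept points
idx = rng.choice(n, m, replace=False)
mean_sub = k(Xs, X[idx]) @ np.linalg.solve(k(X[idx], X[idx]) + noise * np.eye(m), y[idx])

print(float(np.max(np.abs(mean_full - mean_sub))))
```

Greedy information-theoretic selection, as studied in the thesis, chooses the m points far more carefully than the random subset used here.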
APA, Harvard, Vancouver, ISO, and other styles
46

Hansen, Peder. "Approximating the Binomial Distribution by the Normal Distribution – Error and Accuracy." Thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155336.

Full text
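A minimal sketch of the comparison named in the title: the maximal CDF error of the normal approximation to Binomial(n, p), using the continuity correction. The parameter values n = 50, p = 0.3 are our illustrative choices.

```python
from math import comb, erf, sqrt

# Exact Binomial(n, p) CDF by direct summation
def binom_cdf(k, n, p):
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k + 1))

# Standard normal CDF
def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

n, p = 50, 0.3
mu, sigma = n * p, sqrt(n * p * (1 - p))
max_err = max(
    abs(binom_cdf(k, n, p) - normal_cdf((k + 0.5 - mu) / sigma))
    for k in range(n + 1)
)
print(max_err)   # Berry-Esseen-type bounds predict decay like O(1/sqrt(n))
```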
APA, Harvard, Vancouver, ISO, and other styles
47

Resmerita, Diana. "Compression pour l'apprentissage en profondeur." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4043.

Full text
Abstract:
Autonomous cars are complex applications that need powerful hardware machines to be able to function properly. Tasks such as staying between the white lines, reading signs, or avoiding obstacles are solved by using convolutional neural networks (CNNs) to classify or detect objects. It is highly important that all the networks work in parallel in order to transmit all the necessary information and take a common decision. Nowadays, as the networks improve, they also become bigger and more computationally expensive. Deploying even one network becomes challenging. Compressing the networks can solve this issue. Therefore, the first objective of this thesis is to find deep compression methods in order to cope with the memory and computational power limitations present on embedded systems. The compression methods need to be adapted to a specific processor, Kalray's MPPA, for short-term implementations. Our contributions mainly focus on compressing the network post-training for storage purposes, which means compressing the parameters of the network without retraining or changing the original architecture and the type of computations. In the context of our work, we decided to focus on quantization. Our first contribution consists in comparing the performances of uniform quantization and non-uniform quantization, in order to identify which of the two has a better rate-distortion trade-off and could be quickly supported by the company. The company's interest is also directed towards finding new innovative methods for future MPPA generations. Therefore, our second contribution focuses on comparing standard floating-point representations (FP32, FP16) to recently proposed alternative arithmetical representations such as BFloat16, msfp8, Posit8. The results of this analysis were in favor of Posit8. This motivated the company Kalray to conceive a decompressor from FP16 to Posit8. Finally, since many compression methods already exist, we decided to move to an adjacent topic which aims to quantify theoretically the effects of quantization error on the network's accuracy. This is the second objective of the thesis. We notice that well-known distortion measures are not adapted to predict accuracy degradation in the case of inference for compressed neural networks. We define a new distortion measure with a closed form which looks like a signal-to-noise ratio. A set of experiments was carried out using simulated data and small networks, which show the potential of this distortion measure.
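The post-training uniform quantization setting and an SNR-like distortion measure can be sketched in a few lines. This is our illustration only: the weights are synthetic Gaussian stand-ins and the signal-to-quantization-noise ratio (SQNR) below is a generic measure, not the distortion measure defined in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(10000).astype(np.float32)   # stand-in for trained weights

def uniform_quantize(x, n_bits):
    # symmetric uniform quantizer onto integers in [-(2^{b-1}-1), 2^{b-1}-1]
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
    return np.round(x / scale) * scale

sqnrs = []
for n_bits in (8, 6, 4):
    wq = uniform_quantize(w, n_bits)
    sqnrs.append(float(10 * np.log10(np.sum(w**2) / np.sum((w - wq) ** 2))))
print([round(s, 1) for s in sqnrs])   # roughly 6 dB lost per removed bit
```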
APA, Harvard, Vancouver, ISO, and other styles
48

Kelly, Jodie. "Topics in the statistical analysis of positive and survival data." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Mohy, Awad. "Algorithms for the matrix exponential and its Fréchet derivative." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/algorithms-for-the-matrix-exponential-and-its-frechet-derivative(4de9bdbd-6d79-4e43-814a-197668694b8e).html.

Full text
Abstract:
New algorithms for the matrix exponential and its Fréchet derivative are presented. First, we derive a new scaling and squaring algorithm (denoted expm_new) for computing e^A, where A is any square matrix, that mitigates the overscaling problem. The algorithm is built on the algorithm of Higham [SIAM J. Matrix Anal. Appl., 26(4): 1179-1193, 2005] but improves on it by two key features. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of powering them. The second is to base the backward error analysis that underlies the algorithm on members of the sequence {||A^k||^{1/k}} instead of ||A||. The terms ||A^k||^{1/k} are estimated without computing powers of A by using a matrix 1-norm estimator. Second, a new algorithm is developed for computing the action of the matrix exponential on a matrix, e^{tA}B, where A is an n × n matrix and B is n × n₀ with n₀ << n. The algorithm works for any A, its computational cost is dominated by the formation of products of A with n × n₀ matrices, and the only input parameter is a backward error tolerance. The algorithm can return a single matrix e^{tA}B or a sequence e^{t_k A}B on an equally spaced grid of points t_k. It uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential. It determines the amount of scaling and the Taylor degree using the strategy of expm_new. Preprocessing steps are used to reduce the cost of the algorithm. An important application of the algorithm is to exponential integrators for ordinary differential equations. It is shown that the sums of the form $\sum_{k=0}^p\varphi_k(A)u_k$ that arise in exponential integrators, where the $\varphi_k$ are related to the exponential function, can be expressed in terms of a single exponential of a matrix of dimension $n+p$ built by augmenting $A$ with additional rows and columns.
Third, a general framework for simultaneously computing a matrix function, $f(A)$, and its Fréchet derivative in the direction $E$, $L_f(A,E)$, is established for a wide range of matrix functions. In particular, we extend the algorithm of Higham and $\mathrm{expm_{new}}$ to two algorithms that intertwine the evaluation of both $e^A$ and $L(A,E)$ at a cost about three times that for computing $e^A$ alone. These two extended algorithms are then adapted to algorithms that simultaneously calculate $e^A$ together with an estimate of its condition number. Finally, we show that $L_f(A,E)$, where $f$ is a real-valued matrix function and $A$ and $E$ are real matrices, can be approximated by $\Im f(A+ihE)/h$ for some suitably small $h$. This approximation generalizes the complex step approximation known in the scalar case, and is proved to be of second order in $h$ for analytic functions $f$ and also for the matrix sign function. It is shown that it does not suffer the inherent cancellation that limits the accuracy of finite difference approximations in floating point arithmetic. However, cancellation does nevertheless vitiate the approximation when the underlying method for evaluating $f$ employs complex arithmetic. The complex step approximation is attractive when specialized methods for evaluating the Fréchet derivative are not available.
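The complex step approximation L_f(A, E) ≈ Im f(A + ihE)/h from the last paragraph is easy to try for f = exp, using SciPy's `expm_frechet` (which implements a scaling-and-squaring Fréchet algorithm) as the reference. As the abstract notes, accuracy is ultimately limited by the complex arithmetic inside `expm`; the matrices and step size below are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) / 2    # keep ||A|| moderate
E = rng.standard_normal((5, 5))        # direction of differentiation

h = 1e-8
L_cs = np.imag(expm(A + 1j * h * E)) / h          # complex step approximation
L_ref = expm_frechet(A, E, compute_expm=False)    # reference Fréchet derivative

rel = float(np.linalg.norm(L_cs - L_ref) / np.linalg.norm(L_ref))
print(rel)
```

Unlike a real finite difference Im-free quotient, the complex step has no subtractive cancellation in exact arithmetic, which is why such a crude h still gives several correct digits here.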
APA, Harvard, Vancouver, ISO, and other styles
50

Mirbagheri, Arash. "Linear MMSE receivers for interference suppression & multipath diversity combining in long-code DS-CDMA systems." Thesis, Waterloo, Ont. : University of Waterloo, 2003. http://etd.uwaterloo.ca/etd/amirbagh2003.pdf.

Full text
Abstract:
Thesis (Ph.D)--University of Waterloo, 2003.
"A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Doctor of Philosophy in Electrical and Computer Engineering". Includes bibliographical references. Also available in microfiche format.
APA, Harvard, Vancouver, ISO, and other styles