Academic literature on the topic 'Rounding error analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Rounding error analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Rounding error analysis"

1

Connolly, Michael P., and Nicholas J. Higham. "Probabilistic Rounding Error Analysis of Householder QR Factorization." SIAM Journal on Matrix Analysis and Applications 44, no. 3 (July 28, 2023): 1146–63. http://dx.doi.org/10.1137/22m1514817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kolomys, Olena, and Liliya Luts. "Algorithm for Calculating Primary Spectral Density Estimates Using FFT and Analysis of its Accuracy." Cybernetics and Computer Technologies, no. 2 (September 30, 2022): 52–57. http://dx.doi.org/10.34229/2707-451x.22.2.5.

Full text
Abstract:
Introduction. Fast algorithms for solving problems of spectral and correlation analysis of random processes began to appear mainly after 1965, when the fast Fourier transform (FFT) algorithm entered computational practice. With its appearance, a number of computational algorithms for the accelerated solution of some problems of digital signal processing were developed, and speed-efficient algorithms were built for calculating such estimates of probabilistic characteristics of control objects as estimates of convolutions, correlation functions, and spectral densities of stationary and some types of non-stationary random processes. The purpose of the article is to study a speed-efficient algorithm for calculating the primary estimate of the spectral density of stationary ergodic random processes with zero mean. Most often, the direct Fourier transform method using the FFT algorithm is used to calculate it. The article continues the research and substantiation of this method in the direction of obtaining better estimates of rounding errors. Results. The research and substantiation of the method has been continued in the direction of obtaining higher-quality estimates of rounding errors that take into account the errors in the specification of the input information. The main characteristics of the given algorithm for calculating the primary estimate of the spectral density are accuracy and computational complexity. The main attention is paid to obtaining estimates of the errors accompanying the process of calculating the primary estimate of the spectral density. Estimates are obtained of the rounding error and the unavoidable error of the given algorithm, which arise during its implementation under the classical rounding rule in floating-point mode with τ digits in the mantissa, taking the input error into account. Conclusions.
The obtained results make it possible to diagnose the quality of the solution to the problem of calculating the primary estimate of the spectral density of stationary ergodic random processes with zero mean by the described method, and to choose algorithm parameters that will ensure the required accuracy of the approximate solution of the problem. Keywords: primary estimation of spectral density, fast Fourier transform, discrete Fourier transform, rounding error, input error.
APA, Harvard, Vancouver, ISO, and other styles
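The primary spectral density estimate discussed in the abstract above is, in essence, a periodogram computed from the discrete Fourier transform of the sample. As an illustrative sketch only (a direct O(N²) DFT for clarity, whereas the paper's algorithm relies on a true FFT for speed):

```python
import cmath
import math

def periodogram(x):
    """Primary spectral density estimate I(k) = |DFT(x)[k]|^2 / N.
    Direct O(N^2) DFT for clarity; a real implementation would use an FFT."""
    n = len(x)
    spec = []
    for k in range(n):
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

# A zero-mean cosine at bin 5 concentrates its power at k = 5 and k = N - 5.
N = 64
x = [math.cos(2 * math.pi * 5 * t / N) for t in range(N)]
I = periodogram(x)
peak = max(range(N), key=lambda k: I[k])
```

Every complex exponential and accumulation here is itself rounded, which is exactly why the article's rounding-error bounds for the FFT-based version matter.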
3

Connolly, Michael P., Nicholas J. Higham, and Theo Mary. "Stochastic Rounding and Its Probabilistic Backward Error Analysis." SIAM Journal on Scientific Computing 43, no. 1 (January 2021): A566–A585. http://dx.doi.org/10.1137/20m1334796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cuyt, Annie, and Paul Van der Cruyssen. "Rounding error analysis for forward continued fraction algorithms." Computers & Mathematics with Applications 11, no. 6 (June 1985): 541–64. http://dx.doi.org/10.1016/0898-1221(85)90037-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Higham, Nicholas J., and Theo Mary. "A New Approach to Probabilistic Rounding Error Analysis." SIAM Journal on Scientific Computing 41, no. 5 (January 2019): A2815–A2835. http://dx.doi.org/10.1137/18m1226312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zou, Qinmeng. "Probabilistic Rounding Error Analysis of Modified Gram–Schmidt." SIAM Journal on Matrix Analysis and Applications 45, no. 2 (May 21, 2024): 1076–88. http://dx.doi.org/10.1137/23m1585817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mezzarobba, Marc. "Rounding error analysis of linear recurrences using generating series." ETNA - Electronic Transactions on Numerical Analysis 58 (2023): 196–227. http://dx.doi.org/10.1553/etna_vol58s196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kiełbasiński, Andrzej. "A note on rounding-error analysis of Cholesky factorization." Linear Algebra and its Applications 88-89 (April 1987): 487–94. http://dx.doi.org/10.1016/0024-3795(87)90121-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baghdad Science Journal. "A Note on the Perturbation of arithmetic expressions." Baghdad Science Journal 13, no. 1 (March 6, 2016): 190–97. http://dx.doi.org/10.21123/bsj.13.1.190-197.

Full text
Abstract:
In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under: approximations in "built-in" functions; rounding errors in arithmetic floating-point operations; and perturbations of data. The error analysis is based on the linearization method. The fundamental tools of the forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori and a posteriori error analysis.
APA, Harvard, Vancouver, ISO, and other styles
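The condition numbers this abstract refers to can be illustrated on the simplest algorithm, summation. A hedged sketch (the condition number below is the standard one for summation, κ = Σ|xᵢ| / |Σxᵢ|, not the paper's notation):

```python
import math

def sum_with_condition(xs):
    """Left-to-right summation plus its condition number
    kappa = sum|x_i| / |sum x_i|; the a priori forward error bound
    scales roughly as (n - 1) * u * kappa, with u the unit roundoff."""
    s = 0.0
    for x in xs:
        s += x
    kappa = sum(abs(x) for x in xs) / abs(s) if s != 0.0 else math.inf
    return s, kappa

# Well conditioned: all terms positive, kappa = 1, full accuracy.
_, k1 = sum_with_condition([1.0, 2.0, 3.0])

# Ill conditioned: massive cancellation; the exact sum is 1.0, but the
# computed sum is 0.0 and kappa blows up, signalling total digit loss.
s2, k2 = sum_with_condition([1e16, 1.0, -1e16])
```

A large κ warns, a priori, that even correctly rounded operations cannot deliver an accurate sum.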
10

Rudikov, D. A., and A. S. Ilinykh. "Error analysis of the cutting machine step adjustable drive." Journal of Physics: Conference Series 2131, no. 2 (December 1, 2021): 022046. http://dx.doi.org/10.1088/1742-6596/2131/2/022046.

Full text
Abstract:
The implementation precision of a number of the adjustment bodies of a metal-cutting machine is among the most important indicators of its quality, strictly standardized by industry standards and by the technical conditions for manufacturing and acceptance. Moreover, the limit on the error is set depending on the series denominator used. An essential feature of the precision of the implemented series is that it is determined not by errors in the manufacturing of parts, but by the shortcomings of the kinematic calculation method used. The established modes largely determine the efficiency of processing on metal-cutting machines: if an underestimated mode is set, productivity is reduced accordingly, while overestimating the mode reduces durability and causes losses from increased regrinding and tool changes. The aim is the creation of a complex of mathematical models for the design kinematic calculation of the main movement drive of metal-cutting machines, which allows reducing the error in the implementation of a series of preferred numbers and increasing machining precision. The article provides a mathematical complex for analyzing the components of the total error, which allows determining and evaluating the total error of the drive of a metal-cutting machine with high precision by analyzing its constituent values: errors of the permanent part, errors of the multiplier part, rounding errors of standard numbers, and errors in the electric motor and belt transmission. The presented complex helps to identify the role of the rounding error of preferred numbers in the formation of the total relative error and makes it possible to reduce it, which allows solving the problem of increasing the precision of the step adjustable drive.
When the mathematical complex is used, a fundamentally new opportunity appears for creating a scientific base: developing algorithms and programs for the engineering calculation of tables that facilitate the selection of tooth numbers for multiple groups and structures while guaranteeing high precision of the implemented series.
APA, Harvard, Vancouver, ISO, and other styles
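The "rounding errors of standard numbers" mentioned above arise from replacing an exact geometric series of speeds with the nearest preferred numbers. A small sketch, under the assumption of the standard R20 Renard series (ratio 10^(1/20) ≈ 1.12), which is not necessarily the series used in the article:

```python
# Standard R20 preferred numbers for one decade (Renard series values).
R20 = [1.00, 1.12, 1.25, 1.40, 1.60, 1.80, 2.00, 2.24, 2.50, 2.80,
       3.15, 3.55, 4.00, 4.50, 5.00, 5.60, 6.30, 7.10, 8.00, 9.00]

# Exact geometric series with ratio 10**(1/20), and the relative rounding
# error committed by using the preferred numbers instead of the exact terms.
exact = [10.0 ** (k / 20.0) for k in range(20)]
rel_errors = [abs(r - e) / e for r, e in zip(R20, exact)]
worst = max(rel_errors)
```

The worst-case relative error of this series is a little over 1%, which is precisely the kind of systematic contribution to a drive's total error that the article's complex accounts for.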

Dissertations / Theses on the topic "Rounding error analysis"

1

Plet, Antoine. "Contribution to error analysis of algorithms in floating-point arithmetic." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN038/document.

Full text
Abstract:
Floating-point arithmetic is an approximation of real arithmetic in which each operation may introduce a rounding error. The IEEE 754 standard requires elementary operations to be as accurate as possible. However, through a computation, rounding errors may accumulate and lead to totally wrong results. It happens for example with an expression as simple as ab + cd, for which the naive algorithm sometimes returns a result with a relative error larger than 1. Thus, it is important to analyze algorithms in floating-point arithmetic to understand as thoroughly as possible the generated error. In this thesis, we are interested in the analysis of small building blocks of numerical computing, for which we look for sharp bounds on the relative error. For this kind of building block, in base β and precision p, we often successfully prove error bounds of the form α·u + o(u²), where α > 0 and u = (1/2)·β^(1-p) is the unit roundoff. To characterize the sharpness of such a bound, one can provide numerical examples for the standard precisions that are close to the bound, or examples that are parametrized by the precision and generate an error of the same form α·u + o(u²), thus proving the asymptotic optimality of the bound. However, the paper-and-pencil checking of such parametrized examples is a tedious and error-prone task. We worked on the formalization of a symbolic floating-point arithmetic, over numbers that are parametrized by the precision, and implemented it as a library in the Maple computer algebra system. We also worked on the error analysis of the basic operations for complex numbers in floating-point arithmetic. We proved a very sharp error bound for an algorithm for the inversion of a complex number in floating-point arithmetic. This result suggests that the computation of a complex division according to x/y = (1/y)·x may be preferred to the more classical formula x/y = (x·ȳ)/|y|².
Indeed, for any complex multiplication algorithm, the error bound is smaller with the algorithms described by the "inverse and multiply" approach. This is joint work with my PhD advisors, in collaboration with Claude-Pierre Jeannerod (CR Inria in AriC, at LIP).
APA, Harvard, Vancouver, ISO, and other styles
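The two complex division schemes compared in the thesis can be tried directly in double precision. A sketch with the componentwise formulas only (the thesis's sharp error bounds are not reproduced here); the exact quotient is obtained via rational arithmetic:

```python
from fractions import Fraction

def div_classical(a, b, c, d):
    """x/y via x*conj(y)/|y|^2, with x = a + bi and y = c + di."""
    s = c * c + d * d
    return ((a * c + b * d) / s, (b * c - a * d) / s)

def div_invert_multiply(a, b, c, d):
    """x/y via (1/y)*x: first invert y as conj(y)/|y|^2, then multiply by x."""
    s = c * c + d * d
    ic, id_ = c / s, -d / s
    return (a * ic - b * id_, a * id_ + b * ic)

def rel_error(approx, a, b, c, d):
    """Normwise relative error against the exact rational quotient."""
    fa, fb, fc, fd = map(Fraction, (a, b, c, d))
    s = fc * fc + fd * fd
    ex_re, ex_im = (fa * fc + fb * fd) / s, (fb * fc - fa * fd) / s
    err2 = (Fraction(approx[0]) - ex_re) ** 2 + (Fraction(approx[1]) - ex_im) ** 2
    return float(err2 / (ex_re ** 2 + ex_im ** 2)) ** 0.5

args = (0.1, 0.7, -0.3, 0.9)  # arbitrary sample inputs, x = 0.1+0.7i, y = -0.3+0.9i
e_classical = rel_error(div_classical(*args), *args)
e_invmul = rel_error(div_invert_multiply(*args), *args)
```

Both variants are accurate to a few ulps on benign inputs; the thesis's contribution is proving which one carries the smaller worst-case bound.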
2

Chesneaux, Jean-Marie. "Etude theorique et implementation en ada de la methode cestac." Paris 6, 1988. http://www.theses.fr/1988PA066143.

Full text
Abstract:
The use of computers in scientific computing has always raised the problem of the accuracy of the results obtained, owing to the propagation of rounding errors. The probabilistic CESTAC method, based on a random perturbation of the last bit of the mantissa of intermediate results, has always given excellent results. In the first part, a model of CESTAC makes it possible to justify its validity, both for the use of Student's test to estimate the accuracy of the computations and for the study of the bias between the mean of the distribution and the exact mathematical result. In the second part, a complete environment in the Ada language for the practical use of the method is presented, based on the notions of overloading and of packages with generic parameters.
APA, Harvard, Vancouver, ISO, and other styles
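A toy rendition of the CESTAC idea (random perturbation of the last mantissa bit of each intermediate result, then a scatter-based estimate of the number of significant digits) might look as follows. This is a hedged sketch, not the thesis's Ada implementation, and it omits the Student-test correction factor:

```python
import math
import random

U = 2.0 ** -52  # scale of a last-bit perturbation in double precision

def perturb(x):
    """Randomly nudge the last mantissa bit, as CESTAC does after each operation."""
    return x * (1.0 + U * random.choice((-1.0, 1.0)))

def perturbed_sum(xs):
    s = 0.0
    for x in xs:
        s = perturb(s + x)
    return s

def perturbed_horner(coeffs, x):
    s = 0.0
    for c in coeffs:
        s = perturb(perturb(s * x) + c)
    return s

def significant_digits(samples):
    """Common significant decimal digits estimated from run-to-run scatter."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((v - mean) ** 2 for v in samples) / (n - 1)
    if mean == 0.0:
        return 0.0
    if var == 0.0:
        return 15.0  # all runs agree to full double precision
    return math.log10(abs(mean) / math.sqrt(var / n))

random.seed(0)
stable = significant_digits([perturbed_sum([1.0] * 1000) for _ in range(5)])
# (x - 1)**7 expanded, evaluated near x = 1: catastrophic cancellation.
coeffs = [1.0, -7.0, 21.0, -35.0, 35.0, -21.0, 7.0, -1.0]
shaky = significant_digits([perturbed_horner(coeffs, 1.0001) for _ in range(5)])
```

A stable computation reports many common digits across the perturbed runs, while the cancellation-dominated polynomial evaluation reports very few.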
3

Gerest, Matthieu. "Using Block Low-Rank compression in mixed precision for sparse direct linear solvers." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS447.

Full text
Abstract:
In order to solve large sparse linear systems, one may want to use a direct method, numerically robust but rather costly both in memory consumption and in computation time. The multifrontal method belongs to this class of algorithms, and one of its high-performance parallel implementations is the solver MUMPS. One of the functionalities of MUMPS is the use of Block Low-Rank (BLR) matrix compression, which improves its performance. In this thesis, we present several new techniques aiming at further improving the performance of dense and sparse direct solvers on top of BLR compression. In particular, we propose a new variant of BLR compression in which several floating-point formats are used simultaneously (mixed precision). Our approach is based on an error analysis, and it first allows us to reduce the estimated cost of an LU factorization of a dense matrix without having a significant impact on the error. Second, we adapt these algorithms to the multifrontal method. A first implementation uses our mixed-precision BLR compression as a storage format only, thus reducing the memory footprint of MUMPS. A second implementation combines these memory gains with time reductions in the triangular solution phase by switching computations to low precision. However, we notice performance issues related to BLR in this phase when the system has many right-hand sides. We therefore propose new BLR variants of the triangular solution that improve data locality and reduce data movement, as highlighted by a communication-volume analysis. We implement our algorithms within a simplified prototype and within the solver MUMPS, and we obtain time gains in both cases.
APA, Harvard, Vancouver, ISO, and other styles
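The core idea of mixed-precision storage, keeping less significant data in a lower-precision format, can be imitated in pure Python by rounding significands to p bits. A sketch under simulated formats (this is not MUMPS's actual code path; p = 11 merely mimics fp16's significand, p = 24 fp32's):

```python
import math

def round_to_precision(x, p):
    """Round x to a p-bit significand (round to nearest), simulating storage
    in a lower-precision floating-point format."""
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** p
    return math.ldexp(round(m * scale) / scale, e)

# Relative storage error is bounded by 2**-p, so a block whose contribution
# to the overall result is small can afford fewer bits without hurting accuracy.
x = math.pi
err16 = abs(round_to_precision(x, 11) - x) / x
err32 = abs(round_to_precision(x, 24) - x) / x
```

Choosing p per block according to the block's norm, rather than globally, is the essence of the mixed-precision BLR trade-off the thesis analyzes.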
4

Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.

Full text
Abstract:
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, the techniques available for this task often provide overestimates of the round-off errors. We can cite the Ariane 5 rocket and the Patriot missile as well-known examples of disasters caused by computational errors. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures, and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improving, by re-parenthesizing, the numerical accuracy of the program results. We use static analysis techniques based on abstract interpretation to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented concerning classical numerical algorithms and algorithms for embedded systems.
APA, Harvard, Vancouver, ISO, and other styles
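The kind of rewriting such transformation tools automate can be seen on a textbook case, where an algebraically equivalent form avoids the damage done by rounding before a cancellation (a hand-picked illustration, not the thesis's tool output):

```python
# x*x and y*y are each rounded before the subtraction, so the naive form
# loses exactly the low-order bits that the cancellation then exposes.
x, y = 2.0 ** 27 + 1.0, 2.0 ** 27
naive = x * x - y * y            # x*x = 2**54 + 2**28 + 1 does not fit in 53 bits
rewritten = (x - y) * (x + y)    # algebraically equal, every step exact here
exact = 2.0 ** 28 + 1.0          # since x - y = 1 and x + y = 2**28 + 1
```

The rewritten form is exact while the naive form is off by one unit, which is the improvement an automatic re-parenthesizing transformation aims to find systematically.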

Books on the topic "Rounding error analysis"

1

Wilkinson, J. H. Rounding errors in algebraic processes. New York: Dover, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lemeshko, Boris. Nonparametric consent criteria. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/2058731.

Full text
Abstract:
The monograph discusses the application of nonparametric goodness-of-fit criteria (Kolmogorov, Kuiper, Cramér–von Mises–Smirnov, Watson, Anderson–Darling, Zhang) when testing simple and complex hypotheses. The appendix contains tables of percentage points and statistical distribution models necessary for the correct use of the criteria when testing simple and, most importantly, various complex hypotheses. In comparison with the first edition, more attention is paid to the application of the criteria under non-standard conditions, in particular to the analysis of large samples. It is shown that in applications the properties of the criteria can change significantly due to the presence of rounding errors, and this must be taken into account when forming statistical conclusions. Following the recommendations in data analysis will ensure the correctness of statistical conclusions and increase their validity. The book is intended for specialists who, in one way or another, face issues of statistical data analysis, processing of experimental results, and the use of statistical methods to analyze various aspects and trends of the surrounding reality. It will be useful for engineers, researchers, specialists of various profiles (physicians, biologists, sociologists, economists, etc.), university teachers, graduate students, and students.
APA, Harvard, Vancouver, ISO, and other styles
3

Lemeshko, Boris, Aleksandr Popov, and Vadim Seleznev. Criteria for checking the deviation of the distribution from the normal law. Application Guide. ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1896110.

Full text
Abstract:
The monograph discusses the application of statistical criteria aimed at testing the hypothesis that the analyzed data follow the normal law of probability distribution. Special criteria, nonparametric goodness-of-fit criteria, and criteria of the χ2 type are considered and compared. The disadvantages and advantages of the various criteria are indicated. Tables containing the percentage points and statistical distribution models necessary for the correct application of the criteria are given. In comparison with the first edition, the set of special normality criteria considered has been significantly expanded. The entire set of criteria is ranked by power relative to a number of closely competing hypotheses, which facilitates the selection of the most preferable criteria. It is shown that in applications the properties of the criteria can change significantly due to the presence of rounding errors, and this must be taken into account when forming statistical conclusions. Following the recommendations when analyzing data will ensure the correctness of statistical conclusions and increase their validity. The book is intended for specialists who, in one way or another, encounter in their activities issues of statistical data analysis, processing of experimental results, and the use of statistical methods to analyze various aspects and trends of the surrounding reality. It will be useful for engineers, researchers, specialists of various profiles (physicians, biologists, sociologists, economists, etc.), university teachers, graduate students, and students.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Rounding error analysis"

1

Isychev, Anastasia, and Eva Darulova. "Scaling up Roundoff Analysis of Functional Data Structure Programs." In Static Analysis, 371–402. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44245-2_17.

Full text
Abstract:
Floating-point arithmetic is counter-intuitive due to inherent rounding errors that potentially occur at every arithmetic operation. A selection of automated tools now exists to ensure correctness of floating-point programs by computing guaranteed bounds on rounding errors at the end of a computation, but these tools effectively consider only straight-line programs over scalar variables. Much of numerical codes, however, use data structures such as lists, arrays or matrices and loops over these. To analyze such programs today, all data structure operations need to be unrolled, manually or by the analyzer, reducing the analysis to straight-line code, ultimately limiting the analyzers' scalability. We present the first rounding error analysis for numerical programs written over vectors and matrices that leverages the data structure information to speed up the analysis. We facilitate this with our functional domain-specific input language that we design based on a new set of numerical benchmarks that we collect from a variety of domains. Our DSL explicitly carries semantic information that is useful for avoiding duplicate and thus unnecessary analysis steps, as well as enabling abstractions for further speed-ups. Compared to unrolling-based approaches in state-of-the-art tools, our analysis retains adequate accuracy and is able to analyze more benchmarks or is significantly faster, and particularly scales better for larger programs.
APA, Harvard, Vancouver, ISO, and other styles
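The "guaranteed bounds on rounding errors" such analyzers compute are, for a dot product, the classical a priori bound |fl(xᵀy) − xᵀy| ≤ γₙ · Σ|xᵢyᵢ| with γₙ = nu/(1 − nu). A sketch checking this standard bound (a textbook result, not the chapter's analysis) against exact rational arithmetic:

```python
from fractions import Fraction

U = 2.0 ** -53  # unit roundoff for IEEE double precision

def dot(xs, ys):
    s = 0.0
    for a, b in zip(xs, ys):
        s += a * b
    return s

def dot_error_bound(xs, ys):
    """A priori bound gamma_n * sum|x_i y_i|, gamma_n = n*u / (1 - n*u)."""
    n = len(xs)
    gamma = n * U / (1.0 - n * U)
    return gamma * sum(abs(a * b) for a, b in zip(xs, ys))

xs = [0.1 * k + 0.3 for k in range(50)]
ys = [1.0 / (k + 3) for k in range(50)]
exact = sum(Fraction(a) * Fraction(b) for a, b in zip(xs, ys))
actual_err = abs(Fraction(dot(xs, ys)) - exact)
bound = dot_error_bound(xs, ys)
```

The point of the chapter is that computing such bounds for whole programs over vectors and matrices, without unrolling every operation, is what requires care.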
2

Hartmanns, Arnd. "Correct Probabilistic Model Checking with Floating-Point Arithmetic." In Tools and Algorithms for the Construction and Analysis of Systems, 41–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_3.

Full text
Abstract:
Probabilistic model checking computes probabilities and expected values related to designated behaviours of interest in Markov models. As a formal verification approach, it is applied to critical systems; thus we trust that probabilistic model checkers deliver correct results. To achieve scalability and performance, however, these tools use finite-precision floating-point numbers to represent and calculate probabilities and other values. As a consequence, their results are affected by rounding errors that may accumulate and interact in hard-to-predict ways. In this paper, we show how to implement fast and correct probabilistic model checking by exploiting the ability of current hardware to control the direction of rounding in floating-point calculations. We outline the complications in achieving correct rounding from higher-level programming languages, describe our implementation as part of the Modest Toolset's model checker, and exemplify the tradeoffs between performance and correctness in an extensive experimental evaluation across different operating systems and CPU architectures.
APA, Harvard, Vancouver, ISO, and other styles
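Without access to the FPU rounding mode from pure Python, one can imitate directed rounding by nudging each partial result one ulp outward with math.nextafter (Python ≥ 3.9). This is a hedged software stand-in for the hardware mechanism the paper exploits, not its implementation:

```python
import math

def down(x):
    return math.nextafter(x, -math.inf)

def up(x):
    return math.nextafter(x, math.inf)

def bracket_sum(ps):
    """Enclose the exact sum of the given floats: every partial result is
    nudged one ulp outward, mimicking round-toward-minus/plus-infinity."""
    lo = hi = 0.0
    for p in ps:
        lo = down(lo + p)
        hi = up(hi + p)
    return lo, hi

# Bracket an accumulated probability mass from below and above.
probs = [0.1] * 10
lo, hi = bracket_sum(probs)
```

The true sum of the stored floats is guaranteed to lie in [lo, hi]; hardware directed rounding gives the same guarantee at no extra cost per operation.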
3

"Part IV Rounding Error." In Numerical Analysis, 167. Society for Industrial and Applied Mathematics, 1990. http://dx.doi.org/10.1137/1.9781611971323.pt4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"9. Rounding Error for Gaussian Elimination." In Numerical Analysis, 169–93. Society for Industrial and Applied Mathematics, 1990. http://dx.doi.org/10.1137/1.9781611971323.ch9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Frechtling, Michael, and Philip H. W. Leong. "An FPGA-Based Floating Point Unit for Rounding Error Analysis." In Transforming Reconfigurable Systems, 39–56. IMPERIAL COLLEGE PRESS, 2015. http://dx.doi.org/10.1142/9781783266975_0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Garoche, Pierre-Loïc. "Floating-point Semantics of Analyzed Programs." In Formal Verification of Control System Software, 167–90. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691181301.003.0009.

Full text
Abstract:
This chapter focuses on floating-point semantics. It first outlines these semantics. The chapter then revisits previous results and adapts them to account for floating-point computations, assuming a bound on the rounding error is provided. A last part focuses on approaches to bound these imprecisions by over-approximating the floating-point errors. Here, given bounds on each variable, computing the floating-point error can be performed with classical interval-based analysis; Kleene iterations with the interval abstract domain provide the appropriate framework to compute such bounds. This is even simpler in this setting because the focus is on bounding the floating-point error of a single call of the dynamic system's transition function, that is, a single loop-body execution without internal loops.
APA, Harvard, Vancouver, ISO, and other styles
7

Olver, F. W. J. "Rounding errors in algebraic processes in level-index arithmetic." In Reliable Numerical Computation, 197–206. Oxford: Oxford University Press, 1990. http://dx.doi.org/10.1093/oso/9780198535645.003.0012.

Full text
Abstract:
The level-index number system represents numbers in a computer by their repeated logarithms. Its chief advantage is closure in finite-precision arithmetic, thereby eradicating the problems of overflow and underflow. This talk indicates how a Wilkinson-type running error analysis can be carried out in the new system.
APA, Harvard, Vancouver, ISO, and other styles
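The level-index representation described above is easy to sketch: take logarithms until the value drops below 1, and record the count (level) plus the remainder (index). A minimal illustration for non-negative values, ignoring the generalized form for reciprocals:

```python
import math

def to_level_index(x):
    """psi(x) = level + index, where x = exp(exp(...exp(index)))
    with 'level' nested exponentials and 0 <= index < 1. Assumes x >= 0."""
    level = 0
    while x >= 1.0:
        x = math.log(x)
        level += 1
    return level + x

def from_level_index(psi):
    level = int(psi)
    x = psi - level
    for _ in range(level):
        x = math.exp(x)
    return x
```

Even 1e300 maps to the modest value psi ≈ 4.63, which is why the system closes over finite precision: iterated logarithms compress any representable magnitude into a small range.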
8

Kurz, V., and F. Stummel. "Rounding Error Analysis of Elimination Methods for Unsymmetric Two-Point Boundary Value Problems." In Zeitschrift für Angewandte Mathematik und Mechanik Volume 66, Number 5, 415–17. De Gruyter, 1986. http://dx.doi.org/10.1515/9783112550946-063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Earl, Richard. "Should I believe my computer?" In Mathematical Analysis: A Very Short Introduction, 65–82. Oxford University PressOxford, 2023. http://dx.doi.org/10.1093/actrade/9780198868910.003.0004.

Full text
Abstract:
Scientists use mathematics in much of their work, but the theories of an ideal mathematical world cannot immediately be brought to bear on the real one. A sense of reality can only be achieved via experimentation, but how does a scientist move from a collection of experimental data to wholly defined functions? A likely further problem is that—once posed—a real-world problem won't have an exact answer. How do scientists and mathematicians gain approximate answers? These are topics within 'numerical analysis', the theme of 'Should I believe my computer?' This question highlights that an algorithm may find an approximate answer by running for thousands of steps; given the problems of rounding errors, how much should we believe a computer's answer?
APA, Harvard, Vancouver, ISO, and other styles
10

Steiner, Erich. "Numerical methods." In The Chemistry Maths Book. Oxford University Press, 2008. http://dx.doi.org/10.1093/hesc/9780199205356.003.0020.

Full text
Abstract:
This chapter focuses on methods for obtaining the solution of a mathematical problem in the form of numbers. It discusses the general principles underlying some of the more important numerical methods and treats the simplest ones in detail. The discussion emphasizes that nearly all numerical operations are necessarily accompanied by errors, and that the analysis of these errors is an integral part of any numerical method. The chapter lays out the three ways in which errors in numerical computations arise: through mistakes (e.g., a bug in a computer program or an incorrectly calibrated apparatus), mathematical truncation errors, and rounding errors. It also tackles the solution of ordinary equations, describing the bisection and Newton-Raphson methods, and explains polynomial, linear, quadratic, and spline interpolation. Furthermore, it covers Gauss elimination for the solution of linear equations and Gauss–Jordan elimination for the inverse of a matrix.
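The two root-finding methods the chapter describes can be sketched side by side (a minimal version, applied to the illustrative equation x² = 2 with root √2; the function and tolerances are my choices, not the book's).

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: halve the bracketing interval [a, b] until shorter than tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # sign change in [a, m]: keep the left half
            b = m
        else:                # otherwise the root lies in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
print(bisect(f, 1.0, 2.0))   # ~1.414213562...
print(newton(f, df, 1.0))    # ~1.414213562...
```

Bisection gains one binary digit per step but is unconditionally convergent on a sign-changing bracket; Newton-Raphson roughly doubles the correct digits per step but needs a derivative and a good starting point, which is the trade-off such chapters typically emphasize.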
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Rounding error analysis"

1

Kellison, Ariel, Mohit Tekriwal, Jean-Baptiste Jeannin, and Geoffrey Hulette. "Towards Verified Rounding Error Analysis for Stationary Iterative Methods." In 2022 IEEE/ACM Sixth International Workshop on Software Correctness for HPC Applications (Correctness). IEEE, 2022. http://dx.doi.org/10.1109/correctness56720.2022.00007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Holý, Vladimír. "How big is the rounding error in financial high-frequency data?" In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS (ICNAAM 2017). Author(s), 2018. http://dx.doi.org/10.1063/1.5044146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Iwahashi, Masahiro, and Hitoshi Kiya. "Finite word length error analysis based on basic formula of rounding operation." In 2008 International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS 2008). IEEE, 2009. http://dx.doi.org/10.1109/ispacs.2009.4806763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Ming, Yonghong Wang, and Huafen Song. "Sensitivity Analysis and Numerical Stability Analysis of the Algorithms for Predicting the Performance of Turbines." In ASME Turbo Expo 2013: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/gt2013-94482.

Full text
Abstract:
Sensitivity and numerical stability are two of the most important criteria for evaluating an algorithm's performance. All published turbine flow models except the Wang method can be classed as the 'top-down' method (TDM), in which the performance of turbines is calculated from the first stage to the last stage, row by row; only the Wang method, originally proposed by Yonghong Wang, can be classed as the 'bottom-up' method (BUM), in which the performance is calculated from the last stage to the first stage, row by row. To find out why the stability of the two methods differs so greatly, the Wang flow model is studied; the model readily applies to both TDM and BUM. How the stability of the two algorithms is affected by input error and rounding error is analyzed, and the error propagation and distribution in the two methods are obtained. To explain the problem more intuitively, the stability of the two methods is also described geometrically. To compare with known data, the performance of a particular type of turbine is calculated through a series of procedures based on the two algorithms. The results are as follows. The closer the operating point approaches the critical point, the poorer the stability of TDM; the poor stability can even cause the TDM calculation to fail. BUM, in contrast, has not only good stability but also high accuracy. The result provides an accurate and reliable method (BUM) for estimating the performance of turbines, and it applies to all one-dimensional performance calculation methods for turbines.
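The forward-versus-backward contrast the abstract draws has a classic textbook analogue (not from the paper itself): the integrals Iₙ = ∫₀¹ xⁿ eˣ⁻¹ dx satisfy Iₙ = 1 − n·Iₙ₋₁. Sweeping this recurrence "top-down" multiplies any rounding error by n at every step, while sweeping it "bottom-up" divides the error by n.

```python
import math

def forward(n):
    """Top-down sweep: start from I_0 and recur upward (unstable)."""
    I = 1.0 - 1.0 / math.e      # I_0, already carrying a tiny rounding error
    for k in range(1, n + 1):
        I = 1.0 - k * I         # the error grows roughly like n!
    return I

def backward(n, extra=15):
    """Bottom-up sweep: start from a crude guess well past n (stable)."""
    I = 0.0                     # deliberately wrong starting value
    for k in range(n + extra, n, -1):
        I = (1.0 - I) / k       # the starting error shrinks like 1/(n!)
    return I

print(forward(20))   # wildly wrong: the initial rounding error exploded
print(backward(20))  # small, positive, accurate to near machine precision
```

The sweep direction, not the recurrence itself, decides whether perturbations are amplified or damped, mirroring the TDM/BUM result reported in the paper.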
APA, Harvard, Vancouver, ISO, and other styles
5

Gao, Ming, and Ravi Krishnamurthy. "Investigate Performance of Current In-Line Inspection Technologies for Dents and Dent Associated With Metal Loss Damage Detection." In 2010 8th International Pipeline Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ipc2010-31409.

Full text
Abstract:
Integrity management of dent and dent associated with metal loss requires knowledge of in-line inspection (ILI) technologies, government regulations and industry codes, prescriptive requirements, and most importantly assessment models to estimate severity of the mechanical damage. The assessment models have greatly relied on the assumed capabilities of current ILI technologies to detect, discriminate and size the mechanical damage. Therefore, an investigation of the current ILI technologies and validation of their capabilities are practically important. In this paper, the current status of ILI technologies for dent and dent with metal loss is reviewed. Validation data provided by ILI inspection vendors and pipeline operators are analyzed in terms of probability of detection (POD), probability of identification (POI), probability of false call (POFC), and sizing accuracy using binomial probability distribution and confidence interval methods. Linear regression analysis is also performed to determine sizing error bands. High resolution pull test data validated with LaserScan 3-D mapping technology is used to demonstrate a better evaluation of ILI performance with minimized in-ditch measurement errors and the effect of change in dent geometry and dimension due to re-bounding and re-rounding. Issues associated with field measurement and improvement are discussed.
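The binomial probability-of-detection analysis mentioned above can be illustrated with a short sketch. The paper's exact interval method is not specified here; this version uses the Wilson score interval for a binomial proportion, and the counts are invented for illustration.

```python
import math

def pod_wilson(detected, total, z=1.96):
    """Point estimate of POD plus a ~95% Wilson score confidence interval."""
    p = detected / total
    denom = 1.0 + z * z / total
    centre = (p + z * z / (2.0 * total)) / denom
    half = (z / denom) * math.sqrt(
        p * (1.0 - p) / total + z * z / (4.0 * total * total)
    )
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical pull-test result: 45 of 50 seeded dents detected by the tool
p, lo, hi = pod_wilson(detected=45, total=50)
print(f"POD = {p:.2f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The interval width makes the sample-size issue visible: with only 50 features, even a 90% detection rate cannot demonstrate the POD ≥ 0.9 performance targets often quoted for ILI tools.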
APA, Harvard, Vancouver, ISO, and other styles
6

Neuhäuser, Karl, and Rudibert King. "Robust Active Flow Control of a Stator Cascade With Integer Control Functions and Sum-Up Rounding." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91249.

Full text
Abstract:
This work is part of a research initiative that aims at increasing overall gas turbine efficiency by means of constant volume combustion (CVC). For that purpose, flow control in the compressor becomes important, since unsteady combustion effects that may occur in a CVC are very likely to affect the stability and efficiency of the compressor negatively due to flow disturbances. Active Flow Control (AFC) often has to deal with uncertain flow conditions, e.g., due to turbulence, varying operating ranges, or simply environmental effects. System parameters such as the gain or time constants of the system model thereby also become uncertain, making it difficult for control algorithms to ensure optimality or even stable behavior. Robust control in the sense of ℋ∞ control tackles these problems using an uncertainty description and a nominal model of the system. In this contribution, robust control applied to a linear stator cascade is addressed when only a binary control output from solenoid valves is available. Moreover, a surrogate control variable is proposed, describing the extent of the velocity deficit; by means of a principal component analysis, this control variable is reconstructed from a single measurement input. AFC is realized via trailing edge blowing. In comparison to proportional valves, solenoid valves are cheaper and offer faster switching times, with the drawback of a control output restricted to integer or even binary values. Since the ℋ∞ controller, like most other control algorithms, produces a real-valued signal u(t) ∈ ℝ, a sum-up rounding strategy is applied to the controller output, forming a binary control output ub(t) ∈ {0, 1}. Although the two outputs can never match completely unless both are integer-valued, the difference between the real-valued and binary outputs is provably bounded in its integral value.
The investigations show that a switching frequency of the valves of 100 Hz is sufficient to ensure that the control error under binary control matches its expected equivalent under real-valued control for the presented system.
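The sum-up rounding strategy described in the abstract can be sketched as follows (an assumed minimal form, not necessarily the authors' implementation): emit a 1 whenever the running integral of the real-valued signal gets at least half a sample ahead of the pulses already emitted.

```python
def sum_up_rounding(u):
    """Map real-valued samples u[k] in [0, 1] to binary ub[k] in {0, 1}
    so that the running integrals of the two signals stay close."""
    ub, acc, emitted = [], 0.0, 0.0
    for uk in u:
        acc += uk                   # integral of the real-valued signal
        if acc - emitted >= 0.5:    # emitted pulses lag by half a sample
            ub.append(1)
            emitted += 1.0
        else:
            ub.append(0)
    return ub

u = [0.3] * 10                      # constant 30% demand from the controller
ub = sum_up_rounding(u)
print(ub)                           # pulse pattern at roughly 30% duty
print(sum(u), sum(ub))              # the two integrals stay close
```

By construction the integral mismatch never exceeds half a sample, which is the boundedness property the abstract invokes; raising the valve switching rate (the paper's 100 Hz) shrinks the sample length and hence the absolute deviation.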
APA, Harvard, Vancouver, ISO, and other styles
