Journal articles on the topic "Rounding error analysis"

To see the other types of publications on this topic, follow this link: Rounding error analysis.

Consult the top 50 journal articles for your research on the topic "Rounding error analysis".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and a citation of the selected work is generated automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse journal articles from many fields of study and compile your bibliography correctly.

1

Connolly, Michael P., and Nicholas J. Higham. "Probabilistic Rounding Error Analysis of Householder QR Factorization". SIAM Journal on Matrix Analysis and Applications 44, No. 3 (July 28, 2023): 1146–63. http://dx.doi.org/10.1137/22m1514817.

2

Kolomys, Olena, and Liliya Luts. "Algorithm for Calculating Primary Spectral Density Estimates Using FFT and Analysis of its Accuracy". Cybernetics and Computer Technologies, No. 2 (September 30, 2022): 52–57. http://dx.doi.org/10.34229/2707-451x.22.2.5.

Abstract:
Introduction. Fast algorithms for solving problems of spectral and correlation analysis of random processes began to appear mainly after 1965, when the fast Fourier transform (FFT) algorithm entered computational practice. With its appearance, a number of computational algorithms for the accelerated solution of digital signal processing problems were developed, including speed-efficient algorithms for calculating such estimates of the probabilistic characteristics of control objects as convolutions, correlation functions, and spectral densities of stationary and some types of non-stationary random processes.

The purpose of the article is to study a speed-efficient algorithm for calculating the primary estimate of the spectral density of stationary ergodic random processes with zero mean. Most often, the direct Fourier transform method using the FFT algorithm is used to calculate it. The article continues the research and substantiation of this method in the direction of obtaining better estimates of rounding errors.

Results. The research and substantiation of the method has been continued in the direction of obtaining higher-quality estimates of rounding errors that take into account the errors in the input data. The main characteristics of the given algorithm for calculating the primary estimate of the spectral density are accuracy and computational complexity. The main attention is paid to the error estimates that accompany the calculation of the primary estimate of the spectral density. Estimates are obtained of the rounding error and the irreducible error of the algorithm that arise when it is implemented with the classical rounding rule in floating-point mode with τ digits in the significand, taking the input error into account.

Conclusions. The obtained results make it possible to diagnose the quality of the solution to the problem of calculating the primary estimate of the spectral density of stationary ergodic random processes with zero mean by the described method, and to choose the parameters of the algorithm that will ensure the required accuracy of the approximate solution.

Keywords: primary estimation of spectral density, fast Fourier transform, discrete Fourier transform, rounding error, input error.
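The direct (periodogram) method that this abstract analyzes can be sketched in a few lines of NumPy; the function name and normalization below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def primary_psd_estimate(x, dt=1.0):
    """Primary spectral density estimate of a zero-mean sequence via the FFT
    (the direct Fourier transform / periodogram method)."""
    n = len(x)
    X = np.fft.rfft(x)                  # FFT of the sample record
    psd = (dt / n) * np.abs(X) ** 2     # periodogram ordinates
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, psd

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)           # zero-mean white noise, variance 1
f, p = primary_psd_estimate(x)          # p.mean() should be close to 1
```

Each floating-point butterfly in the FFT contributes a rounding error on the order of 2⁻ᵗ for a τ-digit significand, and the input samples carry their own errors; bounds of exactly this kind are what the paper quantifies.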
3

Connolly, Michael P., Nicholas J. Higham and Theo Mary. "Stochastic Rounding and Its Probabilistic Backward Error Analysis". SIAM Journal on Scientific Computing 43, No. 1 (January 2021): A566–A585. http://dx.doi.org/10.1137/20m1334796.

4

Cuyt, Annie, and Paul Van der Cruyssen. "Rounding error analysis for forward continued fraction algorithms". Computers & Mathematics with Applications 11, No. 6 (June 1985): 541–64. http://dx.doi.org/10.1016/0898-1221(85)90037-9.

5

Higham, Nicholas J., and Theo Mary. "A New Approach to Probabilistic Rounding Error Analysis". SIAM Journal on Scientific Computing 41, No. 5 (January 2019): A2815–A2835. http://dx.doi.org/10.1137/18m1226312.

6

Zou, Qinmeng. "Probabilistic Rounding Error Analysis of Modified Gram–Schmidt". SIAM Journal on Matrix Analysis and Applications 45, No. 2 (May 21, 2024): 1076–88. http://dx.doi.org/10.1137/23m1585817.

7

Mezzarobba, Marc. "Rounding error analysis of linear recurrences using generating series". ETNA - Electronic Transactions on Numerical Analysis 58 (2023): 196–227. http://dx.doi.org/10.1553/etna_vol58s196.

8

Kiełbasiński, Andrzej. "A note on rounding-error analysis of Cholesky factorization". Linear Algebra and its Applications 88-89 (April 1987): 487–94. http://dx.doi.org/10.1016/0024-3795(87)90121-2.

9

Baghdad Science Journal. "A Note on the Perturbation of arithmetic expressions". Baghdad Science Journal 13, No. 1 (March 6, 2016): 190–97. http://dx.doi.org/10.21123/bsj.13.1.190-197.

Abstract:
In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under:
• approximations in "built-in" functions;
• rounding errors in arithmetic floating-point operations;
• perturbations of data.
The error analysis is based on a linearization method. The fundamental tools of the forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori and a posteriori error analysis.
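The flavor of such an a posteriori forward error analysis can be illustrated on a linear solve, where a condition number bounds how the computed residual is amplified into a solution error. This is a generic sketch of the idea, not the paper's own formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true

x = np.linalg.solve(A, b)          # Gaussian elimination (LU with pivoting)

# A posteriori bound: since x - x_true = A^{-1} r,
#   ||x - x_true|| / ||x_true|| <= cond(A) * ||r|| / (||A|| * ||x_true||)
r = b - A @ x
bound = np.linalg.cond(A, 2) * np.linalg.norm(r) / (
    np.linalg.norm(A, 2) * np.linalg.norm(x_true))
actual = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In practice the unknown ||x_true|| is replaced by the computed ||x||, and the inequality then holds to first order; the condition number plays exactly the role of an error amplifier described in the abstract.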
10

Rudikov, D. A., and A. S. Ilinykh. "Error analysis of the cutting machine step adjustable drive". Journal of Physics: Conference Series 2131, No. 2 (December 1, 2021): 022046. http://dx.doi.org/10.1088/1742-6596/2131/2/022046.

Abstract:
The implementation precision of the adjustment series of a metal-cutting machine is one of the most important indicators of its quality, strictly standardized by industry standards and by the technical conditions for manufacturing and acceptance. The limit on the error is set depending on the denominator of the series used. An essential feature of the precision of the implemented series is that it is determined not by errors in the parts' manufacturing but by the shortcomings of the kinematic calculation method used. The established modes largely determine the efficiency of processing on metal-cutting machines: if an underestimated mode is set, productivity is reduced accordingly, while overestimating the mode reduces durability and causes losses due to increased regrinding and tool changes. The aim is the creation of a complex of mathematical models for the design kinematic calculation of the main movement drive of metal-cutting machines, which allows reducing the error in the implementation of a series of preferred numbers and increasing machining precision. The article provides a mathematical complex for analyzing the components of the total error, which makes it possible to determine and evaluate with high precision the total error of the drive of a metal-cutting machine from its constituent values: errors of the permanent part, errors of the multiplier part, rounding errors of standard numbers, and errors in the electric motor and belt transmission. The presented complex helps to identify the role of the rounding error of preferred numbers in the formation of the total relative error and makes it possible to reduce it, which solves the problem of increasing the precision of the step adjustable drive. The mathematical complex also opens a fundamentally new opportunity to create a scientific base and to develop algorithms and programs for the engineering calculation of tables that facilitate the selection of the numbers of teeth for multiple groups and structures while guaranteeing high precision of the implemented series.
11

Paxton, E. Adam, Matthew Chantry, Milan Klöwer, Leo Saffin and Tim Palmer. "Climate Modeling in Low Precision: Effects of Both Deterministic and Stochastic Rounding". Journal of Climate 35, No. 4 (February 15, 2022): 1215–29. http://dx.doi.org/10.1175/jcli-d-21-0343.1.

Abstract:
Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system, a shallow water approximation for flow over a ridge, and a coarse-resolution spectral global atmospheric model with simplified parameterizations (SPEEDY). Although double precision [52 significant bits (sbits)] is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the code with negligible rounding error, and with 10 sbits if minor errors are accepted, amounting to less than 0.1 mm (6 h)⁻¹ for average gridpoint precipitation, for example. Our test is based on the Wasserstein metric and this provides stringent nonparametric bounds on rounding error accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR) we find that SR can mitigate rounding error across a range of applications, and thus our results also provide some evidence that SR could be relevant to next-generation climate models. Further research is needed to test if our results can be generalized to higher resolutions and alternative numerical schemes. However, the results open a promising avenue toward the use of low-precision hardware for improved climate modeling.

Significance Statement. Weather and climate models provide vital information for decision-making, and will become ever more important in the future with a changed climate and more extreme weather. A central limitation to improved models is computational resources, which is why some weather forecasters have recently shifted from conventional 64-bit to more efficient 32-bit computations, which can provide equally accurate forecasts. Climate models, however, still compute in 64 bits, and adapting to lower precision requires a detailed analysis of rounding errors. We develop methods to quantify rounding error in a climate model, and find similar precision acceptable across weather and climate models, with even 16 bits often sufficient for an accurate climate. This opens a promising avenue for computational efficiency gains in climate modeling.
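Stochastic rounding of the kind tested in this paper can be emulated in software. The sketch below is a hedged illustration for positive inputs only; the function name and the 10-sbit choice are mine, and real experiments use dedicated emulation libraries rather than this toy:

```python
import numpy as np

def stochastic_round(x, rng, sbits=10):
    """Round positive values to `sbits` significand bits, rounding up with
    probability equal to the fractional distance past the lower neighbour,
    so the result is unbiased in expectation."""
    m, e = np.frexp(x)                          # x = m * 2**e with 0.5 <= m < 1
    scaled = m * (1 << sbits)
    lo = np.floor(scaled)
    round_up = rng.random(np.shape(x)) < (scaled - lo)
    return np.ldexp((lo + round_up) / (1 << sbits), e)

rng = np.random.default_rng(0)
x = np.full(100_000, 1.0 / 3.0)
sr_error = abs(stochastic_round(x, rng).mean() - 1.0 / 3.0)

# round-to-nearest at the same precision keeps a fixed bias (~2e-4 here),
# while averaging many stochastically rounded copies drives the error down
m, e = np.frexp(1.0 / 3.0)
rn_error = abs(np.ldexp(np.round(m * 1024) / 1024, e) - 1.0 / 3.0)
```

This bias cancellation is exactly why SR mitigates the slow accumulation of rounding error in long climate integrations.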
12

Yang, L. Minah, Alyson Fox and Geoffrey Sanders. "Rounding Error Analysis of Mixed Precision Block Householder QR Algorithms". SIAM Journal on Scientific Computing 43, No. 3 (January 2021): A1723–A1753. http://dx.doi.org/10.1137/19m1296367.

13

Neumaier, A. "Inner product rounding error analysis in the presence of underflow". Computing 34, No. 4 (December 1985): 365–73. http://dx.doi.org/10.1007/bf02251835.

14

Giraud, Luc, Julien Langou, Miroslav Rozložník and Jasper van den Eshof. "Rounding error analysis of the classical Gram-Schmidt orthogonalization process". Numerische Mathematik 101, No. 1 (May 25, 2005): 87–100. http://dx.doi.org/10.1007/s00211-005-0615-4.

15

Iiguni, Youji, Hideaki Sakai and Hidekatsu Tokumaru. "Rounding error analysis of the triangular lattice and escalator algorithms". Electronics and Communications in Japan (Part I: Communications) 71, No. 4 (April 1988): 15–25. http://dx.doi.org/10.1002/ecja.4410710402.

16

Holstein, Horst, and Ben Ketteridge. "Gravimetric analysis of uniform polyhedra". GEOPHYSICS 61, No. 2 (March 1996): 357–64. http://dx.doi.org/10.1190/1.1443964.

Abstract:
Analytical formulas for the gravity anomaly of a uniform polyhedral body are subject to numerical error that increases with distance from the target, while the anomaly decreases. This leads to a limited range of target distances in which the formulas are operational, beyond which the calculations are dominated by rounding error. We analyze the sources of error and propose a combination of numerical and analytical procedures that exhibit advantages over existing methods, namely (1) errors that diminish with distance, (2) enhanced operating range, and (3) algorithmic simplicity. The latter is achieved by avoiding the need to transform coordinates and the need to discriminate between projected observation points that lie inside, on, or outside a target facet boundary. Our error analysis is verified in computations based on a published code and on a code implementing our methods. The former requires a numerical precision of one part in [Formula: see text] (double precision) in problems of geophysical interest, whereas our code requires a precision of one part in [Formula: see text] (single precision) to give comparable results, typically in half the execution time.
17

Wang, Miao, Weifeng Liu and Chenglin Wen. "A High-Order Kalman Filter Method for Fusion Estimation of Motion Trajectories of Multi-Robot Formation". Sensors 22, No. 15 (July 26, 2022): 5590. http://dx.doi.org/10.3390/s22155590.

Abstract:
Multi-robot motion and observation generally have nonlinear characteristics. In response to the problem that the extended Kalman filter (EKF) algorithm used in existing robot position estimation considers only the first-order expansion and ignores higher-order information, this paper proposes a joint estimation method for multi-robot formation trajectories based on a high-order Kalman filter. The method applies a Taylor expansion to the state and observation equations and introduces remainder variables on this basis, which effectively improves the estimation accuracy. In addition, the truncation error and rounding error of the filtering algorithm before and after the introduction of the remainder variables are compared. Our analysis shows that the rounding error is much smaller than the truncation error, and the nonlinear estimation performance is greatly improved.
18

Lu, Mi. "Runtime accuracy alterable approximate floating-point multipliers". International Robotics & Automation Journal 8, No. 2 (2022): 52–56. http://dx.doi.org/10.15406/iratj.2022.08.00244.

Abstract:
Modern systems demand high computational power within limited resources. Approximate computing is a promising approach to design arithmetic units with tight resources for error-tolerant applications such as image and signal processing and computer vision. A floating-point multiplier is one of the arithmetic units with the highest complexity in such applications. Designing a floating-point multiplier based on the approximate computing technique can reduce its complexity as well as increase performance and energy efficiency. However, an unknown error rate for upcoming input data is problematic to design appropriate approximate multipliers. The existing solution is to utilize an error estimator relying on statistical analysis. In this paper, we propose new approximate floating-point multipliers based on an accumulator and reconfigurable adders with an error estimator. Unlike previous designs, our proposed designs are able to change the levels of accuracy at runtime. Thus, we can make errors distributed more evenly. In contrast to other designs, our proposed design can maximize the performance gain since reconfigurable multipliers are able to operate two multiplications in parallel once the low accuracy mode is selected. Furthermore, we apply a simple rounding technique to approximate floating-point multipliers for additional improvement. Our simulation results reveal that our new method can reduce area by 70.98% when error tolerance margin of our target application is 5%, and when its error tolerance margin is 3%, our rounding enhanced simple adders-based approximate multiplier can save area by 65.9%, and our reconfigurable adder-based approximate multiplier with rounding can save the average delay and energy by 54.95% and 46.67% respectively compared to an exact multiplier.
19

Singer, Sanja, and Saša Singer. "Rounding-error and perturbation bounds for the indefinite QR factorization". Linear Algebra and its Applications 309, No. 1-3 (April 2000): 103–19. http://dx.doi.org/10.1016/s0024-3795(99)00156-1.

20

Singer, Sanja, and Saša Singer. "Rounding error and perturbation bounds for the symplectic QR factorization". Linear Algebra and its Applications 358, No. 1-3 (January 2003): 255–79. http://dx.doi.org/10.1016/s0024-3795(02)00263-x.

21

Cao, Zhi-Hao. "Rounding error analysis of two-stage iterative methods for large linear systems". Applied Mathematics and Computation 139, No. 2-3 (July 2003): 371–81. http://dx.doi.org/10.1016/s0096-3003(02)00201-1.

22

Yang, Fan, Songlin Liu, Yijun Lai, Jiayuan Hu and Shaohua Wang. "An Improved Magnetic Field Method to Locate the Grounding Conductor". Sensors 23, No. 8 (April 11, 2023): 3879. http://dx.doi.org/10.3390/s23083879.

Abstract:
The location of the grounding grid conductors is critical for corrosion diagnosis and maintenance work. This paper presents an improved magnetic field differential method to locate an unknown grounding grid, based on an analysis of truncation and round-off errors. It is shown that magnetic field derivatives of different orders can be used to determine the position of the grounding conductor from the peak value of the derivative. Because higher-order differentiation accumulates error, the truncation error and rounding error were used to analyze the accumulated error and to determine the optimal step size for measuring and calculating the higher-order derivatives. The possible range and probability distribution of the two kinds of errors at each order are described, and an index of the peak position error is derived, which can be used to locate the grounding conductor in a power substation.
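The trade-off the authors exploit is the classic one in numerical differentiation: truncation error shrinks with the step size while rounding error grows, so an optimal step exists between them. A generic illustration (not their field data):

```python
import numpy as np

# Central difference of sin at x = 1: truncation error ~ h**2 / 6 shrinks
# with h, while rounding error ~ eps / h grows as h shrinks. The total is
# minimised near h ~ eps**(1/3), the "optimal step size" balancing the two.
eps = np.finfo(float).eps
x, true = 1.0, np.cos(1.0)

def central_diff_error(h):
    return abs((np.sin(x + h) - np.sin(x - h)) / (2 * h) - true)

errs = {h: central_diff_error(h) for h in (1e-2, eps ** (1 / 3), 1e-11)}
# the mid-sized step wins: too-large h is limited by truncation,
# too-small h by rounding (catastrophic cancellation)
```

The same reasoning, carried out for the probability distribution of both error types at each derivative order, gives the paper's optimal measurement step.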
23

Korzilius, Stan, and Berry Schoenmakers. "Divisions and Square Roots with Tight Error Analysis from Newton–Raphson Iteration in Secure Fixed-Point Arithmetic". Cryptography 7, No. 3 (September 12, 2023): 43. http://dx.doi.org/10.3390/cryptography7030043.

Abstract:
In this paper, we present new variants of Newton–Raphson-based protocols for the secure computation of the reciprocal and the (reciprocal) square root. The protocols rely on secure fixed-point arithmetic with arbitrary precision parameterized by the total bit length of the fixed-point numbers and the bit length of the fractional part. We perform a rigorous error analysis aiming for tight accuracy claims while minimizing the overall cost of the protocols. Due to the nature of secure fixed-point arithmetic, we perform the analysis in terms of absolute errors. Whenever possible, we allow for stochastic (or probabilistic) rounding as an efficient alternative to deterministic rounding. We also present a new protocol for secure integer division based on our protocol for secure fixed-point reciprocals. The resulting protocol is parameterized by the bit length of the inputs and yields exact results for the integral quotient and remainder. The protocol is very efficient, minimizing the number of secure comparisons. Similarly, we present a new protocol for integer square roots based on our protocol for secure fixed-point square roots. The quadratic convergence of the Newton–Raphson method implies a logarithmic number of iterations as a function of the required precision (independent of the input value). The standard error analysis of the Newton–Raphson method focuses on the termination condition for attaining the required precision, assuming sufficiently precise floating-point arithmetic. We perform an intricate error analysis assuming fixed-point arithmetic of minimal precision throughout and minimizing the number of iterations in the worst case.
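The Newton-Raphson reciprocal iteration at the heart of such protocols can be sketched in plain (non-secure) fixed-point arithmetic. The rounding helper, bit width, and iteration count below are illustrative choices of mine, and none of the secure multiparty machinery is shown:

```python
from fractions import Fraction

def to_fixed(v, f=32):
    """Round a rational to f fractional bits (round-to-nearest)."""
    return Fraction(round(v * 2 ** f), 2 ** f)

def reciprocal(a, f=32, iters=6):
    """Approximate 1/a for a positive integer a without any division:
    x <- x * (2 - a * x) roughly doubles the number of correct bits
    per step (quadratic convergence)."""
    x = Fraction(1, 2 ** a.bit_length())    # initial guess, 1/(2a) <= x <= 1/a
    for _ in range(iters):
        x = to_fixed(x * (2 - a * x), f)    # fixed-point rounding each step
    return x

approx = reciprocal(7)                      # ~0.142857..., error below 2**-30
```

Quadratic convergence is why a logarithmic number of iterations suffices for any target precision; the hard part of the paper is bounding the fixed-point rounding term added by `to_fixed` at every step while keeping the precision minimal.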
24

Chesnel, Lucas, Xavier Claeys and Sergei A. Nazarov. "Oscillating behaviour of the spectrum for a plasmonic problem in a domain with a rounded corner". ESAIM: Mathematical Modelling and Numerical Analysis 52, No. 4 (July 2018): 1285–313. http://dx.doi.org/10.1051/m2an/2016080.

Abstract:
We investigate the eigenvalue problem −div(σ∇u) = λu (P) in a 2D domain Ω divided into two regions Ω±. We are interested in situations where σ takes positive values on Ω+ and negative ones on Ω−. Such problems appear in time harmonic electromagnetics in the modeling of plasmonic technologies. In a recent work [L. Chesnel, X. Claeys and S.A. Nazarov, Asymp. Anal. 88 (2014) 43–74], we highlighted an unusual instability phenomenon for the source term problem associated with (P): for certain configurations, when the interface between the subdomains Ω± presents a rounded corner, the solution may depend critically on the value of the rounding parameter. In the present article, we explain this property studying the eigenvalue problem (P). We provide an asymptotic expansion of the eigenvalues and prove error estimates. We establish an oscillatory behaviour of the eigenvalues as the rounding parameter of the corner tends to zero. We end the paper illustrating this phenomenon with numerical experiments.
25

Gemignani, Luca, and Grazia Lotti. "Rounding Error Analysis in Solving M-Matrix Linear Systems of Block Hessenberg Form". Numerical Algorithms 36, No. 2 (June 2004): 157–68. http://dx.doi.org/10.1023/b:numa.0000033130.75020.0e.

26

Wang, Weijie, and Yanmin Lu. "Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model". IOP Conference Series: Materials Science and Engineering 324 (March 2018): 012049. http://dx.doi.org/10.1088/1757-899x/324/1/012049.

27

Maryška, J., M. Rozložník and M. Tůma. "Schur complement reduction in the mixed-hybrid approximation of Darcy's law: rounding error analysis". Journal of Computational and Applied Mathematics 117, No. 2 (May 2000): 159–73. http://dx.doi.org/10.1016/s0377-0427(99)00344-1.

28

Goede, Simon L., and Melvin Khee-Shing Leow. "General Error Analysis in the Relationship between Free Thyroxine and Thyrotropin and Its Clinical Relevance". Computational and Mathematical Methods in Medicine 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/831275.

Abstract:
Background. This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH].
Objective. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment.
Design and Methods. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model.
Results. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplify curve-fitting errors in the [TSH] domain in the lower [FT4] range, (5) memory effects (rate-independent hysteresis effect).
Conclusions. When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which we can construct the best HP function and the related set point area.
29

Erna, G., G. Srihari, M. Purna Kishore, Ashok Nayak B. and M. Bharathi. "FPGA Implementation of High-Performance Truncated Rounding based Approximate Multiplier with High-Level Synchronous XOR-MUX Full Adder". WSEAS TRANSACTIONS ON CIRCUITS AND SYSTEMS 22 (December 4, 2023): 111–25. http://dx.doi.org/10.37394/23201.2023.22.13.

Abstract:
Rounding-based approximate signed and unsigned multipliers are an emerging field of research and development in digital signal processing and image processing. In the present research, we propose cutting-edge performance and logic-simplification techniques connected to processing discrete cosine transform (DCT) and discrete wavelet transform (DWT) images for sharpening. This technology yields a truncated shifter incorporated with logical XOR-MUX full adder techniques. A reliable and cost-effective approximate signed and unsigned multiplier was created for the rounding method. While this more advanced technology includes many approximate multipliers, it sacrifices the ability to find the closest integer of a rounded value when combining signed and unsigned capabilities, resulting in higher absolute errors than other rounding-based approximate multipliers. This work introduces a novel Truncated Shifter Rounding-based Approximate Multiplier integrated with a high-level synchronous XOR-MUX full adder design to minimize the number of logic gates and the power consumption in the multiplier architecture. The truncated RoBA (Rounding-based Approximate) multiplier with XOR-MUX full adder reduces the logic size in the shifter and the arithmetic circuit. The work modifies this rounding-based approximate multiplier to minimize area, delay, and power consumption. The proposed architecture integrates two fundamental changes: first, the barrel shifter is replaced with a truncated shifter multiplier with XOR-MUX full adder, and second, the parallel-prefix Brent-Kung adder is replaced with a carry-save adder with XOR-MUX full adder. Finally, the architecture was designed in Verilog-HDL and synthesized with the Xilinx Virtex-5 FPGA family, targeting the device Xc7Vx485tFFg1157-1. It resulted in a reduction of area (LUTs) by 34%, power by 1%, delay by 32%, and error by 75% compared to the existing RoBA.
30

Stolwijk, Jeroen J., and Volker Mehrmann. "Error Analysis and Model Adaptivity for Flows in Gas Networks". Analele Universitatii "Ovidius" Constanta - Seria Matematica 26, No. 2 (July 1, 2018): 231–66. http://dx.doi.org/10.2478/auom-2018-0027.

Abstract:
In the simulation and optimization of natural gas flow in a pipeline network, a hierarchy of models is used that employs different formulations of the Euler equations. While the optimization is performed on piecewise linear models, the flow simulation is based on the one to three dimensional Euler equations including the temperature distributions. To decide which model class in the hierarchy is adequate to achieve a desired accuracy, this paper presents an error and perturbation analysis for a two level model hierarchy including the isothermal Euler equations in semilinear form and the stationary Euler equations in purely algebraic form. The focus of the work is on the effect of data uncertainty, discretization, and rounding errors in the numerical simulation of these models and their interaction. Two simple discretization schemes for the semilinear model are compared with respect to their conditioning and temporal stepsizes are determined for which a well-conditioned problem is obtained. The results are based on new componentwise relative condition numbers for the solution of nonlinear systems of equations. Moreover, the model error between the semilinear and the algebraic model is computed, the maximum pipeline length is determined for which the algebraic model can be used safely, and a condition is derived for which the isothermal model is adequate.
31

Cools, Siegfried, Emrullah Fatih Yetkin, Emmanuel Agullo, Luc Giraud and Wim Vanroose. "Analyzing the Effect of Local Rounding Error Propagation on the Maximal Attainable Accuracy of the Pipelined Conjugate Gradient Method". SIAM Journal on Matrix Analysis and Applications 39, No. 1 (January 2018): 426–50. http://dx.doi.org/10.1137/17m1117872.

32

Baber, William R., and Sok-Hyon Kang. "The Impact of Split Adjusting and Rounding on Analysts' Forecast Error Calculations". Accounting Horizons 16, No. 4 (December 1, 2002): 277–89. http://dx.doi.org/10.2308/acch.2002.16.4.277.

Abstract:
This study finds that analysts' forecast data files, commonly used by accountants and financial analysts to estimate market expectations about earnings announcements, contain inaccurate historical data for companies that split their common stock. These inaccuracies result because stock split adjustments are made retrospectively and split-adjusted data are rounded. Moreover, because well-performing firms are more likely to execute stock splits, the consequences of the stock split problem are systematic, potentially distorting both time-series and cross-sectional characteristics of forecast errors. The analysis also demonstrates that the problem can influence interpretations of security price reactions to earnings announcements. To illustrate this point, we report evidence suggesting that errors induced by rounding split-adjusted data alter conclusions about how investors interpret earnings that meet, but do not exceed, the consensus forecast.
APA, Harvard, Vancouver, ISO and other citation styles
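As a toy illustration of the mechanism Baber and Kang describe, the sketch below (hypothetical numbers, not the study's data) shows how retrospectively split-adjusting per-share figures and rounding them to the nearest cent can distort a forecast error:

```python
# Hypothetical illustration: retrospective split adjustment plus
# rounding to cents distorts the recorded forecast error.

def split_adjust(value_cents, split_factor):
    """Divide a pre-split per-share value (in cents) by the split
    factor and round to the nearest cent, as data files typically do."""
    return round(value_cents / split_factor)

# Pre-split: forecast 123 cents, actual 125 cents -> error = 2 cents.
forecast, actual, split = 123, 125, 3   # a 3-for-1 split

adj_forecast = split_adjust(forecast, split)   # 41 cents
adj_actual = split_adjust(actual, split)       # 42 cents

# The true adjusted error is (125 - 123)/3 ~= 0.667 cents, but the
# rounded data imply 1 cent -- and scaling back up gives 3 cents,
# not the original 2: rounding has distorted the error measure.
print(adj_actual - adj_forecast)
print((adj_actual - adj_forecast) * split)
```

Because well-performing firms split more often, this distortion is systematic rather than random noise, which is the paper's central point.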
33

Wang, Dong Xing, and Su Chen. "Improved Subdivision Based Halftoning Algorithm". Advanced Materials Research 267 (June 2011): 368–71. http://dx.doi.org/10.4028/www.scientific.net/amr.267.368.

Full text of the source
Annotation:
The subdivision-based halftoning algorithm works in a statistical way. Analysis of its output shows that error accumulation and the rounding operation may produce large quantization errors, and artifacts may appear in the output. Several improvements are proposed. First, a procedure is added to detect cases with large quantization error and to correct them. Second, instead of propagating error sequentially as in the original algorithm, propagating error along random directions is tested to reduce artifacts in the output. Third, propagating error along the direction in which the quantization error of each subpart is minimal is also tested to improve output quality. Halftoning tests show that the improved algorithm produces clearer output than the original.
APA, Harvard, Vancouver, ISO and other citation styles
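The random-direction idea can be illustrated with a minimal 1D error-diffusion sketch (a generic toy, not the paper's subdivision algorithm): each pixel's quantization error is passed to its right-hand neighbour, and the processing order is optionally shuffled:

```python
import random

def diffuse_1d(pixels, order):
    """Quantize 8-bit pixel values to {0, 255}, passing each pixel's
    quantization error on to its right-hand neighbour."""
    work = list(pixels)
    result = [0] * len(work)
    for i in order:
        q = 255 if work[i] >= 128 else 0
        result[i] = q
        if i + 1 < len(work):
            work[i + 1] += work[i] - q   # propagate the rounding error
    return result

row = [100] * 16                          # a flat mid-grey row
seq = diffuse_1d(row, range(len(row)))    # fixed left-to-right scan

random.seed(0)
rnd_order = list(range(len(row)))
random.shuffle(rnd_order)                 # randomized processing order
rnd = diffuse_1d(row, rnd_order)

# A fixed scan tends to produce a periodic on/off texture; shuffling
# the order breaks that regularity, at the cost of some error being
# passed to pixels that were already quantized.
print(seq)
print(rnd)
```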
34

Guo, Haifeng, Yiyang Wang, Guangwei Wang, Zhongbo Du, Rui Chen and He Sun. "Artificial Intelligence Optimization Design Analysis of Robot Control System". Journal of Sensors 2022 (25.07.2022): 1–6. http://dx.doi.org/10.1155/2022/2235042.

Full text of the source
Annotation:
In order to improve the accuracy of a robot control system, a scheme based on artificial intelligence is proposed. On the basis of the software environment of a reinforcement-learning simulation platform, a rounding scheme in a dynamic environment is designed and simulated. The results show that when the inclination sensor is placed on a 30° incline and sampled ten times, the maximum measurement error seen in the experimental data is 0.4°. Included angles of 30°, 45°, 60°, and 90° were then measured with a compass sensor, averaging 5 measurements per angle. The experimental data show that the measurement error meets the requirements of the system. Therefore, it is feasible to use an artificial intelligence algorithm to optimize the robot control system.
APA, Harvard, Vancouver, ISO and other citation styles
35

Zylstra, Philip J., David B. Lindenmayer and S. Don Bradshaw. "Reply to Comment on 'Self-thinning forest understoreys reduce wildfire risk, even in a warming climate'". Environmental Research Letters 19, No. 5 (01.05.2024): 058001. http://dx.doi.org/10.1088/1748-9326/ad40c1.

Full text of the source
Annotation:
Abstract: Our previous analysis of mapped records of forest fires in National Parks in Southwestern Australia showed that fires initiated a pulse in flammability (the likelihood of a point being burned by wildfire), but that flammability declined as forests matured (Zylstra et al 2022 Environ. Res. Lett. 17 044022). This reduction in flammability was contrary to that expected from modelling used by the West Australian Government to guide management, but consistent with expectations from peer-reviewed fire behaviour science and published ecological drivers of fire behaviour. Miller et al (2024 Environ. Res. Lett.) argued that our reported decline in flammability of long-unburnt forest is an artefact of poor data quality, including flawed records kept by the West Australian Government, along with fewer and smaller sample sizes in long-unburnt forest. These problems, they claim, biased these age classes toward values of zero flammability due to a rounding error. Critically, Miller et al did not test their hypothesis by repeating the analysis with these data removed. Here, we show that their concerns rest on the mathematical fallacy that rounding errors occur in only one direction (rounding flammability down to zero), when they are equally likely to round upward and elevate flammability; the effect is to introduce noise rather than bias. We tested their hypothesis by repeating the analysis of Zylstra et al (2022) with a better-suited statistical method on an improved and expanded dataset, after removing the small patches that Miller et al proposed would bias the findings. Contrary to their objections, removing lower-quality data revealed that the mature forests were even less flammable than expected, so that only annual prescribed burning could reduce bushfire likelihood below that in forests unburnt for 56 years or more. Our findings highlight the role of prescribed burning in creating a more flammable landscape.
APA, Harvard, Vancouver, ISO and other citation styles
36

Lemeshko, Boris Yu., and Stanislav B. Lemeshko. "Nonparametric goodness-of-fit tests for normality testing under rounding-off measurements". Analysis and data processing systems, No. 2 (28.06.2022): 21–38. http://dx.doi.org/10.17212/2782-2001-2022-2-21-38.

Full text of the source
Annotation:
When analyzing measurement series in various applications, verifying that measurement errors follow the normal law is considered a mandatory procedure. For this purpose, various special tests for normality can be used, as well as nonparametric goodness-of-fit tests or chi-square tests. When using nonparametric goodness-of-fit tests to test normality, it must be taken into account that a composite hypothesis is being tested; in that case the distributions of the test statistics differ significantly from the classical ones that arise when testing simple hypotheses. It is known that the presence of rounding errors can significantly change the distribution of test statistics, and ignoring this influence can lead to incorrect conclusions from the normality test. In metrology, when carrying out high-precision measurements, the possible influence of the rounding step D on the results of statistical analysis is, as a rule, not even considered. This admits the possibility of incorrect conclusions, since the influence is absent only for values of D much smaller than the standard deviation s of the measurement error distribution and for sample sizes n not exceeding certain maximum values. For sample sizes larger than these maxima, the real distributions of the test statistics deviate from the asymptotic ones towards larger statistic values. In this work, based on real and well-known data and using statistical modeling methods, we demonstrate how the distributions of the statistics of nonparametric goodness-of-fit tests for normality depend on the ratio of D to s for specific n. The possibility of correctly applying the tests when rounding errors influence the conclusions is shown and implemented.
APA, Harvard, Vancouver, ISO and other citation styles
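The effect the authors study can be reproduced in miniature: rounding normal samples to a step D inflates the one-sample Kolmogorov-Smirnov statistic once D approaches the standard deviation (a sketch with simulated data, not the paper's datasets or modeling methodology):

```python
import math
import random

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(sample):
    """One-sample Kolmogorov-Smirnov statistic against N(0, 1)."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

stats = {}
for delta in (0.0, 0.1, 1.0):        # rounding step D (here sigma = 1)
    rounded = [round(x / delta) * delta if delta else x for x in data]
    stats[delta] = ks_statistic(rounded)
    print(delta, round(stats[delta], 3))
# As D grows toward sigma, the statistic is inflated well beyond what
# the classical (no-rounding) distribution of the test would suggest,
# so critical values based on that distribution become misleading.
```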
37

Vijeyakumar, K. N., S. Elango and S. Kalaiselvi. "VLSI Implementation of High Speed Energy-Efficient Truncated Multiplier". Journal of Circuits, Systems and Computers 27, No. 05 (06.02.2018): 1850077. http://dx.doi.org/10.1142/s0218126618500779.

Full text of the source
Annotation:
In this brief, we present the design and evaluation of a high-speed and energy-efficient truncated multiplier for unsigned multiplication, such that the average absolute error due to truncation and rounding is kept minimal. The proposed algorithm eliminates a few least significant partial product (PP) bits and adds a correction bias at appropriate PP bit positions to minimize the total error. From the literature reviewed, it is clear that there is scope for reducing delay in multiplication using the sutras of ancient Vedic mathematics. This work uses the simple "crosswise and vertical" sutra of Vedic mathematics to generate PP bits. The proposed methodology groups the input into [Formula: see text]/2-bit subgroups, eliminates the least significant subgroup multiplication ([Formula: see text]) and deletes a few least significant bits in the other subgroup multiplications to reduce area and power dissipation. In addition, correction biases are added at appropriate bit positions to reduce the overall absolute error due to the elimination of PP bits and the rounding of the final product. Experimental evaluation of the proposed truncated design is carried out through structural-level VHDL modeling and simulation using the Synopsys design compiler. Performance analysis revealed a Chip-Area Ratio (CAR%) of 33.81% and a Power-Delay Product (PDP) of 14.84 pJ for an [Formula: see text] multiplication.
APA, Harvard, Vancouver, ISO and other citation styles
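The core idea, dropping low partial-product columns and compensating with a fixed correction bias, can be sketched in software (an 8x8-bit toy model of generic truncation, not the paper's Vedic-sutra VLSI design):

```python
def truncated_mul(a, b, drop, bias):
    """8x8-bit unsigned multiply that discards all partial-product
    bits in the `drop` least significant columns, then adds a
    constant correction bias."""
    acc = bias
    for i in range(8):
        if (a >> i) & 1:
            for j in range(8):
                if i + j >= drop and (b >> j) & 1:
                    acc += 1 << (i + j)
    return acc

def avg_abs_error(drop, bias):
    """Average absolute error over all 8-bit operand pairs."""
    total = 0
    for a in range(256):
        for b in range(256):
            total += abs(a * b - truncated_mul(a, b, drop, bias))
    return total / 65536

# Dropping columns 0..3 removes 12.25 on average over uniform inputs
# (sum over columns c < 4 of (c+1) * 2^c / 4). A constant correction
# bias near that value recovers most of the accuracy while keeping
# the truncated hardware.
e0 = avg_abs_error(4, 0)     # no correction
e12 = avg_abs_error(4, 12)   # bias matched to the mean dropped value
print(e0, e12)
```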
38

Xu, Chuan Yan, Kang Ding, Zhi Jian Yang and Hui Bin Lin. "Influence of Additive White Gaussian Noise on the Interpolation Method of Discrete Spectrum". Advanced Materials Research 383-390 (November 2011): 2951–57. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.2951.

Full text of the source
Annotation:
Without noise, the interpolation method of discrete spectrum is accurate up to rounding error. However, the estimation accuracy declines when the signal is corrupted with noise, and the estimate can even become meaningless. This paper investigates the influence of additive white Gaussian noise on the accuracy of amplitude and phase estimates based on the interpolation method. Analytical expressions for the estimator variance of the amplitude and phase corrections are derived. Simulation results confirm the validity of the presented analysis.
APA, Harvard, Vancouver, ISO and other citation styles
39

Rahayu, Santi, and Achmad Hindasyah. "Comparative Analysis of Numerical Integration Solutions Pias Method and Newton Cotes Method Using Python Programming Language". Mathline : Jurnal Matematika dan Pendidikan Matematika 8, No. 4 (15.11.2023): 1321–32. http://dx.doi.org/10.31943/mathline.v8i4.492.

Full text of the source
Annotation:
Irregular areas cannot be computed with ordinary calculus formulas, so numerical methods such as the quadrature (strip) and Newton-Cotes methods are needed. This research compares numerical integration solutions using the quadrature method (rectangular and trapezoidal) and the Newton-Cotes method (trapezoidal, Simpson 1/3, Simpson 3/8, and Weddle) in the Python programming language. In the first case study, computed manually, the smallest error of the numerical methods relative to the analytical result is achieved by the rectangular method, at 0.017. In the second case study, on tabular data, the authors' manual calculation uses only the Simpson 3/8 method, with an absolute error relative to the analytical area of 1,418.583 km². In the Python application, for the first case study the smallest error was achieved by the Simpson 1/3 and Simpson 3/8 methods with an error of 0, meaning these two methods match the analytical result exactly. In the second case study the smallest error was achieved by the Simpson 1/3 method, at 1,039.365 km². The difference between the manual calculations and the application results is due to decimal rounding and to the linspace(a,b,n+1) function in the numpy library.
APA, Harvard, Vancouver, ISO and other citation styles
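The rules compared in the paper can be sketched directly in Python (using the midpoint variant of the rectangle rule; the integrand and node count here are illustrative, not the paper's case studies):

```python
import math

def rectangle(f, a, b, n):
    """Composite midpoint (rectangle) rule with n strips."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n strips."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * f(a) + inner + 0.5 * f(b))

def simpson13(f, a, b, n):
    """Composite Simpson 1/3 rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = 2.0                          # integral of sin(x) over [0, pi]
for rule in (rectangle, trapezoid, simpson13):
    approx = rule(math.sin, 0.0, math.pi, 12)
    print(rule.__name__, abs(approx - exact))
```

With the same 12 strips, Simpson 1/3 is several orders of magnitude more accurate than the first-order rules, which mirrors the ranking reported in the paper.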
40

Seo, Jung-In, Young Eun Jeon and Suk-Bok Kang. "New Approach for a Weibull Distribution under the Progressive Type-II Censoring Scheme". Mathematics 8, No. 10 (05.10.2020): 1713. http://dx.doi.org/10.3390/math8101713.

Full text of the source
Annotation:
This paper proposes a new approach based on the regression framework employing a pivotal quantity to estimate unknown parameters of a Weibull distribution under the progressive Type-II censoring scheme, which provides a closed form solution for the shape parameter, unlike its maximum likelihood estimator counterpart. To resolve serious rounding errors for the exact mean and variance of the pivotal quantity, two different types of Taylor series expansion are applied, and the resulting performance is enhanced in terms of the mean square error and bias obtained through the Monte Carlo simulation. Finally, an actual application example, including a simple goodness-of-fit analysis of the actual test data based on the pivotal quantity, proves the feasibility and applicability of the proposed approach.
APA, Harvard, Vancouver, ISO and other citation styles
41

de Camargo, André Pierro. "Rounding error analysis of divided differences schemes: Newton's divided differences; Neville's algorithm; Richardson extrapolation; Romberg quadrature; etc." Numerical Algorithms 85, No. 2 (12.11.2019): 591–606. http://dx.doi.org/10.1007/s11075-019-00828-1.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Noeiaghdam, Samad, Aliona Dreglea, Jihuan He, Zakieh Avazzadeh, Muhammad Suleman, Mohammad Ali Fariborzi Araghi, Denis N. Sidorov and Nikolai Sidorov. "Error Estimation of the Homotopy Perturbation Method to Solve Second Kind Volterra Integral Equations with Piecewise Smooth Kernels: Application of the CADNA Library". Symmetry 12, No. 10 (20.10.2020): 1730. http://dx.doi.org/10.3390/sym12101730.

Full text of the source
Annotation:
This paper studies second-kind linear Volterra integral equations (IEs) with a discontinuous kernel obtained from load leveling and energy system problems. For solving this problem, we propose the homotopy perturbation method (HPM). We then discuss the convergence theorem and the error analysis of the formulation to validate the accuracy of the obtained solutions. In this study, the Contrôle et Estimation Stochastique des Arrondis de Calculs (CESTAC) method and the Control of Accuracy and Debugging for Numerical Applications (CADNA) library are used to control the rounding error estimation. We also take advantage of discrete stochastic arithmetic (DSA) to find the optimal iteration, optimal error and optimal approximation of the HPM. Comparative graphs between the exact and approximate solutions show the accuracy and efficiency of the method.
APA, Harvard, Vancouver, ISO and other citation styles
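The CESTAC idea, rerunning a computation under randomly perturbed rounding and judging accuracy from the spread of the results, can be caricatured in a few lines (a crude stdlib sketch of the principle, not the CADNA library or its arithmetic):

```python
import random

def rnd(x, u=2.0**-24):
    """Apply a random relative perturbation of roughly ulp size to x,
    a crude stand-in for the random rounding mode used by CESTAC."""
    return x * (1.0 + random.uniform(-u, u))

def cancellation_prone_sum(perturb):
    # Large terms cancel, so the surviving low-order digits are
    # dominated by the accumulated rounding noise.
    s = 0.0
    for t in (1e8, 3.14159, -1e8, 2.71828):
        s = perturb(s + t)
    return s

random.seed(2)
runs = [cancellation_prone_sum(rnd) for _ in range(3)]
mean = sum(runs) / len(runs)
spread = max(runs) - min(runs)
# DSA estimates how many leading digits the runs have in common; a
# spread that is large relative to the mean flags digits lost to
# rounding, and the iteration can be stopped at that optimal point.
print(runs, mean, spread)
```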
43

Liu, Zhao-Guang, Sheng-Yong Du, Yang Yang and Xiu-Hua Ji. "A fast algorithm for color space conversion and rounding error analysis based on fixed-point digital signal processors". Computers & Electrical Engineering 40, No. 4 (May 2014): 1405–14. http://dx.doi.org/10.1016/j.compeleceng.2013.01.011.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
44

Protsko, I. O., and D. V. Ostrovka. "ANALYSIS OF THE ERROR OF COMPUTATION FAST TRANSFORMS OF FOURIER CLASS BASED ON CYCLIC CONVOLUTIONS". Ukrainian Journal of Information Technology 2, No. 1 (2020): 52–56. http://dx.doi.org/10.23939/ujit2020.02.052.

Full text of the source
Annotation:
The features of the computational model of discrete transforms of Fourier class based on cyclic convolutions are analyzed in order to determine the algorithmic calculation error. Based on the approach of efficient computation of discrete transforms of Fourier class of arbitrary size N, using a hashing array to transform a discrete basis matrix into a set of block-cyclic submatrices, the components of the computational cost are considered. These components depend on the type of transform, its size, and the block-cyclic structure of the transformation core. Examples of the computational model and the block-cyclic structure of matrices of simplified arguments of basis functions are given for the mutually inverse discrete cosine transforms of types II and III. The computational model characterizes the accumulation of rounding errors at the stages of adding input data, computing cyclic convolutions, and combining the results of the convolutions. Discrete cyclic convolutions can be implemented using fast algorithms or a type of system that corresponds to digital filters with finite impulse response. The possibility of computing the reduced number of cyclic convolutions in parallel makes the error analysis insensitive to the rearrangement of their computation. The multiplication operations performed when computing the cyclic convolution use a smaller number of basis coefficients, equal to N/4 or N/2 depending on the size of the transform. The formats of representation of real numbers in computer systems are considered, which also determine the magnitude of the computational error of the transforms. The results of direct and fast computation of a discrete cosine transform of type II based on cyclic convolutions of size N=58 in double-precision floating-point format, and the computation error between them, are presented. The a priori study of the transform errors of the corresponding type and size by mathematical modeling and computational experiment is approximate, which makes it possible to predict the statistical averages of the accuracy of computing a discrete Fourier transform of arbitrary size based on cyclic convolutions.
APA, Harvard, Vancouver, ISO and other citation styles
45

Zhou, H., Y. Zhang, J. Wen and S. Cui. "Mould cooling simulation for injection moulding using a fast boundary element method approach". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 224, No. 4 (26.10.2009): 653–62. http://dx.doi.org/10.1243/09544054jem1407.

Full text of the source
Annotation:
Existing cooling simulations for injection moulding are mostly based on the boundary element method (BEM). In this paper, a fast BEM approach for mould cooling analysis is developed. The actual problem is decoupled into a one-dimensional transient heat conduction problem within the thin part and a cycle-averaged steady-state three-dimensional heat conduction problem of the mould. The BEM is formulated for the solution of the mould heat transfer problem. A dynamic allocation strategy for integral points is proposed for use with the Gaussian integral formula when generating the BEM matrix. Since the full and unsymmetric influence matrix of the BEM may require large storage space and long solution times, this matrix is transformed into a sparse matrix by two methods: the direct rounding method or the combination method. This approximate sparsification approach can reduce the storage memory and solution time significantly. For validation, six typical cases with different element numbers are presented. The results show that the error of the direct rounding method is too large, while that of the combination method is acceptable.
APA, Harvard, Vancouver, ISO and other citation styles
46

Lam, Michael O., and Jeffrey K. Hollingsworth. "Fine-grained floating-point precision analysis". International Journal of High Performance Computing Applications 32, No. 2 (15.06.2016): 231–45. http://dx.doi.org/10.1177/1094342016652462.

Full text of the source
Annotation:
Floating-point computation is ubiquitous in high-performance scientific computing, but rounding error can compromise the results of extended calculations, especially at large scales. In this paper, we present new techniques that use binary instrumentation and modification to do fine-grained floating-point precision analysis, simulating any level of precision less than or equal to the precision of the original program. These techniques have an average of 40–70% lower overhead and provide more fine-grained insights into a program’s sensitivity than previous mixed-precision analyses. We also present a novel histogram-based visualization of a program’s floating-point precision sensitivity, as well as an incremental search technique that allows developers to incrementally trade off analysis time for detail, including the ability to restart analyses from where they left off. We present results from several case studies and experiments that show the efficacy of these techniques. Using our tool and its novel visualization, application developers can more quickly determine for specific data sets whether their application could be run using fewer double precision variables, saving both time and memory space.
APA, Harvard, Vancouver, ISO and other citation styles
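The simulation of reduced precision can be sketched at the bit level: zeroing trailing mantissa bits of a double mimics computing with a shorter significand (a simplified, truncation-only stand-in for the paper's binary instrumentation, which works on whole programs):

```python
import struct

def truncate_precision(x, bits):
    """Keep only `bits` of the 52 mantissa bits of an IEEE 754 double,
    zeroing the rest -- a crude simulation of a lower-precision float."""
    (n,) = struct.unpack("<Q", struct.pack("<d", x))
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack("<d", struct.pack("<Q", n & mask))
    return y

x = 1.0 / 3.0
for bits in (8, 23, 52):    # ~half precision, single, full double
    approx = truncate_precision(x, bits)
    print(bits, abs(approx - x))
```

Applying such a transformation selectively, per instruction or per variable, is what lets a tool report which parts of a program are sensitive to precision loss and which could safely run in single precision.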
47

Gao, Zhenyi, Bin Zhou, Chao Li, Bo Hou, Haobo Sun, Qi Wei and Rong Zhang. "Design and Implementation of a System-on-Chip for Self-Calibration of an Angular Position Sensor". Applied Sciences 9, No. 22 (08.11.2019): 4772. http://dx.doi.org/10.3390/app9224772.

Full text of the source
Annotation:
In this study, a novel signal processing algorithm and hardware processing circuit for the self-calibration of angular position sensors are proposed. To calibrate error components commonly found in angular position sensors, a parameter identification algorithm based on least-mean-square-error demodulation is developed. A processor to run programs and a coprocessor based on the above algorithm are designed to form a System-on-Chip, which can calibrate signals as well as implement parameter configuration and control algorithm applications. To verify the theoretical validity of the design, analysis and simulation of the scheme are carried out, and the maximum absolute error value in the algorithm simulation is reduced to 0.003%. The circuit's Register-Transfer Level simulation shows that the maximum absolute value of the angular error is reduced to 0.03%. Simulation results verify the calibration performance with and without quantization and rounding error, respectively. The entire system is prototyped on a Field Programmable Gate Array and tested on a capacitive angular position sensor. The proposed scheme can reduce the absolute value of the angular error to 4.36%, compared to 7.68% in the experimental results of a different calibration scheme.
APA, Harvard, Vancouver, ISO and other citation styles
48

Dhawalikar, Mahesh Narayan, V. Mariappan, P. K. Srividhya and Vishal Kurtikar. "Multi-state failure phenomenon and analysis using semi-Markov model". International Journal of Quality & Reliability Management 35, No. 9 (01.10.2018): 2080–91. http://dx.doi.org/10.1108/ijqrm-01-2016-0001.

Full text of the source
Annotation:
Purpose: Degraded failures and sudden critical failures are quite prevalent in industries. Degradation processes commonly belong to the Weibull family, and critical failures are found to follow an exponential distribution. It therefore becomes important to carry out reliability and availability analysis of such systems. From the reported literature, models are available for situations where the degraded failures as well as the critical failures follow exponential distributions. The purpose of this paper is to present models suitable for reliability and availability analysis of systems where the degradation process follows a Weibull distribution and critical failures follow an exponential distribution.
Design/methodology/approach: The research uses semi-Markov modeling with the method of stages, which is suitable when the failure processes follow a Weibull distribution. The paper considers the various states of the system and uses a state transition diagram to present the transitions of the system among the good, degraded and failed states. The method of stages is used to convert the semi-Markov model to a Markov model. The number of stages calculated in the method of stages is usually not an integer value and needs to be rounded off, so the method suffers from rounding error. A unique approach is proposed to arrive at failure rates that reduce this error. Periodic inspection and repair of systems are commonly employed in industries to take care of system degradation. This paper presents models for reliability and availability analysis of such systems, including the case where degraded failures can be arrested by appropriate inspection and repair.
Findings: The proposed method for estimating the degraded failure rate can be used to reduce the error in the method of stages. The models and the methodology are suitable for reliability and availability analysis of systems involving degradation, which is very common in systems with moving parts. These models are well suited to accurately estimating system reliability and availability, which is very important in industry. They conveniently cover the cases of degraded systems for which the model proposed by Hokstad and Frovig is not suitable.
Research limitations/implications: The models developed consider systems where the repair phenomenon follows an exponential distribution and the failure mechanism follows a Weibull distribution with shape parameter greater than 1.
Practical implications: These models can be used for reliability and availability analysis of systems where the degradation process is non-exponential, and thus meet the industrial requirement of accurately estimating the reliability and availability of degradable systems.
Originality/value: A unique approach is presented for estimating the degraded failure rate in the method of stages, which reduces the rounding error. The models presented for reliability and availability analysis can deal with degradable systems where the degradation process follows a Weibull distribution, which is not possible with the model presented by Hokstad and Frovig.
APA, Harvard, Vancouver, ISO and other citation styles
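The rounding-off problem in the method of stages can be shown numerically: matching a Weibull's coefficient of variation with an Erlang chain of exponential stages rarely yields an integer stage count (a sketch of the standard moment-matching step, not the authors' proposed correction):

```python
import math

def weibull_cv2(shape):
    """Squared coefficient of variation of a Weibull distribution."""
    g1 = math.gamma(1.0 + 1.0 / shape)
    g2 = math.gamma(1.0 + 2.0 / shape)
    return g2 / (g1 * g1) - 1.0

# An Erlang-k chain (k exponential stages in series) has CV^2 = 1/k,
# so matching a Weibull's CV^2 gives k = 1/CV^2 -- generally not an
# integer, which is exactly the rounding problem the paper addresses.
shape = 2.0                       # Weibull shape > 1 (wear-out regime)
k_exact = 1.0 / weibull_cv2(shape)
k_rounded = round(k_exact)
print(k_exact, k_rounded)
```

For shape 2 the match calls for about 3.66 stages, so the model must use 3 or 4, and the gap between `k_exact` and `k_rounded` is the source of the error the paper's adjusted failure rates aim to reduce.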
49

Isupov, Konstantin, and Vladimir Knyazkov. "Multiple-precision matrix-vector multiplication on graphics processing units". Program Systems: Theory and Applications 11, No. 3 (20.08.2020): 61–84. http://dx.doi.org/10.25209/2079-3316-2020-11-3-61-84.

Full text of the source
Annotation:
We are considering a parallel implementation of matrix-vector multiplication (GEMV, Level 2 of the BLAS) for graphics processing units (GPUs) using multiple-precision arithmetic based on the residue number system. In our GEMV implementation, element-wise operations with multiple-precision vectors and matrices consist of several parts, each of which is calculated by a separate CUDA kernel. This feature eliminates branch divergence when performing sequential parts of multiple-precision operations and allows the full utilization of the GPU’s resources. An efficient data structure for storing arrays with multiple-precision entries provides a coalesced access pattern to the GPU global memory. We have performed a rounding error analysis and derived error bounds for the proposed GEMV implementation. Experimental results show the high efficiency of the proposed solution compared to existing high-precision packages deployed on GPU.
APA, Harvard, Vancouver, ISO and other citation styles
50

Huang, Zhui Liang, Shu Wen Lin and Dao Ying Jiang. "High-Precision Positioning and Speed Smooth Transition Control Algorithm for Terminal Point of Micro-Line Segment". Advanced Materials Research 305 (July 2011): 19–24. http://dx.doi.org/10.4028/www.scientific.net/amr.305.19.

Full text of the source
Annotation:
Addressing the data-sampling interpolation error that exists in continuous micro-line-segment interpolation algorithms with acceleration/deceleration control, this paper proposes a high-precision positioning and speed smooth transition control algorithm for the terminal point of a micro-line segment. Taking the linear acceleration/deceleration mode as an example, a motion control algorithm is derived that can eliminate the rounding error of the interpolation by finely adjusting the reserved parameters of the acceleration/deceleration process when the starting-point and terminal-point velocities of the micro-line segment are not zero. Simulation analysis and motion control experiments show that the algorithm greatly improves the control accuracy of the joint-point position of micro-line segments, and achieves a continuous smooth transition of the feed speed at the joint point of adjacent line segments as well as uniformity of the velocity within each line segment.
APA, Harvard, Vancouver, ISO and other citation styles
