
Journal articles on the topic "Stochastic rounding"



Consult the top 29 journal articles for research on the topic "Stochastic rounding".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Paxton, E. Adam, Matthew Chantry, Milan Klöwer, Leo Saffin, and Tim Palmer. "Climate Modeling in Low Precision: Effects of Both Deterministic and Stochastic Rounding". Journal of Climate 35, no. 4 (February 15, 2022): 1215–29. http://dx.doi.org/10.1175/jcli-d-21-0343.1.

Annotation:
Abstract: Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system, a shallow water approximation for flow over a ridge, and a coarse-resolution spectral global atmospheric model with simplified parameterizations (SPEEDY). Although double precision [52 significant bits (sbits)] is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the code with negligible rounding error, and with 10 sbits if minor errors are accepted, amounting to less than 0.1 mm (6 h)⁻¹ for average gridpoint precipitation, for example. Our test is based on the Wasserstein metric and this provides stringent nonparametric bounds on rounding error accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR) we find that SR can mitigate rounding error across a range of applications, and thus our results also provide some evidence that SR could be relevant to next-generation climate models. Further research is needed to test if our results can be generalized to higher resolutions and alternative numerical schemes. However, the results open a promising avenue toward the use of low-precision hardware for improved climate modeling.
Significance statement: Weather and climate models provide vital information for decision-making, and will become ever more important in the future with a changed climate and more extreme weather. A central limitation to improved models is computational resources, which is why some weather forecasters have recently shifted from conventional 64-bit to more efficient 32-bit computations, which can provide equally accurate forecasts. Climate models, however, still compute in 64 bits, and adapting to lower precision requires a detailed analysis of rounding errors. We develop methods to quantify rounding error in a climate model, and find similar precision acceptable across weather and climate models, with even 16 bits often sufficient for an accurate climate. This opens a promising avenue for computational efficiency gains in climate modeling.
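The RN-versus-SR comparison at a reduced number of significand bits can be sketched in a few lines. This is a toy illustration, not the paper's framework; the helper `round_to_sbits` and its interface are our own assumptions.

```python
import math
import random

def round_to_sbits(x, sbits, mode="nearest"):
    """Round x to `sbits` significand bits, deterministically or stochastically."""
    if x == 0.0:
        return 0.0
    # Spacing of representable numbers around x for a `sbits`-bit significand.
    scale = 2.0 ** (math.floor(math.log2(abs(x))) - sbits + 1)
    q = x / scale
    if mode == "nearest":
        return round(q) * scale  # round-to-nearest (RN)
    # Stochastic rounding (SR): round up with probability equal to the
    # fractional part, so the result is unbiased on average.
    lo = math.floor(q)
    return (lo + (1 if random.random() < q - lo else 0)) * scale
```

Averaged over many trials, SR reproduces the exact value (e.g. 0.1 even with only 3 sbits), while RN commits a fixed deterministic error; this unbiasedness is what mitigates accumulated rounding error in long integrations.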
2

Connolly, Michael P., Nicholas J. Higham, and Theo Mary. "Stochastic Rounding and Its Probabilistic Backward Error Analysis". SIAM Journal on Scientific Computing 43, no. 1 (January 2021): A566–A585. http://dx.doi.org/10.1137/20m1334796.

3

Gupta, Anupam, R. Ravi, and Amitabh Sinha. "LP Rounding Approximation Algorithms for Stochastic Network Design". Mathematics of Operations Research 32, no. 2 (May 2007): 345–64. http://dx.doi.org/10.1287/moor.1060.0237.

4

Arciniega, Armando, and Edward Allen. "Rounding Error in Numerical Solution of Stochastic Differential Equations". Stochastic Analysis and Applications 21, no. 2 (January 4, 2003): 281–300. http://dx.doi.org/10.1081/sap-120019286.

5

Arar, El-Mehdi El, Devan Sohier, Pablo de Oliveira Castro, and Eric Petit. "Stochastic Rounding Variance and Probabilistic Bounds: A New Approach". SIAM Journal on Scientific Computing 45, no. 5 (October 5, 2023): C255–C275. http://dx.doi.org/10.1137/22m1510819.

6

McCarl, Bruce A. "Generalized Stochastic Dominance: An Empirical Examination". Journal of Agricultural and Applied Economics 22, no. 2 (December 1990): 49–55. http://dx.doi.org/10.1017/s1074070800001796.

Annotation:
Abstract Use of generalized stochastic dominance (GSD) requires one to place lower and upper bounds on the risk aversion coefficient. This study showed that breakeven risk aversion coefficients found assuming the exponential utility function delineate the places where GSD preferences switch between prospects. However, between these break points, multiple, overlapping GSD intervals can be found. Consequently, when one does not have risk aversion coefficient information, discovery of breakeven coefficients instead of GSD use is recommended. The investigation also showed GSD results are insensitive to wealth and data scaling but are sensitive to rounding.
7

Hopkins, Michael, Mantas Mikaitis, Dave R. Lester, and Steve Furber. "Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (January 20, 2020): 20190052. http://dx.doi.org/10.1098/rsta.2019.0052.

Annotation:
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of ordinary differential equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither that is a widely understood mechanism for providing resolution below the least significant bit in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by partial differential equations. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
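Fixed-point arithmetic with stochastic rounding, as described above, can be sketched minimally. The format width and helper names here are our assumptions; the paper's SpiNNaker implementation differs in detail.

```python
import random

FRAC_BITS = 10  # fractional bits of a hypothetical fixed-point format

def to_fix(x):
    """Encode a real number as an integer with FRAC_BITS fractional bits."""
    return int(round(x * (1 << FRAC_BITS)))

def from_fix(a):
    """Decode a fixed-point integer back to a float."""
    return a / (1 << FRAC_BITS)

def mul_sr(a, b):
    """Fixed-point multiply: the product carries 2*FRAC_BITS fractional bits,
    and the surplus low bits are removed by stochastic rounding."""
    prod = a * b
    floor_part = prod >> FRAC_BITS            # floor of the exact product
    residue = prod & ((1 << FRAC_BITS) - 1)   # discarded low bits
    # Round up with probability residue / 2**FRAC_BITS: unbiased on average.
    if random.randrange(1 << FRAC_BITS) < residue:
        floor_part += 1
    return floor_part
```

Exact products pass through unchanged, and inexact ones are correct in expectation, which is what keeps the accumulated error of an explicit ODE step small compared with round-to-nearest.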
8

Ji, Sai, Dachuan Xu, Donglei Du, and Yijing Wang. "LP-rounding approximation algorithms for two-stage stochastic fault-tolerant facility location problem". Applied Mathematical Modelling 58 (June 2018): 76–85. http://dx.doi.org/10.1016/j.apm.2017.12.009.

9

Tovissodé, Chénangnon Frédéric, Sèwanou Hermann Honfo, Jonas Têlé Doumatè, and Romain Glèlè Kakaï. "On the Discretization of Continuous Probability Distributions Using a Probabilistic Rounding Mechanism". Mathematics 9, no. 5 (March 6, 2021): 555. http://dx.doi.org/10.3390/math9050555.

Annotation:
Most existing flexible count distributions allow only approximate inference when used in a regression context. This work proposes a new framework to provide an exact and flexible alternative for modeling and simulating count data with various types of dispersion (equi-, under-, and over-dispersion). The new method, referred to as “balanced discretization”, consists of discretizing continuous probability distributions while preserving expectations. It is easy to generate pseudo random variates from the resulting balanced discrete distribution since it has a simple stochastic representation (probabilistic rounding) in terms of the continuous distribution. For illustrative purposes, we develop the family of balanced discrete gamma distributions that can model equi-, under-, and over-dispersed count data. This family of count distributions is appropriate for building flexible count regression models because the expectation of the distribution has a simple expression in terms of the parameters of the distribution. Using the Jensen–Shannon divergence measure, we show that under the equidispersion restriction, the family of balanced discrete gamma distributions is similar to the Poisson distribution. Based on this, we conjecture that while covering all types of dispersions, a count regression model based on the balanced discrete gamma distribution will allow recovering a near Poisson distribution model fit when the data are Poisson distributed.
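The probabilistic rounding mechanism at the heart of balanced discretization can be sketched directly. This is a minimal illustration; the function names are ours, and the paper's framework covers general continuous distributions.

```python
import math
import random

def probabilistic_round(x, rng=random):
    """Round x up with probability equal to its fractional part, down
    otherwise, so that the expectation of the result equals x."""
    lo = math.floor(x)
    return lo + (1 if rng.random() < x - lo else 0)

def balanced_discrete_gamma(shape, scale, rng=random):
    """Draw a continuous gamma variate and round it probabilistically;
    the resulting integer draw keeps the gamma mean shape * scale."""
    return probabilistic_round(rng.gammavariate(shape, scale))
```

Because the rounding is unbiased, the discretized family preserves expectations exactly, which is what makes it usable as an exact alternative in count regression.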
10

Чубич, Владимир Михайлович, and Светлана Олеговна Кулабухова. "Square-root algorithms for robust modifications of the continuous-discrete cubature Kalman filter". Вычислительные технологии, no. 3 (July 15, 2020): 88–98. http://dx.doi.org/10.25743/ict.2020.25.3.010.

Annotation:
Two square-root modifications of the continuous-discrete cubature Kalman filter, robust both to machine rounding errors and to anomalous data, are proposed, based on the variational Bayesian and correntropy approaches. Testing the developed algorithms on a model problem with randomly located anomalous observations demonstrated their effectiveness with comparable filtering quality, and the algebraic equivalence of the presented square-root and standard versions is confirmed.
Rounding errors due to the finite length of the machine word can significantly affect the quality of estimation and filtering when solving the corresponding problems in various subject areas. In this regard, to improve the reliability of the obtained results, it is advisable to develop and then apply square-root modifications of the algorithms used. Purpose: developing square-root modifications of the continuous-discrete cubature Kalman filter on the basis of variational Bayesian and correntropy approaches. Methodology: matrix orthogonal QR decomposition. Findings: two robust (resistant to the possible presence of anomalous data and to machine rounding errors) modifications of the continuous-discrete cubature Kalman filter have been developed. The first (variational Bayesian) algorithm is obtained by extending the known discrete equations of the extrapolation stage to the continuous-discrete case. The second algorithm, based on the maximum correntropy criterion, is proposed in this paper for the first time. The developed square-root algorithms for nonlinear filtering are validated on the example of a stochastic dynamical system model with randomly located anomalous observations. The filtering quality, estimated by the value of the accumulated mean square error, was quite comparable for both modifications, with equivalent results obtained for the corresponding root-free analogues.
Value: the proposed square-root versions of robust modifications of the continuous-discrete cubature Kalman filter are algebraically equivalent to their standard analogues. Meanwhile, positive definiteness and symmetry of the covariance matrices of the state-vector estimates at the extrapolation and filtration stages are preserved. The developed algorithms will be used to develop software and mathematical support for parametric identification of stochastic nonlinear continuous-discrete systems in the presence of anomalous observations in the measurement data.
11

Noeiaghdam, Samad, Aliona Dreglea, Jihuan He, Zakieh Avazzadeh, Muhammad Suleman, Mohammad Ali Fariborzi Araghi, Denis N. Sidorov, and Nikolai Sidorov. "Error Estimation of the Homotopy Perturbation Method to Solve Second Kind Volterra Integral Equations with Piecewise Smooth Kernels: Application of the CADNA Library". Symmetry 12, no. 10 (October 20, 2020): 1730. http://dx.doi.org/10.3390/sym12101730.

Annotation:
This paper studies the second kind linear Volterra integral equations (IEs) with a discontinuous kernel obtained from the load leveling and energy system problems. For solving this problem, we propose the homotopy perturbation method (HPM). We then discuss the convergence theorem and the error analysis of the formulation to validate the accuracy of the obtained solutions. In this study, the Controle et Estimation Stochastique des Arrondis de Calculs method (CESTAC) and the Control of Accuracy and Debugging for Numerical Applications (CADNA) library are used to control the rounding error estimation. We also take advantage of the discrete stochastic arithmetic (DSA) to find the optimal iteration, optimal error and optimal approximation of the HPM. The comparative graphs between exact and approximate solutions show the accuracy and efficiency of the method.
12

Korzilius, Stan, and Berry Schoenmakers. "Divisions and Square Roots with Tight Error Analysis from Newton–Raphson Iteration in Secure Fixed-Point Arithmetic". Cryptography 7, no. 3 (September 12, 2023): 43. http://dx.doi.org/10.3390/cryptography7030043.

Annotation:
In this paper, we present new variants of Newton–Raphson-based protocols for the secure computation of the reciprocal and the (reciprocal) square root. The protocols rely on secure fixed-point arithmetic with arbitrary precision parameterized by the total bit length of the fixed-point numbers and the bit length of the fractional part. We perform a rigorous error analysis aiming for tight accuracy claims while minimizing the overall cost of the protocols. Due to the nature of secure fixed-point arithmetic, we perform the analysis in terms of absolute errors. Whenever possible, we allow for stochastic (or probabilistic) rounding as an efficient alternative to deterministic rounding. We also present a new protocol for secure integer division based on our protocol for secure fixed-point reciprocals. The resulting protocol is parameterized by the bit length of the inputs and yields exact results for the integral quotient and remainder. The protocol is very efficient, minimizing the number of secure comparisons. Similarly, we present a new protocol for integer square roots based on our protocol for secure fixed-point square roots. The quadratic convergence of the Newton–Raphson method implies a logarithmic number of iterations as a function of the required precision (independent of the input value). The standard error analysis of the Newton–Raphson method focuses on the termination condition for attaining the required precision, assuming sufficiently precise floating-point arithmetic. We perform an intricate error analysis assuming fixed-point arithmetic of minimal precision throughout and minimizing the number of iterations in the worst case.
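The underlying Newton–Raphson reciprocal iteration with stochastically rounded fixed-point arithmetic can be sketched in the clear, outside any secure-computation protocol. The format width, helper names, and initial guess below are our assumptions, not the paper's protocol.

```python
import random

FRAC = 24  # fractional bits of the fixed-point numbers (an assumed format)

def sr_shift(prod, bits, rng=random):
    """Drop `bits` low-order bits of an integer, rounding stochastically
    (round up with probability proportional to the discarded bits)."""
    lo = prod >> bits
    if rng.randrange(1 << bits) < (prod & ((1 << bits) - 1)):
        lo += 1
    return lo

def reciprocal_fix(a_fix, iters=5, rng=random):
    """Fixed-point reciprocal of a in [0.5, 1) via Newton-Raphson:
    y <- y * (2 - a*y), which converges quadratically to 1/a."""
    two = 2 << FRAC
    y = (3 << FRAC) - 2 * a_fix  # linear initial guess, exact at a = 0.5 and 1
    for _ in range(iters):
        t = sr_shift(a_fix * y, FRAC, rng)      # a*y, stochastically rounded
        y = sr_shift(y * (two - t), FRAC, rng)  # y*(2 - a*y)
    return y
```

Quadratic convergence means the iteration count grows only logarithmically with the required precision, as the abstract notes, and stochastic rounding keeps each truncation unbiased.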
13

Carling, Paul A. "Coevolving edge rounding and shape of glacial erratics: the case of Shap granite, UK". Earth Surface Dynamics 12, no. 1 (February 26, 2024): 381–97. http://dx.doi.org/10.5194/esurf-12-381-2024.

Annotation:
Abstract. The size distributions and the shapes of detrital rock clasts can shed light on the environmental history of the clast assemblages and the processes responsible for clast comminution. For example, mechanical fracture due to the stresses imposed on a basal rock surface by a body of flowing glacial ice releases initial “parent” shapes of large blocks of rock from an outcrop, which then are modified by the mechanics of abrasion and fracture during subglacial transport. The latter processes produce subsequent generations of shapes, possibly distinct in form from the parent blocks. A complete understanding of both the processes responsible for block shape changes and the trends in shape adjustment with time and distance away from the source outcrop is lacking. Field data on edge rounding and shape changes of Shap granite blocks (dispersed by Devensian ice eastwards from the outcrop) are used herein to explore the systematic changes in block form with distance from the outcrop. The degree of edge rounding for individual blocks increases in a punctuated fashion with the distance from the outcrop as blocks fracture repeatedly to introduce new fresh unrounded edges. In contrast, block shape is conservative, with parent blocks fracturing to produce self-similar “child” shapes with distance. Measured block shapes evolve in accord with two well-known models for block fracture mechanics – (1) stochastic and (2) silver ratio models – towards one or the other of these two attractor states. Progressive reduction in block size, in accord with fracture mechanics, reflects the fact that most blocks were transported at the sole of the ice mass and were subject to the compressive and tensile forces of the ice acting on the stoss surfaces of blocks lying against a bedrock or till surface. The interpretations might apply to a range of homogeneous hard rock lithologies.
14

Borukaiev, Z. Kh, V. A. Evdokimov, and K. B. Ostapchenko. "Construction of the Multi-Agent Environment Architecture of the Pricing Process Simulation Model in the Electricity Market". Èlektronnoe modelirovanie 45, no. 6 (December 10, 2023): 15–30. http://dx.doi.org/10.15407/emodel.45.06.015.

Annotation:
The question of building the architecture of the multi-agent environment of the simulation model of the pricing process, as a space of heterogeneous interconnected organizational, informational, technological and economic interactions of the simulated agents of the pricing process, is considered. Using the example of a complex organizational and technical system (COTS) of the electricity micro-market in local electric power systems, the set of agents surrounding them and ensuring the vital activity of the COTS of pricing is formalized, consisting of classified internal agents and environmental agents with a definition of their functional purpose. It was established that a set of partially observable influence factors of subjects of the electricity micro-market external environment are additionally formalized in the multi-agent pricing system as communication agents with stochastic, dynamic behaviour, but with discrete fixation of distinct states of observation processes in this environment. As a result, the simulation model of the pricing process is presented as a heterogeneous distributed multi-agent system.
15

Qi, Cheng, Junwei Xie, Haowei Zhang, Zihang Ding, and Xiao Yang. "Optimal Configuration of Array Elements for Hybrid Distributed PA-MIMO Radar System Based on Target Detection". Remote Sensing 14, no. 17 (August 23, 2022): 4129. http://dx.doi.org/10.3390/rs14174129.

Annotation:
This paper establishes a hybrid distributed phased array multiple-input multiple-output (PA-MIMO) radar system model to improve target detection performance by combining coherent processing gain and spatial diversity gain. First, the radar system signal model and array space configuration model for the PA-MIMO radar are established. Then, a novel likelihood ratio test (LRT) detector is derived based on the Neyman–Pearson (NP) criterion in a fixed noise background. It can jointly optimize the coherent processing gain and spatial diversity gain of the system by implementing subarray-level and array-element-level optimal configuration at both the receiver and transmitter ends in a uniform blocking manner. On this basis, three typical optimization problems are discussed from three aspects, i.e., the detection probability, the effective radar range, and the radar system equipment volume. Approximate closed-form solutions are constructed and solved by the proposed quantum particle swarm optimization-based stochastic rounding (SR-QPSO) algorithm. Through simulations, it is verified that the proposed optimal configuration of the hybrid distributed PA-MIMO radar system offers substantial improvements over other typical radar systems, achieving a detection probability of 0.98 and an effective range of 1166.3 km, significantly improving detection performance.
16

Heavey, Jack, Jiaming Cui, Chen Chen, B. Aditya Prakash, and Anil Vullikanti. "Provable Sensor Sets for Epidemic Detection over Networks with Minimum Delay". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10202–9. http://dx.doi.org/10.1609/aaai.v36i9.21260.

Annotation:
The efficient detection of outbreaks and other cascading phenomena is a fundamental problem in a number of domains, including disease spread, social networks, and infrastructure networks. In such settings, monitoring and testing a small group of pre-selected nodes from the susceptible population (i.e., a sensor set) is often the preferred testing regime. We study the problem of selecting a sensor set that minimizes the delay in detection; we refer to this as the MinDelSS problem. Prior methods for minimizing the detection time rely on greedy algorithms using submodularity. We show that this approach can sometimes lead to a worse approximation for minimizing the detection time than desired. We also show that MinDelSS is hard to approximate within an O(n^(1-1/g))-factor for any constant g greater than or equal to 2 for a graph with n nodes. This instead motivates seeking bicriteria approximations. We present the algorithm RoundSensor, which gives a rigorous worst-case O(log(n))-factor for the detection time, while violating the budget by a factor of O(log^2(n)). Our algorithm is based on the sample average approximation technique from stochastic optimization, combined with linear programming and rounding. We evaluate our algorithm on several networks, including hospital contact networks, which validates its effectiveness in real settings.
17

Cowan, Wesley, and Michael N. Katehakis. "Multi-Armed Bandits under General Depreciation and Commitment". Probability in the Engineering and Informational Sciences 29, no. 1 (October 10, 2014): 51–76. http://dx.doi.org/10.1017/s0269964814000217.

Annotation:
Generally, the multi-armed bandit problem has been studied under the setting that at each time step over an infinite horizon a controller chooses to activate a single process, or bandit, out of a finite collection of independent processes (statistical experiments, populations, etc.) for a single period, receiving a reward that is a function of the activated process and in doing so advancing the chosen process. Classically, rewards are discounted by a constant factor β∈(0, 1) per round. In this paper, we present a solution to the problem, with potentially non-Markovian, uncountable-state-space reward processes, under a framework in which, first, the discount factors may be non-uniform and vary over time, and second, the periods of activation of each bandit may not be fixed or uniform, subject instead to a possibly stochastic duration of activation before a change to a different bandit is allowed. The solution is based on generalized restart-in-state indices, and it utilizes a view of the problem not as "decisions over state space" but rather "decisions over time".
18

Croci, Matteo, Massimiliano Fasi, Nicholas J. Higham, Theo Mary, and Mantas Mikaitis. "Stochastic rounding: implementation, error analysis and applications". Royal Society Open Science 9, no. 3 (March 2022). http://dx.doi.org/10.1098/rsos.211631.

Annotation:
Stochastic rounding (SR) randomly maps a real number x to one of the two nearest values in a finite-precision number system. The probability of choosing either of these two numbers is 1 minus their relative distance to x. This rounding mode was first proposed for use in computer arithmetic in the 1950s and it is currently experiencing a resurgence of interest. If used to compute the inner product of two vectors of length n in floating-point arithmetic, it yields an error bound with constant √n u with high probability, where u is the unit round-off. This is not necessarily the case for round to nearest (RN), for which the worst-case error bound has constant nu. A particular attraction of SR is that, unlike RN, it is immune to the phenomenon of stagnation, whereby a sequence of tiny updates to a relatively large quantity is lost. We survey SR by discussing its mathematical properties and probabilistic error analysis, its implementation, and its use in applications, with a focus on machine learning and the numerical solution of differential equations.
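The definition and the stagnation effect described in this abstract are easy to reproduce. This is a toy sketch on an artificial grid of spacing 2⁻¹⁰, not an IEEE format; the helper `sr` is our own.

```python
import random

def sr(x, ulp, rng=random):
    """Stochastically round x to a grid of spacing `ulp`: each of the two
    neighbours is chosen with probability 1 minus its relative distance to x."""
    lo = ulp * (x // ulp)
    return lo + (ulp if rng.random() < (x - lo) / ulp else 0.0)

# Stagnation demo: add 100,000 updates of 1e-4 to 1.0 on a grid of spacing
# 2**-10 (~1e-3).  Each update is smaller than half a grid step, so
# round-to-nearest (RN) discards every one of them; SR keeps them on average.
ULP = 2.0 ** -10
rn = sr_sum = 1.0
random.seed(42)
for _ in range(100_000):
    rn = ULP * round((rn + 1e-4) / ULP)  # RN: every tiny update is lost
    sr_sum = sr(sr_sum + 1e-4, ULP)      # SR: unbiased, tracks the exact sum
```

After the loop, rn is still exactly 1.0 while sr_sum lies close to the exact answer 11.0, matching the survey's point that SR is immune to stagnation.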
19

Rukundo, Olivier, and Samuel Emil Schmidt. "Stochastic Rounding for Image Interpolation and Scan Conversion". International Journal of Advanced Computer Science and Applications 13, no. 3 (2022). http://dx.doi.org/10.14569/ijacsa.2022.0130303.

20

Shah, Tapan. "Competence region estimation for black-box surrogate models". International FLAIRS Conference Proceedings 34, no. 1 (April 18, 2021). http://dx.doi.org/10.32473/flairs.v34i1.128571.

Annotation:
With advances in edge applications for industry and healthcare, machine learning models are increasingly trained on the edge. However, storage and memory infrastructure at the edge are often primitive, due to cost and real-estate constraints. A simple, effective method is to learn machine learning models from quantized data stored with low arithmetic precision (1-8 bits). In this work, we introduce two stochastic quantization methods, dithering and stochastic rounding. In dithering, additive noise from a uniform distribution is added to the sample before quantization. In stochastic rounding, each sample is quantized to the upper level with probability p and to a lower level with probability 1-p. The key contributions of the paper are: For 3 standard machine learning models, Support Vector Machines, Decision Trees and Linear (Logistic) Regression, we compare the performance loss for a standard static quantization and stochastic quantization for 55 classification and 30 regression datasets with 1-8 bits quantization. We showcase that for 4- and 8-bit quantization over regression datasets, stochastic quantization demonstrates statistically significant improvement. We investigate the performance loss as a function of dataset attributes, viz. number of features, standard deviation, skewness. This helps create a transfer function which will recommend the best quantizer for a given dataset. We propose 2 future research areas, a) dynamic quantizer update, where the model is trained using streaming data and the quantizer is updated after each batch, and b) precision re-allocation under budget constraints, where different precision is used for different features.
21

Choi, Ji In, Madeleine Georges, Jung Ah Shin, Olivia Wang, Tiffany Zhu, and Tapan Shah. "Learning from low precision samples". International FLAIRS Conference Proceedings 34, no. 1 (April 18, 2021). http://dx.doi.org/10.32473/flairs.v34i1.128568.

Annotation:
With advances in edge applications in industry and healthcare, machine learning models are increasingly trained on the edge. However, storage and memory infrastructure at the edge are often primitive, due to cost and real-estate constraints. A simple, effective method is to learn machine learning models from quantized data stored with low arithmetic precision (1-8 bits). In this work, we introduce two stochastic quantization methods, dithering and stochastic rounding. In dithering, additive noise from a uniform distribution is added to the sample before quantization. In stochastic rounding, each sample is quantized to the upper level with probability p and to a lower level with probability 1-p. The key contributions of the paper are as follows: For 3 standard machine learning models, Support Vector Machines, Decision Trees and Linear (Logistic) Regression, we compare the performance loss for a standard static quantization and stochastic quantization for 55 classification and 30 regression datasets with 1-8 bits quantization. We showcase that for 4- and 8-bit quantization over regression datasets, stochastic quantization demonstrates statistically significant improvement. We investigate the performance loss as a function of dataset attributes, viz. number of features, standard deviation, skewness. This helps create a transfer function which will recommend the best quantizer for a given dataset. We propose 2 future research areas: dynamic quantizer update, where the model is trained using streaming data and the quantizer is updated after each batch, and precision re-allocation under budget constraints, where different precision is used for different features.
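The two stochastic quantizers described, stochastic rounding and dithering, can be sketched for samples in [0, 1]. This is a minimal illustration; the b-bit uniform grid and the function names are our assumptions.

```python
import random

def quantize_sr(x, bits, rng=random):
    """Stochastic rounding of x in [0, 1] to a uniform grid of 2**bits levels:
    quantize to the upper level with probability p, lower with 1 - p."""
    levels = (1 << bits) - 1
    q = x * levels
    lo = int(q)
    if rng.random() < q - lo:
        lo += 1
    return lo / levels

def quantize_dither(x, bits, rng=random):
    """Dithering: add uniform noise of one grid step centred on zero,
    then round to the nearest level (clamped to the grid)."""
    levels = (1 << bits) - 1
    q = x * levels + (rng.random() - 0.5)
    return min(levels, max(0, round(q))) / levels
```

Both quantizers are unbiased in the interior of the grid, which is the property behind their smaller performance loss at 4- and 8-bit precision compared with static quantization.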
22

Kimpson, Tom, E. Adam Paxton, Matthew Chantry, and Tim Palmer. "Climate change modelling at reduced floating-point precision with stochastic rounding". Quarterly Journal of the Royal Meteorological Society, February 17, 2023. http://dx.doi.org/10.1002/qj.4435.

23

Akanuma, Takashi, Cong Chen, Tetsuo Sato, Roeland M. H. Merks, and Thomas N. Sato. "Memory of cell shape biases stochastic fate decision-making despite mitotic rounding". Nature Communications 7, no. 1 (June 28, 2016). http://dx.doi.org/10.1038/ncomms11963.

24

Gupta, Anupam, Amit Kumar, Viswanath Nagarajan, and Xiangkun Shen. "Stochastic Load Balancing on Unrelated Machines". Mathematics of Operations Research, August 24, 2020. http://dx.doi.org/10.1287/moor.2019.1049.

Annotation:
We consider the problem of makespan minimization on unrelated machines when job sizes are stochastic. The goal is to find a fixed assignment of jobs to machines, to minimize the expected value of the maximum load over all the machines. For the identical-machines special case when the size of a job is the same across all machines, a constant-factor approximation algorithm has long been known. Our main result is the first constant-factor approximation algorithm for the general case of unrelated machines. This is achieved by (i) formulating a lower bound using an exponential-size linear program that is efficiently computable and (ii) rounding this linear program while satisfying only a specific subset of the constraints that still suffice to bound the expected makespan. We also consider two generalizations. The first is the budgeted makespan minimization problem, where the goal is to minimize the expected makespan subject to scheduling a target number (or reward) of jobs. We extend our main result to obtain a constant-factor approximation algorithm for this problem. The second problem involves q-norm objectives, where we want to minimize the expected q-norm of the machine loads. Here we give an [Formula: see text]-approximation algorithm, which is a constant-factor approximation for any fixed q.
APA, Harvard, Vancouver, ISO and other citation styles
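The rounding step this abstract describes builds on a standard primitive: given a fractional LP assignment of jobs to machines, sample an integral assignment whose marginals match the fractional values. The sketch below is not the paper's constant-factor algorithm (which rounds a specific exponential-size LP); it only illustrates the generic randomized-rounding idea, with a made-up fractional solution `frac`:

```python
import random

def round_assignment(frac, rng):
    """Randomized rounding of a fractional job-to-machine assignment:
    job j is placed on machine i with probability frac[j][i]
    (each row is assumed to sum to 1)."""
    assignment = []
    for probs in frac:
        u, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if u < acc:
                assignment.append(i)
                break
        else:  # guard against floating-point drift in the row sum
            assignment.append(len(probs) - 1)
    return assignment

rng = random.Random(0)
frac = [[0.5, 0.5], [1.0, 0.0], [0.25, 0.75]]  # hypothetical LP solution
assignment = round_assignment(frac, rng)
```

By construction, each job's machine choice matches the LP marginals in expectation; the analysis in the paper shows which constraints of its particular LP survive this sampling.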
25

Abdelhameed, Esam H., Samah Abdelraheem, Yehia Sayed Mohamed and Ahmed A. Zaki Diab. "Effective hybrid search technique based constraint mixed-integer programming for smart home residential load scheduling". Scientific Reports 13, No. 1 (December 10, 2023). http://dx.doi.org/10.1038/s41598-023-48717-x.

Full text of the source
Annotation:
In this paper, the problem of scheduling smart home (SH) residential loads is considered, aiming to minimize electricity bills and enhance user comfort. The problem is addressed as a multi-objective constraint mixed-integer optimization problem (CP-MIP) to model the constrained load operation. As the CP-MIP optimization problem is non-convex, a novel hybrid search technique is proposed that combines the Relaxation and Rounding (RnR) approach with metaheuristic algorithms to enhance the accuracy and relevance of the decision variables. The technique is implemented in two stages: a relaxation stage, in which a metaheuristic is applied to obtain the optimal rational solution of the problem, and a rounding stage, in which a stochastic rounding approach turns it into a good-enough feasible solution. Scheduling is performed under a time-of-use (ToU) dynamic electricity pricing scheme and two powering modes (powering from the main grid only, or from a grid-tied photovoltaic (PV) residential power system), and four metaheuristics are utilized: Binary Particle Swarm Optimization (BPSO), Self-Organizing Hierarchical PSO (SOH-PSO), the JAYA algorithm, and the Comprehensive Learning JAYA algorithm (CL-JAYA). The results reported in this study verify the effectiveness of the proposed technique. In the first powering mode, the electricity bill reduction reaches 19.4% and 20.0% when applying the modified metaheuristics SOH-PSO and CL-JAYA, respectively, and 56.1% and 54.7%, respectively, in the second powering scenario. The superiority of CL-JAYA is also observed with regard to user comfort.
APA, Harvard, Vancouver, ISO and other citation styles
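The relax-then-round scheme in this abstract reduces to a simple primitive: round each fractional variable up with probability equal to its fractional part, so the rounded value is unbiased. A minimal sketch (the smart-home load model and the metaheuristic stage are not reproduced; `relaxed` stands in for a hypothetical output of the relaxation stage):

```python
import math
import random

def stochastic_round(x, rng):
    """Round x down or up at random; the probability of rounding up
    equals the fractional part of x, so E[result] = x (unbiased)."""
    lower = math.floor(x)
    return lower + (1 if rng.random() < x - lower else 0)

rng = random.Random(42)
relaxed = [0.2, 0.7, 1.5, 3.0]  # hypothetical relaxed (rational) solution
rounded = [stochastic_round(x, rng) for x in relaxed]
```

Each rounded variable is one of the two neighbouring integers, so the result is always feasible with respect to integrality, and averaging over many draws recovers the relaxed value.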
26

Michaelis, Carlo, Andrew B. Lehr, Winfried Oed and Christian Tetzlaff. "Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian". Frontiers in Neuroinformatics 16 (November 9, 2022). http://dx.doi.org/10.3389/fninf.2022.1015624.

Full text of the source
Annotation:
Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware's fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented, with reasonable discrepancy due to stochastic rounding. This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim to help streamline conceptualization and deployment of new algorithms.
APA, Harvard, Vancouver, ISO and other citation styles
27

Zhao, Junyun, Siyuan Huang, Osama Yousuf, Yutong Gao, Brian D. Hoskins and Gina C. Adam. "Gradient Decomposition Methods for Training Neural Networks With Non-ideal Synaptic Devices". Frontiers in Neuroscience 15 (November 22, 2021). http://dx.doi.org/10.3389/fnins.2021.749811.

Full text of the source
Annotation:
While promising for high-capacity machine learning accelerators, memristor devices have non-idealities that prevent software-equivalent accuracies when used for online training. This work uses a combination of Mini-Batch Gradient Descent (MBGD) to average gradients, stochastic rounding to avoid vanishing weight updates, and decomposition methods to keep the memory overhead low during mini-batch training. Since the weight update has to be transferred to the memristor matrices efficiently, we also investigate the impact of reconstructing the gradient matrices both internally (rank-seq) and externally (rank-sum) to the memristor array. Our results show that streaming batch principal component analysis (streaming batch PCA) and non-negative matrix factorization (NMF) decomposition algorithms can achieve near-MBGD accuracy in a memristor-based multi-layer perceptron trained on the MNIST (Modified National Institute of Standards and Technology) database with only 3 to 10 ranks at significant memory savings. Moreover, NMF rank-seq outperforms streaming batch PCA rank-seq at low ranks, making it more suitable for hardware implementation in future memristor-based accelerators.
APA, Harvard, Vancouver, ISO and other citation styles
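The vanishing-update problem that stochastic rounding mitigates is easy to reproduce: a gradient step smaller than half the weight grid's step is always discarded by round-to-nearest, but survives in expectation under stochastic rounding. A toy sketch with a hypothetical quantization step `DELTA` (not a model of the actual memristor devices):

```python
import math
import random

DELTA = 0.1  # hypothetical quantization step of the weight storage

def round_nearest(x):
    """Deterministic round-to-nearest onto the DELTA grid."""
    return round(x / DELTA) * DELTA

def round_stochastic(x, rng):
    """Round to one of the two neighbouring grid points; the chance of
    rounding up equals the relative distance, so E[result] = x."""
    q = x / DELTA
    lower = math.floor(q)
    return (lower + (1 if rng.random() < q - lower else 0)) * DELTA

rng = random.Random(0)
w, grad_step = 0.5, 0.004  # update far below DELTA / 2
w_rn = round_nearest(w + grad_step)  # the small update always vanishes
w_sr = sum(round_stochastic(w + grad_step, rng) for _ in range(20000)) / 20000
```

Here `w_rn` stays exactly at 0.5 no matter how often the update is applied, while the average of the stochastically rounded weight approaches 0.504.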
28

Ghenaiet, Adel. "Study of Sand Particle Trajectories and Erosion Into the First Compression Stage of a Turbofan". Journal of Turbomachinery 134, No. 5 (May 24, 2012). http://dx.doi.org/10.1115/1.4004750.

Full text of the source
Annotation:
Aero-engines operating in dusty environments are subject to ingestion of erodent particles, leading to erosion damage of blades and a permanent drop in performance. This work concerns the study of particle dynamics and erosion of the front compression stage of a commercial turbofan. Particle trajectory simulations used a stochastic Lagrangian tracking code that solves the equations of motion separately from the airflow in a stepwise manner, while the tracking of particles across different cells is based on the finite element method. As the locations of impacts and rates of erosion were predicted, the subsequent geometry deteriorations were assessed. The particle numbers, sizes, and initial positions were specified conforming to the sand particle distribution (MIL-E5007E, 0–1000 micrometers) and concentrations of 50–700 mg/m³. The results show that the IGV blade is mainly eroded over the leading edge and near the hub and shroud; the rotor blade also shows noticeable erosion of the leading and trailing edges and a rounding of the blade tip corners, whereas in the diffuser, erosion spreads over the blade surfaces in addition to the leading and trailing edges.
APA, Harvard, Vancouver, ISO and other citation styles
29

Klöwer, Milan, Peter V. Coveney, E. Adam Paxton and Tim N. Palmer. "Periodic orbits in chaotic systems simulated at low precision". Scientific Reports 13, No. 1 (July 14, 2023). http://dx.doi.org/10.1038/s41598-023-37004-4.

Full text of the source
Annotation:
Non-periodic solutions are an essential property of chaotic dynamical systems. Simulations with deterministic finite-precision numbers, however, always yield orbits that are eventually periodic. With 64-bit double-precision floating-point numbers such periodic orbits are typically negligible due to very long periods. The emerging trend to accelerate simulations with low-precision numbers, such as 16-bit half-precision floats, raises questions on the fidelity of such simulations of chaotic systems. Here, we revisit the 1-variable logistic map and the generalised Bernoulli map with various number formats and precisions: floats, posits and logarithmic fixed-point. Simulations are improved with higher precision but stochastic rounding prevents periodic orbits even at low precision. For larger systems the performance gain from low-precision simulations is often reinvested in higher resolution or complexity, increasing the number of variables. In the Lorenz 1996 system, the period lengths of orbits increase exponentially with the number of variables. Moreover, invariant measures are better approximated with an increased number of variables than with increased precision. Extrapolating to large simulations of natural systems, such as million-variable climate models, periodic orbit lengths are far beyond reach of present-day computers. Such orbits are therefore not expected to be problematic compared to high-precision simulations but the deviation of both from the continuum solution remains unclear.
APA, Harvard, Vancouver, ISO and other citation styles
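The periodicity mechanism this abstract studies can be demonstrated on a toy grid: quantize the logistic map x ← r·x·(1 − x) to 8 fractional bits after every step. With deterministic round-to-nearest the trajectory lives on at most 257 states, so by the pigeonhole principle it must become exactly periodic; stochastic rounding breaks that determinism. A sketch with an assumed fixed-point grid (not the float, posit, or logarithmic formats of the paper):

```python
import math
import random

SCALE = 256  # toy fixed-point grid with 8 fractional bits on [0, 1]

def quantize_rn(x):
    """Deterministic round-to-nearest onto the grid."""
    return round(x * SCALE) / SCALE

def quantize_sr(x, rng):
    """Unbiased stochastic rounding onto the grid."""
    q = x * SCALE
    lower = math.floor(q)
    return (lower + (1 if rng.random() < q - lower else 0)) / SCALE

def orbit(quant, n=2000, r=3.9, x0=0.3):
    """Iterate the logistic map, quantizing after every step."""
    xs, x = [], x0
    for _ in range(n):
        x = quant(r * x * (1.0 - x))
        xs.append(x)
    return xs

rng = random.Random(1)
rn = orbit(quantize_rn)                    # deterministic rounding
sr = orbit(lambda x: quantize_sr(x, rng))  # stochastic rounding
```

Because `rn` evolves deterministically on at most `SCALE + 1` states, the first repeated state marks the start of an exact cycle; the stochastically rounded trajectory `sr` carries no such guarantee.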
