Journal articles on the topic "Stochastic second order methods"


Consult the top 50 journal articles for your research on the topic "Stochastic second order methods".


Browse journal articles on a wide range of disciplines and organize your bibliography correctly.

1

Burrage, Kevin, Ian Lenane, and Grant Lythe. "Numerical Methods for Second-Order Stochastic Differential Equations." SIAM Journal on Scientific Computing 29, no. 1 (January 2007): 245–64. http://dx.doi.org/10.1137/050646032.

2

Tocino, A., and J. Vigo-Aguiar. "Weak Second Order Conditions for Stochastic Runge–Kutta Methods." SIAM Journal on Scientific Computing 24, no. 2 (January 2002): 507–23. http://dx.doi.org/10.1137/s1064827501387814.

3

Komori, Yoshio. "Weak second-order stochastic Runge–Kutta methods for non-commutative stochastic differential equations." Journal of Computational and Applied Mathematics 206, no. 1 (September 2007): 158–73. http://dx.doi.org/10.1016/j.cam.2006.06.006.

4

Tang, Xiao, and Aiguo Xiao. "Efficient weak second-order stochastic Runge–Kutta methods for Itô stochastic differential equations." BIT Numerical Mathematics 57, no. 1 (April 26, 2016): 241–60. http://dx.doi.org/10.1007/s10543-016-0618-9.

5

Moxnes, John F., and Kjell Hausken. "Introducing Randomness into First-Order and Second-Order Deterministic Differential Equations." Advances in Mathematical Physics 2010 (2010): 1–42. http://dx.doi.org/10.1155/2010/509326.

Abstract:
We incorporate randomness into deterministic theories and compare analytically and numerically some well-known stochastic theories: the Liouville process, the Ornstein-Uhlenbeck process, and a process that is Gaussian and exponentially time correlated (Ornstein-Uhlenbeck noise). Different methods of achieving the marginal densities for correlated and uncorrelated noise are discussed. Analytical results are presented for a deterministic linear friction force and a stochastic force that is uncorrelated or exponentially correlated.
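The dynamics compared in this abstract (a deterministic linear friction force plus an uncorrelated stochastic force) are exactly the Ornstein-Uhlenbeck equation, which can be simulated with a basic Euler-Maruyama scheme. The sketch below is a generic illustration; the function name and parameter defaults are ours, not the paper's.

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, x0=1.0, dt=1e-3, n_steps=20000, seed=0):
    """Euler-Maruyama discretisation of the Ornstein-Uhlenbeck SDE
    dX_t = -theta * X_t dt + sigma dW_t  (linear friction + white noise).
    The stationary distribution is Gaussian with variance sigma^2 / (2 * theta)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    sqrt_dt = np.sqrt(dt)
    for k in range(n_steps):
        # drift pulls toward 0; the noise increment scales with sqrt(dt)
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * sqrt_dt * rng.standard_normal()
    return x
```

Averaging many such paths (or the tail of one long path) recovers the marginal densities discussed in the abstract.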
6

Rößler, Andreas. "Second Order Runge–Kutta Methods for Itô Stochastic Differential Equations." SIAM Journal on Numerical Analysis 47, no. 3 (January 2009): 1713–38. http://dx.doi.org/10.1137/060673308.

7

Rößler, Andreas. "Second order Runge–Kutta methods for Stratonovich stochastic differential equations." BIT Numerical Mathematics 47, no. 3 (May 12, 2007): 657–80. http://dx.doi.org/10.1007/s10543-007-0130-3.

8

Wang, Xiao, and Hongchao Zhang. "Inexact proximal stochastic second-order methods for nonconvex composite optimization." Optimization Methods and Software 35, no. 4 (January 15, 2020): 808–35. http://dx.doi.org/10.1080/10556788.2020.1713128.

9

Abdulle, Assyr, Gilles Vilmart, and Konstantinos C. Zygalakis. "Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations." SIAM Journal on Scientific Computing 35, no. 4 (January 2013): A1792–A1814. http://dx.doi.org/10.1137/12088954x.

10

Komori, Yoshio, and Kevin Burrage. "Weak second order S-ROCK methods for Stratonovich stochastic differential equations." Journal of Computational and Applied Mathematics 236, no. 11 (May 2012): 2895–908. http://dx.doi.org/10.1016/j.cam.2012.01.033.

11

Rathinasamy, Anandaraman, Davood Ahmadian, and Priya Nair. "Second-order balanced stochastic Runge–Kutta methods with multi-dimensional studies." Journal of Computational and Applied Mathematics 377 (October 2020): 112890. http://dx.doi.org/10.1016/j.cam.2020.112890.

12

Zhang, Jianling. "Multi-sample test based on bootstrap methods for second order stochastic dominance." Hacettepe Journal of Mathematics and Statistics 44, no. 13 (October 11, 2014): 1. http://dx.doi.org/10.15672/hjms.2014137464.

13

Komori, Yoshio, David Cohen, and Kevin Burrage. "Weak Second Order Explicit Exponential Runge–Kutta Methods for Stochastic Differential Equations." SIAM Journal on Scientific Computing 39, no. 6 (January 2017): A2857–A2878. http://dx.doi.org/10.1137/15m1041341.

14

Khodabin, M., K. Maleknejad, M. Rostami, and M. Nouri. "Numerical solution of stochastic differential equations by second order Runge–Kutta methods." Mathematical and Computer Modelling 53, no. 9-10 (May 2011): 1910–20. http://dx.doi.org/10.1016/j.mcm.2011.01.018.

15

Yang, Jie, Weidong Zhao, and Tao Zhou. "Explicit Deferred Correction Methods for Second-Order Forward Backward Stochastic Differential Equations." Journal of Scientific Computing 79, no. 3 (January 3, 2019): 1409–32. http://dx.doi.org/10.1007/s10915-018-00896-w.

16

Abukhaled, Marwan I., and Edward J. Allen. "Expectation Stability of Second-Order Weak Numerical Methods for Stochastic Differential Equations." Stochastic Analysis and Applications 20, no. 4 (August 28, 2002): 693–707. http://dx.doi.org/10.1081/sap-120006103.

17

Alzalg, Baha. "Decomposition-based interior point methods for stochastic quadratic second-order cone programming." Applied Mathematics and Computation 249 (December 2014): 1–18. http://dx.doi.org/10.1016/j.amc.2014.10.015.

18

Cohen, David, and Magdalena Sigg. "Convergence analysis of trigonometric methods for stiff second-order stochastic differential equations." Numerische Mathematik 121, no. 1 (November 13, 2011): 1–29. http://dx.doi.org/10.1007/s00211-011-0426-8.

19

Komori, Yoshio, and Kevin Burrage. "Supplement: Efficient weak second order stochastic Runge–Kutta methods for non-commutative Stratonovich stochastic differential equations." Journal of Computational and Applied Mathematics 235, no. 17 (July 2011): 5326–29. http://dx.doi.org/10.1016/j.cam.2011.04.021.

20

Sabelfeld, Karl K., Dmitry Smirnov, Ivan Dimov, and Venelin Todorov. "A global random walk on grid algorithm for second order elliptic equations." Monte Carlo Methods and Applications 27, no. 4 (October 27, 2021): 325–39. http://dx.doi.org/10.1515/mcma-2021-2097.

Abstract:
In this paper we develop stochastic simulation methods for solving large systems of linear equations, and focus on two issues: (1) construction of global random walk algorithms (GRW), in particular, for solving systems of elliptic equations on a grid, and (2) development of local stochastic algorithms based on transforms to a balanced transition matrix. The GRW method calculates the solution in any desired family of prescribed points of the grid, in contrast to the classical stochastic differential equation based Feynman–Kac formula. The use in local random walk methods of balanced transition matrices considerably decreases the variance of the random estimators and hence decreases the computational cost in comparison with the conventional random walk on grids algorithms.
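As a toy companion to the local random-walk methods discussed above (this is not the authors' GRW algorithm, and the naming is ours), the value of a discrete harmonic function at one grid node can be estimated by averaging boundary data collected by simple random walks:

```python
import random

def walk_on_grid_estimate(i, j, n, g, n_walks=2000, seed=1):
    """Monte Carlo estimate of the discrete harmonic function at node (i, j)
    of an (n+1) x (n+1) lattice on the unit square: each walk steps to one of
    the 4 neighbours with probability 1/4 until it hits the boundary, where
    the prescribed boundary value g is collected. The average over walks
    estimates the solution of the discrete Laplace equation at (i, j)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = i, j
        while 0 < x < n and 0 < y < n:
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x += dx
            y += dy
        total += g(x / n, y / n)
    return total / n_walks
```

With boundary data g(x, y) = x, the harmonic solution is u(x, y) = x, so the estimate at the centre of the square should be close to 0.5. This pointwise estimator is exactly the kind of local method whose variance the balanced transition matrices of the paper are designed to reduce.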
21

Xie, Chenghan, Chenxi Li, Chuwen Zhang, Qi Deng, Dongdong Ge, and Yinyu Ye. "Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 16049–57. http://dx.doi.org/10.1609/aaai.v38i14.29537.

Abstract:
In many important machine learning applications, the standard assumption of having a globally Lipschitz continuous gradient may fail to hold. This paper delves into a more general (L0, L1)-smoothness setting, which gains particular significance within the realms of deep neural networks and distributionally robust optimization (DRO). We demonstrate the significant advantage of trust region methods for stochastic nonconvex optimization under such a generalized smoothness assumption. We show that first-order trust region methods can recover the normalized and clipped stochastic gradient as special cases and then provide a unified analysis to show their convergence to first-order stationary conditions. Motivated by the important application of DRO, we propose a generalized high-order smoothness condition, under which second-order trust region methods can achieve a complexity of O(ε^(-3.5)) for convergence to second-order stationary points. By incorporating variance reduction, the second-order trust region method obtains an even better complexity of O(ε^(-3)), matching the optimal bound for standard smooth optimization. To the best of our knowledge, this is the first work to show convergence beyond the first-order stationary condition for generalized smooth optimization. Preliminary experiments show that our proposed algorithms perform favorably compared with existing methods.
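The clipped stochastic gradient update that the paper recovers as a special case of first-order trust-region methods can be sketched as follows (a generic illustration with our own names and defaults, not the authors' algorithm):

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.1, radius=1.0):
    """One first-order trust-region-style step: when the gradient is small,
    take a plain SGD step; when it is large, clip the step to the trust
    radius, which recovers the normalized/clipped gradient update."""
    g_norm = np.linalg.norm(grad)
    if g_norm <= radius:
        d = -grad                      # small gradient: unclipped step
    else:
        d = -radius * grad / g_norm    # large gradient: clip to the radius
    return w + lr * d
```

Clipping the step length is what makes the update well behaved under the (L0, L1)-smoothness condition, where the local Lipschitz constant grows with the gradient norm.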
22

Tang, Xiao, and Aiguo Xiao. "New explicit stabilized stochastic Runge–Kutta methods with weak second order for stiff Itô stochastic differential equations." Numerical Algorithms 82, no. 2 (October 25, 2018): 593–604. http://dx.doi.org/10.1007/s11075-018-0615-y.

23

Dentcheva, Darinka, and Andrzej Ruszczyński. "Inverse cutting plane methods for optimization problems with second-order stochastic dominance constraints." Optimization 59, no. 3 (April 2010): 323–38. http://dx.doi.org/10.1080/02331931003696350.

24

Tocino, A. "Mean-square stability of second-order Runge–Kutta methods for stochastic differential equations." Journal of Computational and Applied Mathematics 175, no. 2 (March 2005): 355–67. http://dx.doi.org/10.1016/j.cam.2004.05.019.

25

Abukhaled, Marwan I. "Mean square stability of second-order weak numerical methods for stochastic differential equations." Applied Numerical Mathematics 48, no. 2 (February 2004): 127–34. http://dx.doi.org/10.1016/j.apnum.2003.10.006.

26

Ghilli, Daria. "Viscosity methods for large deviations estimates of multiscale stochastic processes." ESAIM: Control, Optimisation and Calculus of Variations 24, no. 2 (January 26, 2018): 605–37. http://dx.doi.org/10.1051/cocv/2017051.

Abstract:
We study singular perturbation problems for second order HJB equations in an unbounded setting. The main applications are large deviations estimates for the short maturity asymptotics of stochastic systems affected by a stochastic volatility, where the volatility is modelled by a process evolving at a faster time scale and satisfying some condition implying ergodicity.
27

Yousefi, Hassan, Seyed Shahram Ghorashi, and Timon Rabczuk. "Directly Simulation of Second Order Hyperbolic Systems in Second Order Form via the Regularization Concept." Communications in Computational Physics 20, no. 1 (June 22, 2016): 86–135. http://dx.doi.org/10.4208/cicp.101214.011015a.

Abstract:
We present an efficient and robust method for stress wave propagation problems (second order hyperbolic systems) having discontinuities directly in their second order form. Due to the numerical dispersion around discontinuities and the lack of inherent dissipation in hyperbolic systems, proper simulation of such problems is challenging. The proposed idea is to denoise spurious oscillations by a post-processing stage applied to solutions obtained from higher-order grid-based methods (e.g., high-order collocation or finite-difference schemes). The denoising is done so that the solutions remain higher-order (here, second order) around discontinuities and are still free from spurious oscillations. For this purpose, an improved Tikhonov regularization approach is advised. This means letting the data themselves select proper denoised solutions (since there are no pre-assumptions about the regularized results). The improved approach can be applied directly to uniformly or non-uniformly sampled data in a way that the regularized results maintain continuous derivatives up to some desired order. It is shown how to improve the smoothing method so that it remains conservative and has a local estimating feature. To confirm the effectiveness of the proposed approach, some one- and two-dimensional examples are finally provided. It is shown how both the numerical (artificial) dispersion and dissipation can be controlled around discontinuous solutions and stochastic-like results.
28

Itkin, Andrey. "High Order Splitting Methods for Forward PDEs and PIDEs." International Journal of Theoretical and Applied Finance 18, no. 05 (July 28, 2015): 1550031. http://dx.doi.org/10.1142/s0219024915500314.

Abstract:
This paper is dedicated to the construction of high order (in both space and time) finite-difference schemes for both forward and backward PDEs and PIDEs, such that option prices obtained by solving both the forward and backward equations are consistent. This approach is partly inspired by Andreassen & Huge (2011) who reported a pair of consistent finite-difference schemes of first-order approximation in time for an uncorrelated local stochastic volatility (LSV) model. We extend their approach by constructing schemes that are second-order in both space and time and that apply to models with jumps and discrete dividends. Taking correlation into account in our approach is also not an issue.
29

Yousefi, Mahsa, and Ángeles Martínez. "Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods." Algorithms 16, no. 10 (October 20, 2023): 490. http://dx.doi.org/10.3390/a16100490.

Abstract:
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton methods that are able to construct Hessian approximations using only gradient information. In this work, we study the performance of stochastic quasi-Newton algorithms for training deep neural networks. We consider two well-known quasi-Newton updates, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the symmetric rank one (SR1). This study fills a gap concerning the real performance of both updates in the minibatch setting and analyzes whether more efficient training can be obtained when using the more robust BFGS update or the cheaper SR1 formula, which—allowing for indefinite Hessian approximations—can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study that includes many aspects affecting performance, like batch normalization, the network architecture, the limited memory parameter or the batch size. Our results show that stochastic quasi-Newton algorithms are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer, run with the optimal combination of its numerous hyperparameters, and the stochastic second-order trust-region STORM algorithm.
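The limited-memory BFGS update studied in this paper turns stored curvature pairs into a search direction via the standard two-loop recursion; a minimal deterministic sketch (our own illustration, not the authors' minibatch implementation):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian approximation,
    built from curvature pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k,
    to the current gradient, returning the descent direction -H_k g_k."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling H0 = (s^T y / y^T y) * I from the newest pair
        s, y = s_list[-1], y_list[-1]
        q *= s.dot(y) / y.dot(y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(q)
        q += (a - b) * s
    return -q
```

On a quadratic with a diagonal Hessian, two exact curvature pairs along the coordinate axes make the recursion reproduce the exact Newton direction. The SR1 update discussed in the abstract differs precisely in allowing the resulting approximation to be indefinite.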
30

Abukhaled, M. I., and E. J. Allen. "A class of second-order Runge–Kutta methods for numerical solution of stochastic differential equations." Stochastic Analysis and Applications 16, no. 6 (January 1998): 977–91. http://dx.doi.org/10.1080/07362999808809575.

31

Rudolf, Gábor, and Andrzej Ruszczyński. "Optimization Problems with Second Order Stochastic Dominance Constraints: Duality, Compact Formulations, and Cut Generation Methods." SIAM Journal on Optimization 19, no. 3 (January 2008): 1326–43. http://dx.doi.org/10.1137/070702473.

32

Ahn, T. H., and A. Sandu. "Implicit Second Order Weak Taylor Tau-Leaping Methods for the Stochastic Simulation of Chemical Kinetics." Procedia Computer Science 4 (2011): 2297–306. http://dx.doi.org/10.1016/j.procs.2011.04.250.

33

Meskarian, Rudabeh, Huifu Xu, and Jörg Fliege. "Numerical methods for stochastic programs with second order dominance constraints with applications to portfolio optimization." European Journal of Operational Research 216, no. 2 (January 2012): 376–85. http://dx.doi.org/10.1016/j.ejor.2011.07.044.

34

Lu, Lu, Yu Yuan, Heng Wang, Xing Zhao, and Jianjie Zheng. "A New Second-Order Tristable Stochastic Resonance Method for Fault Diagnosis." Symmetry 11, no. 8 (August 1, 2019): 965. http://dx.doi.org/10.3390/sym11080965.

Abstract:
Vibration signals are used to diagnose faults of rolling bearings, which have a symmetric structure. Stochastic resonance (SR) has been widely applied to weak signal feature extraction in recent years; it can exploit noise to enhance weak signals. However, the traditional SR method has poor performance, and it is difficult to determine the parameters of SR. Therefore, a new second-order tristable SR method (STSR), based on a new potential combining the classical bistable potential with the Woods-Saxon potential, is proposed in this paper. Firstly, the envelope signal of the rolling bearing is the input signal of the STSR. Then, the output signal-to-noise ratio (SNR) is used as the fitness function of the Seeker Optimization Algorithm (SOA) in order to optimize the parameters of SR. Finally, the optimal parameters are used to set the STSR system in order to enhance and extract weak signals of rolling bearings. Simulated and experimental signals are used to demonstrate the effectiveness of the STSR. The diagnosis results show that the proposed STSR method can obtain a higher output SNR and better filtering performance than traditional SR methods. It provides a new idea for the fault diagnosis of rotating machinery.
35

Pellegrino, Tommaso. "Second-Order Stochastic Volatility Asymptotics and the Pricing of Foreign Exchange Derivatives." International Journal of Theoretical and Applied Finance 23, no. 03 (May 2020): 2050021. http://dx.doi.org/10.1142/s0219024920500211.

Abstract:
We consider models for the pricing of foreign exchange derivatives, where the underlying asset volatility as well as the one for the foreign exchange rate are stochastic. Under this framework, singular perturbation methods have been used to derive first-order approximations for European option prices. In this paper, based on a previous result for the calibration and pricing of single underlying options, we derive the second-order approximation pricing formula in the two-dimensional case and we apply it to the pricing of foreign exchange options.
36

Rathinasamy, Anandaraman, and Priya Nair. "Asymptotic mean-square stability of weak second-order balanced stochastic Runge–Kutta methods for multi-dimensional Itô stochastic differential systems." Applied Mathematics and Computation 332 (September 2018): 276–303. http://dx.doi.org/10.1016/j.amc.2018.03.065.

37

Namachchivaya, N. S., and Gerard Leng. "Equivalence of Stochastic Averaging and Stochastic Normal Forms." Journal of Applied Mechanics 57, no. 4 (December 1, 1990): 1011–17. http://dx.doi.org/10.1115/1.2897619.

Abstract:
The equivalence of the methods of stochastic averaging and stochastic normal forms is demonstrated for systems under the effect of linear multiplicative and additive noise. It is shown that both methods lead to reduced systems with the same Markovian approximation. The key result is that the second-order stochastic terms have to be retained in the normal form computation. Examples showing applications to systems undergoing divergence and flutter instability are provided. Furthermore, it is shown that unlike stochastic averaging, stochastic normal forms can be used in the analysis of nilpotent systems to eliminate the stable modes. Finally, some results pertaining to stochastic Lorenz equations are presented.
38

Zhou, Jingcheng, Wei Wei, Ruizhi Zhang, and Zhiming Zheng. "Damped Newton Stochastic Gradient Descent Method for Neural Networks Training." Mathematics 9, no. 13 (June 29, 2021): 1533. http://dx.doi.org/10.3390/math9131533.

Abstract:
First-order methods such as stochastic gradient descent (SGD) have recently become popular optimization methods to train deep neural networks (DNNs) for good generalization; however, they need a long training time. Second-order methods which can lower the training time are scarcely used on account of their overpriced computing cost to obtain the second-order information. Thus, many works have approximated the Hessian matrix to cut the cost of computing while the approximate Hessian matrix has large deviation. In this paper, we explore the convexity of the Hessian matrix of partial parameters and propose the damped Newton stochastic gradient descent (DN-SGD) method and stochastic gradient descent damped Newton (SGD-DN) method to train DNNs for regression problems with mean square error (MSE) and classification problems with cross-entropy loss (CEL). In contrast to other second-order methods for estimating the Hessian matrix of all parameters, our methods only accurately compute a small part of the parameters, which greatly reduces the computational cost and makes the convergence of the learning process much faster and more accurate than SGD and Adagrad. Several numerical experiments on real datasets were performed to verify the effectiveness of our methods for regression and classification problems.
39

Huang, Xunpeng, Xianfeng Liang, Zhengyang Liu, Lei Li, Yue Yu, and Yitan Li. "SPAN: A Stochastic Projected Approximate Newton Method." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1520–27. http://dx.doi.org/10.1609/aaai.v34i02.5511.

Abstract:
Second-order optimization methods have desirable convergence properties. However, the exact Newton method requires expensive computation for the Hessian and its inverse. In this paper, we propose SPAN, a novel approximate and fast Newton method. SPAN computes the inverse of the Hessian matrix via low-rank approximation and stochastic Hessian-vector products. Our experiments on multiple benchmark datasets demonstrate that SPAN outperforms existing first-order and second-order optimization methods in terms of the convergence wall-clock time. Furthermore, we provide a theoretical analysis of the per-iteration complexity, the approximation error, and the convergence rate. Both the theoretical analysis and experimental results show that our proposed method achieves a better trade-off between the convergence rate and the per-iteration efficiency.
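The stochastic Hessian-vector products on which SPAN builds its low-rank approximation never require forming the Hessian; one standard way to approximate them is a central difference of gradients (a generic sketch with our own naming, not the paper's implementation):

```python
import numpy as np

def hessian_vector_product(grad_fn, x, v, eps=1e-5):
    """Approximate H(x) @ v using only two gradient evaluations (central
    difference), avoiding the O(d^2) cost of forming the Hessian itself."""
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2.0 * eps)
```

For a quadratic f(x) = 0.5 * x.T @ A @ x the gradient is A @ x, so the product A @ v is recovered to rounding accuracy; in deep learning the same product is usually obtained exactly by automatic differentiation instead.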
40

Leimkuhler, B., C. Matthews, and M. V. Tretyakov. "On the long-time integration of stochastic gradient systems." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 470, no. 2170 (October 8, 2014): 20140120. http://dx.doi.org/10.1098/rspa.2014.0120.

Abstract:
This article addresses the weak convergence of numerical methods for Brownian dynamics. Typical analyses of numerical methods for stochastic differential equations focus on properties such as the weak order which estimates the asymptotic (stepsize h → 0 ) convergence behaviour of the error of finite-time averages. Recently, it has been demonstrated, by study of Fokker–Planck operators, that a non-Markovian numerical method generates approximations in the long-time limit with higher accuracy order (second order) than would be expected from its weak convergence analysis (finite-time averages are first-order accurate). In this article, we describe the transition from the transient to the steady-state regime of this numerical method by estimating the time-dependency of the coefficients in an asymptotic expansion for the weak error, demonstrating that the convergence to second order is exponentially rapid in time. Moreover, we provide numerical tests of the theory, including comparisons of the efficiencies of the Euler–Maruyama method, the popular second-order Heun method, and the non-Markovian method.
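The second-order Heun method used in the paper's comparisons is a predictor-corrector scheme; for additive noise, one step can be sketched as follows (illustrative, with our own naming):

```python
def heun_step(x, f, sigma, dt, dW):
    """One step of the stochastic Heun method for additive noise
    dX = f(X) dt + sigma dW: an Euler predictor followed by a trapezoidal
    correction of the drift, sharing the same Brownian increment dW."""
    x_pred = x + f(x) * dt + sigma * dW      # Euler-Maruyama predictor
    drift = 0.5 * (f(x) + f(x_pred))         # trapezoidal drift average
    return x + drift * dt + sigma * dW
```

With sigma = 0 the scheme reduces to the deterministic Heun (improved Euler) method, which is second-order accurate in dt; the non-Markovian method analysed in the paper instead shifts where the noise increment enters the step.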
41

Rathinasamy, A., and K. Balachandran. "Mean-square stability of second-order Runge–Kutta methods for multi-dimensional linear stochastic differential systems." Journal of Computational and Applied Mathematics 219, no. 1 (September 2008): 170–97. http://dx.doi.org/10.1016/j.cam.2007.07.019.

42

Tang, Xiao, and Aiguo Xiao. "Asymptotically optimal approximation of some stochastic integrals and its applications to the strong second-order methods." Advances in Computational Mathematics 45, no. 2 (October 24, 2018): 813–46. http://dx.doi.org/10.1007/s10444-018-9638-0.

43

Liu, Yan, Maojun Zhang, Zhiwei Zhong, and Xiangrong Zeng. "AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization." Computational Intelligence and Neuroscience 2021 (November 10, 2021): 1–11. http://dx.doi.org/10.1155/2021/5790608.

Abstract:
In this work, we introduce AdaCN, a novel adaptive cubic Newton method for nonconvex stochastic optimization. AdaCN dynamically captures the curvature of the loss landscape by a diagonally approximated Hessian plus the norm of the difference between the previous two estimates. It requires only first-order gradients and updates with linear complexity in both time and memory. In order to reduce the variance introduced by the stochastic nature of the problem, AdaCN uses the first and second moments to implement an exponential moving average on the iteratively updated stochastic gradients and approximated stochastic Hessians, respectively. We validate AdaCN in extensive experiments, showing that it outperforms other stochastic first-order methods (including SGD, Adam, and AdaBound) and a stochastic quasi-Newton method (i.e., Apollo), in terms of both convergence speed and generalization performance.
44

Tas, Oktay, Farshad Mirzazadeh Barijough, and Umut Ugurlu. "A Test of Second-Order Stochastic Dominance with Different Weighting Methods: Evidence from BIST-30 and DJIA." Pressacademia 4, no. 4 (December 23, 2015): 723. http://dx.doi.org/10.17261/pressacademia.2015414538.

45

Vilmart, Gilles. "Weak Second Order Multirevolution Composition Methods for Highly Oscillatory Stochastic Differential Equations with Additive or Multiplicative Noise." SIAM Journal on Scientific Computing 36, no. 4 (January 2014): A1770–A1796. http://dx.doi.org/10.1137/130935331.

46

Debrabant, Kristian, and Andreas Rößler. "Families of efficient second order Runge–Kutta methods for the weak approximation of Itô stochastic differential equations." Applied Numerical Mathematics 59, no. 3-4 (March 2009): 582–94. http://dx.doi.org/10.1016/j.apnum.2008.03.012.

47

Lütkebohmert, Eva, and Lydienne Matchie. "Value-at-Risk Computations in Stochastic Volatility Models Using Second-Order Weak Approximation Schemes." International Journal of Theoretical and Applied Finance 17, no. 01 (February 2014): 1450004. http://dx.doi.org/10.1142/s0219024914500046.

Abstract:
We explore the class of second-order weak approximation schemes (cubature methods) for the numerical simulation of joint default probabilities in credit portfolios where the firm's asset value processes are assumed to follow the multivariate Heston stochastic volatility model. Correlation between firms' asset processes is reflected by the dependence on a common set of underlying risk factors. In particular, we consider the Ninomiya–Victoir algorithm and we study the application of this method for the computation of value-at-risk and expected shortfall. Numerical simulations for these quantities for some exogenous portfolios demonstrate the numerical efficiency of the method.
48

Luo, Zhijian, and Yuntao Qian. "Stochastic sub-sampled Newton method with variance reduction." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950041. http://dx.doi.org/10.1142/s0219691319500413.

Abstract:
Stochastic optimization on large-scale machine learning problems has been developed dramatically since stochastic gradient methods with variance reduction technique were introduced. Several stochastic second-order methods, which approximate curvature information by the Hessian in stochastic setting, have been proposed for improvements. In this paper, we introduce a Stochastic Sub-Sampled Newton method with Variance Reduction (S2NMVR), which incorporates the sub-sampled Newton method and stochastic variance-reduced gradient. For many machine learning problems, the linear time Hessian-vector production provides evidence to the computational efficiency of S2NMVR. We then develop two variations of S2NMVR that preserve the estimation of Hessian inverse and decrease the computational cost of Hessian-vector product for nonlinear problems.
49

Lamrhari, D., D. Sarsri, and M. Rahmoune. "Component mode synthesis and stochastic perturbation method for dynamic analysis of large linear finite element with uncertain parameters." Journal of Mechanical Engineering and Sciences 14, no. 2 (June 22, 2020): 6753–69. http://dx.doi.org/10.15282/jmes.14.2.2020.17.0529.

Abstract:
In this paper, a method to calculate the first two moments (mean and variance) of the stochastic time response as well as the frequency functions of large FE models with probabilistic uncertainties in the physical parameters is proposed. This method is based on coupling of second order perturbation method and component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The analysis of dynamic response of stochastic finite element system can be done in the frequency domain using the frequency transfer functions and in the time domain by a direct integration of the equations of motion, using numerical procedures. The statistical first two moments of dynamic response of the reduced system are obtained by the second order perturbation method. Numerical applications have been developed to highlight effectiveness of the method developed to analyze the stochastic response of large structures.
50

Garcia-Montoya, Nina, Julienne Kabre, Jorge E. Macías-Díaz, and Qin Sheng. "Second-Order Semi-Discretized Schemes for Solving Stochastic Quenching Models on Arbitrary Spatial Grids." Discrete Dynamics in Nature and Society 2021 (May 5, 2021): 1–19. http://dx.doi.org/10.1155/2021/5530744.

Abstract:
Reaction-diffusion-advection equations provide precise interpretations for many important phenomena in complex interactions between natural and artificial systems. This paper studies second-order semi-discretizations for the numerical solution of reaction-diffusion-advection equations modeling quenching types of singularities occurring in numerous applications. Our investigations particularly focus at cases where nonuniform spatial grids are utilized. Detailed derivations and analysis are accomplished. Easy-to-use and highly effective second-order schemes are acquired. Computational experiments are presented to illustrate our results as well as to demonstrate the viability and capability of the new methods for solving singular quenching problems on arbitrary grid platforms.