Journal articles on the topic "Stochastic second order methods"


Consult the top 50 journal articles for research on the topic "Stochastic second order methods".


1

Burrage, Kevin, Ian Lenane, and Grant Lythe. "Numerical Methods for Second-Order Stochastic Differential Equations". SIAM Journal on Scientific Computing 29, no. 1 (January 2007): 245–64. http://dx.doi.org/10.1137/050646032.

2

Tocino, A., and J. Vigo-Aguiar. "Weak Second Order Conditions for Stochastic Runge–Kutta Methods". SIAM Journal on Scientific Computing 24, no. 2 (January 2002): 507–23. http://dx.doi.org/10.1137/s1064827501387814.

3

Komori, Yoshio. "Weak second-order stochastic Runge–Kutta methods for non-commutative stochastic differential equations". Journal of Computational and Applied Mathematics 206, no. 1 (September 2007): 158–73. http://dx.doi.org/10.1016/j.cam.2006.06.006.

4

Tang, Xiao, and Aiguo Xiao. "Efficient weak second-order stochastic Runge–Kutta methods for Itô stochastic differential equations". BIT Numerical Mathematics 57, no. 1 (April 26, 2016): 241–60. http://dx.doi.org/10.1007/s10543-016-0618-9.

5

Moxnes, John F., and Kjell Hausken. "Introducing Randomness into First-Order and Second-Order Deterministic Differential Equations". Advances in Mathematical Physics 2010 (2010): 1–42. http://dx.doi.org/10.1155/2010/509326.

Abstract
We incorporate randomness into deterministic theories and compare analytically and numerically some well-known stochastic theories: the Liouville process, the Ornstein-Uhlenbeck process, and a process that is Gaussian and exponentially time correlated (Ornstein-Uhlenbeck noise). Different methods of achieving the marginal densities for correlated and uncorrelated noise are discussed. Analytical results are presented for a deterministic linear friction force and a stochastic force that is uncorrelated or exponentially correlated.
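As an illustration of the Ornstein-Uhlenbeck noise discussed in this abstract, the following minimal Python sketch samples an OU path using its exact Gaussian one-step transition; the function name and parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=1000, seed=0):
    """Sample a path of dX = -theta*X dt + sigma dW using the exact
    one-step transition X_{t+dt} | X_t ~ N(a*X_t, sigma^2*(1-a^2)/(2*theta)),
    where a = exp(-theta*dt), so there is no discretization error."""
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)
    step_std = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = a * x[k] + step_std * rng.standard_normal()
    return x
```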
6

Rößler, Andreas. "Second Order Runge–Kutta Methods for Itô Stochastic Differential Equations". SIAM Journal on Numerical Analysis 47, no. 3 (January 2009): 1713–38. http://dx.doi.org/10.1137/060673308.

7

Rößler, Andreas. "Second order Runge–Kutta methods for Stratonovich stochastic differential equations". BIT Numerical Mathematics 47, no. 3 (May 12, 2007): 657–80. http://dx.doi.org/10.1007/s10543-007-0130-3.

8

Wang, Xiao, and Hongchao Zhang. "Inexact proximal stochastic second-order methods for nonconvex composite optimization". Optimization Methods and Software 35, no. 4 (January 15, 2020): 808–35. http://dx.doi.org/10.1080/10556788.2020.1713128.

9

Abdulle, Assyr, Gilles Vilmart, and Konstantinos C. Zygalakis. "Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations". SIAM Journal on Scientific Computing 35, no. 4 (January 2013): A1792–A1814. http://dx.doi.org/10.1137/12088954x.

10

Komori, Yoshio, and Kevin Burrage. "Weak second order S-ROCK methods for Stratonovich stochastic differential equations". Journal of Computational and Applied Mathematics 236, no. 11 (May 2012): 2895–908. http://dx.doi.org/10.1016/j.cam.2012.01.033.

11

Rathinasamy, Anandaraman, Davood Ahmadian, and Priya Nair. "Second-order balanced stochastic Runge–Kutta methods with multi-dimensional studies". Journal of Computational and Applied Mathematics 377 (October 2020): 112890. http://dx.doi.org/10.1016/j.cam.2020.112890.

12

Zhang, Jianling. "Multi-sample test based on bootstrap methods for second order stochastic dominance". Hacettepe Journal of Mathematics and Statistics 44, no. 13 (October 11, 2014): 1. http://dx.doi.org/10.15672/hjms.2014137464.

13

Komori, Yoshio, David Cohen, and Kevin Burrage. "Weak Second Order Explicit Exponential Runge–Kutta Methods for Stochastic Differential Equations". SIAM Journal on Scientific Computing 39, no. 6 (January 2017): A2857–A2878. http://dx.doi.org/10.1137/15m1041341.

14

Khodabin, M., K. Maleknejad, M. Rostami, and M. Nouri. "Numerical solution of stochastic differential equations by second order Runge–Kutta methods". Mathematical and Computer Modelling 53, no. 9-10 (May 2011): 1910–20. http://dx.doi.org/10.1016/j.mcm.2011.01.018.

15

Yang, Jie, Weidong Zhao, and Tao Zhou. "Explicit Deferred Correction Methods for Second-Order Forward Backward Stochastic Differential Equations". Journal of Scientific Computing 79, no. 3 (January 3, 2019): 1409–32. http://dx.doi.org/10.1007/s10915-018-00896-w.

16

Abukhaled, Marwan I., and Edward J. Allen. "Expectation Stability of Second-Order Weak Numerical Methods for Stochastic Differential Equations". Stochastic Analysis and Applications 20, no. 4 (August 28, 2002): 693–707. http://dx.doi.org/10.1081/sap-120006103.

17

Alzalg, Baha. "Decomposition-based interior point methods for stochastic quadratic second-order cone programming". Applied Mathematics and Computation 249 (December 2014): 1–18. http://dx.doi.org/10.1016/j.amc.2014.10.015.

18

Cohen, David, and Magdalena Sigg. "Convergence analysis of trigonometric methods for stiff second-order stochastic differential equations". Numerische Mathematik 121, no. 1 (November 13, 2011): 1–29. http://dx.doi.org/10.1007/s00211-011-0426-8.

19

Komori, Yoshio, and Kevin Burrage. "Supplement: Efficient weak second order stochastic Runge–Kutta methods for non-commutative Stratonovich stochastic differential equations". Journal of Computational and Applied Mathematics 235, no. 17 (July 2011): 5326–29. http://dx.doi.org/10.1016/j.cam.2011.04.021.

20

Sabelfeld, Karl K., Dmitry Smirnov, Ivan Dimov, and Venelin Todorov. "A global random walk on grid algorithm for second order elliptic equations". Monte Carlo Methods and Applications 27, no. 4 (October 27, 2021): 325–39. http://dx.doi.org/10.1515/mcma-2021-2097.

Abstract
In this paper we develop stochastic simulation methods for solving large systems of linear equations, focusing on two issues: (1) construction of global random walk (GRW) algorithms, in particular for solving systems of elliptic equations on a grid, and (2) development of local stochastic algorithms based on transforms to a balanced transition matrix. The GRW method calculates the solution at any desired family of prescribed grid points, in contrast to the classical Feynman–Kac formula based on stochastic differential equations. The use of balanced transition matrices in local random walk methods considerably decreases the variance of the random estimators and hence the computational cost, in comparison with conventional random-walk-on-grid algorithms.
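For context, the sketch below implements the conventional local random-walk-on-grid estimator for the discrete Laplace equation on the unit square, i.e. the baseline that the GRW and balanced-matrix techniques above improve upon; the boundary function and grid size are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def laplace_walk_on_grid(i0, j0, n, n_walks=2000, seed=0):
    """Estimate the discrete harmonic function u(i0, j0) on an n x n interior
    grid with Dirichlet data g by averaging g at the exit points of simple
    random walks (u equals the expected boundary value at the exit node)."""
    def g(i, j):
        return i / (n + 1)          # example boundary data: u = x, itself harmonic
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        i, j = i0, j0
        while 0 < i <= n and 0 < j <= n:          # walk until a boundary node is hit
            move = rng.integers(4)
            if move == 0:   i += 1
            elif move == 1: i -= 1
            elif move == 2: j += 1
            else:           j -= 1
        total += g(i, j)
    return total / n_walks          # estimator variance decays like 1/n_walks
```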
21

Xie, Chenghan, Chenxi Li, Chuwen Zhang, Qi Deng, Dongdong Ge, and Yinyu Ye. "Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 16049–57. http://dx.doi.org/10.1609/aaai.v38i14.29537.

Abstract
In many important machine learning applications, the standard assumption of having a globally Lipschitz continuous gradient may fail to hold. This paper delves into a more general (L0, L1)-smoothness setting, which gains particular significance within the realms of deep neural networks and distributionally robust optimization (DRO). We demonstrate the significant advantage of trust region methods for stochastic nonconvex optimization under such a generalized smoothness assumption. We show that first-order trust region methods can recover the normalized and clipped stochastic gradient as special cases and then provide a unified analysis to show their convergence to first-order stationary conditions. Motivated by the important application of DRO, we propose a generalized high-order smoothness condition, under which second-order trust region methods can achieve a complexity of O(ε^(-3.5)) for convergence to second-order stationary points. By incorporating variance reduction, the second-order trust region method obtains an even better complexity of O(ε^(-3)), matching the optimal bound for standard smooth optimization. To the best of our knowledge, this is the first work to show convergence beyond the first-order stationary condition for generalized smooth optimization. Preliminary experiments show that our proposed algorithms perform favorably compared with existing methods.
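Since the abstract notes that the clipped stochastic gradient arises as a special case of a first-order trust region step, here is a minimal sketch of that special case; the step size and clipping threshold are arbitrary illustrative values, and the function is not the paper's algorithm.

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.1, max_norm=1.0):
    """One clipped stochastic gradient step: the update direction is the
    minibatch gradient, rescaled so its norm never exceeds max_norm.
    The resulting step has length at most lr*max_norm, which is what makes
    it interpretable as a first-order trust-region step."""
    g_norm = np.linalg.norm(grad)
    if g_norm > max_norm:
        grad = grad * (max_norm / g_norm)
    return w - lr * grad
```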
22

Tang, Xiao, and Aiguo Xiao. "New explicit stabilized stochastic Runge-Kutta methods with weak second order for stiff Itô stochastic differential equations". Numerical Algorithms 82, no. 2 (October 25, 2018): 593–604. http://dx.doi.org/10.1007/s11075-018-0615-y.

23

Dentcheva, Darinka, and Andrzej Ruszczyński. "Inverse cutting plane methods for optimization problems with second-order stochastic dominance constraints". Optimization 59, no. 3 (April 2010): 323–38. http://dx.doi.org/10.1080/02331931003696350.

24

Tocino, A. "Mean-square stability of second-order Runge–Kutta methods for stochastic differential equations". Journal of Computational and Applied Mathematics 175, no. 2 (March 2005): 355–67. http://dx.doi.org/10.1016/j.cam.2004.05.019.

25

Abukhaled, Marwan I. "Mean square stability of second-order weak numerical methods for stochastic differential equations". Applied Numerical Mathematics 48, no. 2 (February 2004): 127–34. http://dx.doi.org/10.1016/j.apnum.2003.10.006.

26

Ghilli, Daria. "Viscosity methods for large deviations estimates of multiscale stochastic processes". ESAIM: Control, Optimisation and Calculus of Variations 24, no. 2 (January 26, 2018): 605–37. http://dx.doi.org/10.1051/cocv/2017051.

Abstract
We study singular perturbation problems for second order HJB equations in an unbounded setting. The main applications are large deviations estimates for the short maturity asymptotics of stochastic systems affected by a stochastic volatility, where the volatility is modelled by a process evolving at a faster time scale and satisfying some condition implying ergodicity.
27

Yousefi, Hassan, Seyed Shahram Ghorashi, and Timon Rabczuk. "Directly Simulation of Second Order Hyperbolic Systems in Second Order Form via the Regularization Concept". Communications in Computational Physics 20, no. 1 (June 22, 2016): 86–135. http://dx.doi.org/10.4208/cicp.101214.011015a.

Abstract
We present an efficient and robust method for stress wave propagation problems (second order hyperbolic systems) having discontinuities directly in their second order form. Because of numerical dispersion around discontinuities and the lack of inherent dissipation in hyperbolic systems, proper simulation of such problems is challenging. The proposed idea is to denoise spurious oscillations in a post-processing stage applied to solutions obtained from higher-order grid-based methods (e.g., high-order collocation or finite-difference schemes). The denoising is done so that the solutions remain higher-order (here, second order) around discontinuities and are still free from spurious oscillations. For this purpose, an improved Tikhonov regularization approach is proposed, which lets the data themselves select proper denoised solutions (since there are no pre-assumptions about the regularized results). The improved approach can be applied directly to uniformly or non-uniformly sampled data in a way that the regularized results maintain continuous derivatives up to some desired order. It is shown how to improve the smoothing method so that it remains conservative and has a local estimating feature. Finally, some one- and two-dimensional examples are provided, showing how both the numerical (artificial) dispersion and dissipation can be controlled around discontinuous solutions and stochastic-like results.
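To make the post-processing idea concrete, the following sketch applies plain Tikhonov smoothing with a second-difference penalty to a uniformly sampled 1D signal; the paper's improved, data-driven variant (which also handles non-uniform samples and preserves conservativeness) is more elaborate than this baseline.

```python
import numpy as np

def tikhonov_smooth(f, lam=10.0):
    """Denoise a uniformly sampled signal f by solving
        min_u ||u - f||^2 + lam * ||D2 u||^2,
    where D2 is the discrete second-difference operator; the minimizer is
    u = (I + lam * D2^T D2)^(-1) f."""
    n = len(f)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, np.asarray(f, dtype=float))
```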
28

Itkin, Andrey. "High Order Splitting Methods for Forward PDEs and PIDEs". International Journal of Theoretical and Applied Finance 18, no. 05 (July 28, 2015): 1550031. http://dx.doi.org/10.1142/s0219024915500314.

Abstract
This paper is dedicated to the construction of high order (in both space and time) finite-difference schemes for both forward and backward PDEs and PIDEs, such that option prices obtained by solving both the forward and backward equations are consistent. This approach is partly inspired by Andreassen & Huge (2011) who reported a pair of consistent finite-difference schemes of first-order approximation in time for an uncorrelated local stochastic volatility (LSV) model. We extend their approach by constructing schemes that are second-order in both space and time and that apply to models with jumps and discrete dividends. Taking correlation into account in our approach is also not an issue.
29

Yousefi, Mahsa, and Ángeles Martínez. "Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods". Algorithms 16, no. 10 (October 20, 2023): 490. http://dx.doi.org/10.3390/a16100490.

Abstract
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton methods that are able to construct Hessian approximations using only gradient information. In this work, we study the performance of stochastic quasi-Newton algorithms for training deep neural networks. We consider two well-known quasi-Newton updates, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the symmetric rank one (SR1). This study fills a gap concerning the real performance of both updates in the minibatch setting and analyzes whether more efficient training can be obtained when using the more robust BFGS update or the cheaper SR1 formula, which—allowing for indefinite Hessian approximations—can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study that includes many aspects affecting performance, like batch normalization, the network architecture, the limited memory parameter or the batch size. Our results show that stochastic quasi-Newton algorithms are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer, run with the optimal combination of its numerous hyperparameters, and the stochastic second-order trust-region STORM algorithm.
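The limited-memory BFGS update studied here is usually applied through the standard two-loop recursion; the sketch below shows that recursion on stored curvature pairs. It is a generic textbook implementation, not the specific stochastic trust-region algorithm of the paper, and it assumes the pairs satisfy the curvature condition s·y > 0.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop L-BFGS recursion: returns H*grad, where H is the implicit
    inverse-Hessian approximation built from pairs (s_k, y_k) with
    s_k = w_{k+1} - w_k and y_k = g_{k+1} - g_k.  In a minibatch setting
    both gradients entering y_k are typically evaluated on the same batch."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    if s_list:                         # scale by gamma*I as the initial matrix
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
        q *= gamma
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, q)
        q += (a - beta) * s
    return q                           # use -q as the search direction
```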
30

Abukhaled, M. I., and E. J. Allen. "A class of second-order Runge-Kutta methods for numerical solution of stochastic differential equations". Stochastic Analysis and Applications 16, no. 6 (January 1998): 977–91. http://dx.doi.org/10.1080/07362999808809575.

31

Rudolf, Gábor, and Andrzej Ruszczyński. "Optimization Problems with Second Order Stochastic Dominance Constraints: Duality, Compact Formulations, and Cut Generation Methods". SIAM Journal on Optimization 19, no. 3 (January 2008): 1326–43. http://dx.doi.org/10.1137/070702473.

32

Ahn, T. H., and A. Sandu. "Implicit Second Order Weak Taylor Tau-Leaping Methods for the Stochastic Simulation of Chemical Kinetics". Procedia Computer Science 4 (2011): 2297–306. http://dx.doi.org/10.1016/j.procs.2011.04.250.

33

Meskarian, Rudabeh, Huifu Xu, and Jörg Fliege. "Numerical methods for stochastic programs with second order dominance constraints with applications to portfolio optimization". European Journal of Operational Research 216, no. 2 (January 2012): 376–85. http://dx.doi.org/10.1016/j.ejor.2011.07.044.

34

Lu, Lu, Yu Yuan, Heng Wang, Xing Zhao, and Jianjie Zheng. "A New Second-Order Tristable Stochastic Resonance Method for Fault Diagnosis". Symmetry 11, no. 8 (August 1, 2019): 965. http://dx.doi.org/10.3390/sym11080965.

Abstract
Vibration signals are used to diagnose faults of rolling bearings, which are symmetric structures. Stochastic resonance (SR) has been widely applied to weak-signal feature extraction in recent years, since it can exploit noise to enhance weak signals. However, the traditional SR method has poor performance, and it is difficult to determine its parameters. Therefore, a new second-order tristable SR method (STSR), based on a new potential combining the classical bistable potential with the Woods-Saxon potential, is proposed in this paper. First, the envelope signal of the rolling bearing is taken as the input of the STSR. Then, the output signal-to-noise ratio (SNR) is used as the fitness function of the Seeker Optimization Algorithm (SOA) to optimize the SR parameters. Finally, the optimal parameters are used to set up the STSR system in order to enhance and extract weak signals of rolling bearings. Simulated and experimental signals are used to demonstrate the effectiveness of STSR. The diagnosis results show that the proposed STSR method obtains higher output SNR and better filtering performance than traditional SR methods, providing a new approach for fault diagnosis of rotating machinery.
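For orientation, the sketch below simulates the classical first-order bistable SR system with Euler-Maruyama; it is only a simplified stand-in for intuition (the paper's method is second-order and tristable, and tunes its parameters with the SOA).

```python
import numpy as np

def bistable_sr(signal, dt=0.01, a=1.0, b=1.0, noise_std=0.5, seed=0):
    """Euler-Maruyama simulation of the classical bistable SR system
        dx = (a*x - b*x**3 + s(t)) dt + noise_std dW.
    With suitable noise intensity, the noise helps a weak periodic component
    of s(t) drive transitions between the two potential wells."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(signal))
    sqrt_dt = np.sqrt(dt)
    for k in range(1, len(signal)):
        drift = a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1]
        x[k] = x[k - 1] + drift * dt + noise_std * sqrt_dt * rng.standard_normal()
    return x
```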
35

Pellegrino, Tommaso. "Second-Order Stochastic Volatility Asymptotics and the Pricing of Foreign Exchange Derivatives". International Journal of Theoretical and Applied Finance 23, no. 03 (May 2020): 2050021. http://dx.doi.org/10.1142/s0219024920500211.

Abstract
We consider models for the pricing of foreign exchange derivatives, where the underlying asset volatility as well as the one for the foreign exchange rate are stochastic. Under this framework, singular perturbation methods have been used to derive first-order approximations for European option prices. In this paper, based on a previous result for the calibration and pricing of single underlying options, we derive the second-order approximation pricing formula in the two-dimensional case and we apply it to the pricing of foreign exchange options.
36

Rathinasamy, Anandaraman, and Priya Nair. "Asymptotic mean-square stability of weak second-order balanced stochastic Runge–Kutta methods for multi-dimensional Itô stochastic differential systems". Applied Mathematics and Computation 332 (September 2018): 276–303. http://dx.doi.org/10.1016/j.amc.2018.03.065.

37

Namachchivaya, N. S., and Gerard Leng. "Equivalence of Stochastic Averaging and Stochastic Normal Forms". Journal of Applied Mechanics 57, no. 4 (December 1, 1990): 1011–17. http://dx.doi.org/10.1115/1.2897619.

Abstract
The equivalence of the methods of stochastic averaging and stochastic normal forms is demonstrated for systems under the effect of linear multiplicative and additive noise. It is shown that both methods lead to reduced systems with the same Markovian approximation. The key result is that the second-order stochastic terms have to be retained in the normal form computation. Examples showing applications to systems undergoing divergence and flutter instability are provided. Furthermore, it is shown that unlike stochastic averaging, stochastic normal forms can be used in the analysis of nilpotent systems to eliminate the stable modes. Finally, some results pertaining to stochastic Lorenz equations are presented.
38

Zhou, Jingcheng, Wei Wei, Ruizhi Zhang, and Zhiming Zheng. "Damped Newton Stochastic Gradient Descent Method for Neural Networks Training". Mathematics 9, no. 13 (June 29, 2021): 1533. http://dx.doi.org/10.3390/math9131533.

Abstract
First-order methods such as stochastic gradient descent (SGD) have become popular for training deep neural networks (DNNs) with good generalization; however, they need a long training time. Second-order methods, which can lower the training time, are scarcely used because of the high computational cost of obtaining second-order information. Thus, many works approximate the Hessian matrix to cut the cost of computation, although the approximation can deviate substantially from the true Hessian. In this paper, we explore the convexity of the Hessian matrix of a subset of the parameters and propose the damped Newton stochastic gradient descent (DN-SGD) and stochastic gradient descent damped Newton (SGD-DN) methods to train DNNs for regression problems with mean square error (MSE) and classification problems with cross-entropy loss (CEL). In contrast to other second-order methods that estimate the Hessian matrix of all parameters, our methods compute the Hessian accurately only for a small part of the parameters, which greatly reduces the computational cost and makes the convergence of the learning process much faster and more accurate than SGD and Adagrad. Several numerical experiments on real datasets verify the effectiveness of our methods for regression and classification problems.
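The idea of applying a damped Newton step only to a cheap, convex block of parameters can be illustrated on a linear output layer under MSE loss, where the block Hessian is simply X^T X / n; this sketch is a hedged illustration of that idea, not the DN-SGD/SGD-DN algorithms themselves.

```python
import numpy as np

def damped_newton_last_layer(W, X, Y, lr=1.0, damping=1e-2):
    """Damped Newton update for the weights W of a linear output layer under
    MSE loss 0.5*||X @ W - Y||^2 / n: the gradient is X^T (X W - Y) / n and
    the block Hessian X^T X / n is cheap to form and positive semidefinite;
    the damping term keeps the linear system well conditioned."""
    n = X.shape[0]
    grad = X.T @ (X @ W - Y) / n
    H = X.T @ X / n + damping * np.eye(X.shape[1])
    return W - lr * np.linalg.solve(H, grad)
```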
39

Huang, Xunpeng, Xianfeng Liang, Zhengyang Liu, Lei Li, Yue Yu, and Yitan Li. "SPAN: A Stochastic Projected Approximate Newton Method". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1520–27. http://dx.doi.org/10.1609/aaai.v34i02.5511.

Abstract
Second-order optimization methods have desirable convergence properties. However, the exact Newton method requires expensive computation for the Hessian and its inverse. In this paper, we propose SPAN, a novel approximate and fast Newton method. SPAN computes the inverse of the Hessian matrix via low-rank approximation and stochastic Hessian-vector products. Our experiments on multiple benchmark datasets demonstrate that SPAN outperforms existing first-order and second-order optimization methods in terms of the convergence wall-clock time. Furthermore, we provide a theoretical analysis of the per-iteration complexity, the approximation error, and the convergence rate. Both the theoretical analysis and experimental results show that our proposed method achieves a better trade-off between the convergence rate and the per-iteration efficiency.
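The stochastic Hessian-vector products that SPAN relies on can be formed without ever materializing the Hessian; the sketch below uses a finite difference of minibatch gradients as a simple stand-in (SPAN itself would typically use exact automatic differentiation), with grad_fn a hypothetical user-supplied gradient routine.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, batch, eps=1e-4):
    """Approximate H(w) @ v for the minibatch loss using a forward difference
    of gradients: (g(w + eps*v) - g(w)) / eps.  grad_fn(w, batch) is assumed
    to return the minibatch gradient at w as a NumPy array."""
    g0 = grad_fn(w, batch)
    g1 = grad_fn(w + eps * v, batch)
    return (g1 - g0) / eps
```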
40

Leimkuhler, B., C. Matthews, and M. V. Tretyakov. "On the long-time integration of stochastic gradient systems". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 470, no. 2170 (October 8, 2014): 20140120. http://dx.doi.org/10.1098/rspa.2014.0120.

Abstract
This article addresses the weak convergence of numerical methods for Brownian dynamics. Typical analyses of numerical methods for stochastic differential equations focus on properties such as the weak order which estimates the asymptotic (stepsize h → 0 ) convergence behaviour of the error of finite-time averages. Recently, it has been demonstrated, by study of Fokker–Planck operators, that a non-Markovian numerical method generates approximations in the long-time limit with higher accuracy order (second order) than would be expected from its weak convergence analysis (finite-time averages are first-order accurate). In this article, we describe the transition from the transient to the steady-state regime of this numerical method by estimating the time-dependency of the coefficients in an asymptotic expansion for the weak error, demonstrating that the convergence to second order is exponentially rapid in time. Moreover, we provide numerical tests of the theory, including comparisons of the efficiencies of the Euler–Maruyama method, the popular second-order Heun method, and the non-Markovian method.
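The Euler-Maruyama and Heun integrators compared in this study can be written in a few lines for overdamped Langevin (Brownian) dynamics; the sketch below is a generic textbook version under assumed parameter names, not the non-Markovian scheme analysed in the paper.

```python
import numpy as np

def euler_maruyama_step(x, force, dt, beta, rng):
    """One Euler-Maruyama step for dX = F(X) dt + sqrt(2/beta) dW,
    where F = -grad U is the force and beta the inverse temperature."""
    noise = np.sqrt(2.0 * dt / beta) * rng.standard_normal(x.shape)
    return x + force(x) * dt + noise

def heun_step(x, force, dt, beta, rng):
    """One stochastic Heun step: an Euler predictor followed by averaging the
    drift at the old and predicted points, reusing the same noise increment."""
    noise = np.sqrt(2.0 * dt / beta) * rng.standard_normal(x.shape)
    x_pred = x + force(x) * dt + noise
    return x + 0.5 * (force(x) + force(x_pred)) * dt + noise
```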
41

Rathinasamy, A., and K. Balachandran. "Mean-square stability of second-order Runge–Kutta methods for multi-dimensional linear stochastic differential systems". Journal of Computational and Applied Mathematics 219, no. 1 (September 2008): 170–97. http://dx.doi.org/10.1016/j.cam.2007.07.019.

42

Tang, Xiao, and Aiguo Xiao. "Asymptotically optimal approximation of some stochastic integrals and its applications to the strong second-order methods". Advances in Computational Mathematics 45, no. 2 (October 24, 2018): 813–46. http://dx.doi.org/10.1007/s10444-018-9638-0.

43

Liu, Yan, Maojun Zhang, Zhiwei Zhong, and Xiangrong Zeng. "AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization". Computational Intelligence and Neuroscience 2021 (November 10, 2021): 1–11. http://dx.doi.org/10.1155/2021/5790608.

Abstract
In this work, we introduce AdaCN, a novel adaptive cubic Newton method for nonconvex stochastic optimization. AdaCN dynamically captures the curvature of the loss landscape through a diagonally approximated Hessian plus the norm of the difference between the previous two estimates. It requires at most first-order gradients and updates with linear complexity in both time and memory. To reduce the variance introduced by the stochastic nature of the problem, AdaCN uses the first and second moments to apply an exponential moving average to the iteratively updated stochastic gradients and approximated stochastic Hessians, respectively. We validate AdaCN in extensive experiments, showing that it outperforms other stochastic first-order methods (including SGD, Adam, and AdaBound) and a stochastic quasi-Newton method (Apollo) in terms of both convergence speed and generalization performance.
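As a very loose illustration of combining moving averages with a diagonal curvature estimate built only from gradients, consider the following sketch; it uses a secant-style element-wise curvature proxy and is explicitly not the AdaCN update, whose cubic-regularized rule differs.

```python
import numpy as np

def diag_curvature_step(w, grad, state, lr=0.1, beta1=0.9, beta2=0.99, eps=1e-8):
    """Adaptive step with exponential moving averages of the gradient (m) and
    of an element-wise secant curvature proxy |delta g| / |delta w| (d),
    using only first-order information; a loose sketch, not AdaCN itself."""
    if "m" not in state:                     # lazily initialize optimizer state
        state.update(m=np.zeros_like(w), d=np.ones_like(w),
                     prev_w=None, prev_grad=None)
    if state["prev_w"] is not None:
        dw = w - state["prev_w"]
        dg = grad - state["prev_grad"]
        curv = np.abs(dg) / (np.abs(dw) + eps)
        state["d"] = beta2 * state["d"] + (1 - beta2) * curv
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["prev_w"], state["prev_grad"] = w.copy(), grad.copy()
    return w - lr * state["m"] / (state["d"] + eps)
```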
44

Tas, Oktay, Farshad Mirzazadeh Barijough, and Umut Ugurlu. "A Test of Second-Order Stochastic Dominance with Different Weighting Methods: Evidence from BIST-30 and DJIA". Pressacademia 4, no. 4 (December 23, 2015): 723. http://dx.doi.org/10.17261/pressacademia.2015414538.

45

Vilmart, Gilles. "Weak Second Order Multirevolution Composition Methods for Highly Oscillatory Stochastic Differential Equations with Additive or Multiplicative Noise". SIAM Journal on Scientific Computing 36, no. 4 (January 2014): A1770–A1796. http://dx.doi.org/10.1137/130935331.

46

Debrabant, Kristian, and Andreas Rößler. "Families of efficient second order Runge–Kutta methods for the weak approximation of Itô stochastic differential equations". Applied Numerical Mathematics 59, no. 3-4 (March 2009): 582–94. http://dx.doi.org/10.1016/j.apnum.2008.03.012.

47

Lütkebohmert, Eva, and Lydienne Matchie. "Value-at-Risk Computations in Stochastic Volatility Models Using Second-Order Weak Approximation Schemes". International Journal of Theoretical and Applied Finance 17, no. 01 (February 2014): 1450004. http://dx.doi.org/10.1142/s0219024914500046.

Abstract
We explore the class of second-order weak approximation schemes (cubature methods) for the numerical simulation of joint default probabilities in credit portfolios where the firm's asset value processes are assumed to follow the multivariate Heston stochastic volatility model. Correlation between firms' asset processes is reflected by the dependence on a common set of underlying risk factors. In particular, we consider the Ninomiya–Victoir algorithm and we study the application of this method for the computation of value-at-risk and expected shortfall. Numerical simulations for these quantities for some exogenous portfolios demonstrate the numerical efficiency of the method.
48

Luo, Zhijian, and Yuntao Qian. "Stochastic sub-sampled Newton method with variance reduction". International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950041. http://dx.doi.org/10.1142/s0219691319500413.

Abstract
Stochastic optimization for large-scale machine learning problems has developed dramatically since stochastic gradient methods with variance reduction techniques were introduced. Several stochastic second-order methods, which approximate curvature information via the Hessian in the stochastic setting, have been proposed as improvements. In this paper, we introduce a Stochastic Sub-Sampled Newton method with Variance Reduction (S2NMVR), which incorporates the sub-sampled Newton method and the stochastic variance-reduced gradient. For many machine learning problems, the linear-time Hessian-vector product underpins the computational efficiency of S2NMVR. We then develop two variations of S2NMVR that preserve the estimate of the Hessian inverse and decrease the computational cost of the Hessian-vector product for nonlinear problems.
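The variance-reduced gradient that S2NMVR incorporates is the standard SVRG correction; a minimal sketch of that ingredient (with grad_batch a hypothetical minibatch-gradient routine) looks as follows.

```python
def svrg_gradient(grad_batch, w, w_snapshot, full_grad_snapshot, batch):
    """Standard SVRG estimator: the minibatch gradient at w is corrected by
    the same minibatch's gradient at a snapshot point plus the full gradient
    at the snapshot, giving an unbiased, lower-variance gradient estimate."""
    return grad_batch(w, batch) - grad_batch(w_snapshot, batch) + full_grad_snapshot
```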
49

Lamrhari, D., D. Sarsri, and M. Rahmoune. "Component mode synthesis and stochastic perturbation method for dynamic analysis of large linear finite element with uncertain parameters". Journal of Mechanical Engineering and Sciences 14, no. 2 (June 22, 2020): 6753–69. http://dx.doi.org/10.15282/jmes.14.2.2020.17.0529.

Abstract
In this paper, a method is proposed to calculate the first two moments (mean and variance) of the stochastic time response, as well as the frequency functions, of large FE models with probabilistic uncertainties in the physical parameters. The method is based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The dynamic response of the stochastic finite element system can be analyzed in the frequency domain using the frequency transfer functions, and in the time domain by direct integration of the equations of motion using numerical procedures. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical applications are developed to highlight the effectiveness of the method for analyzing the stochastic response of large structures.
50

Garcia-Montoya, Nina, Julienne Kabre, Jorge E. Macías-Díaz, and Qin Sheng. "Second-Order Semi-Discretized Schemes for Solving Stochastic Quenching Models on Arbitrary Spatial Grids". Discrete Dynamics in Nature and Society 2021 (May 5, 2021): 1–19. http://dx.doi.org/10.1155/2021/5530744.

Abstract
Reaction-diffusion-advection equations provide precise interpretations for many important phenomena in complex interactions between natural and artificial systems. This paper studies second-order semi-discretizations for the numerical solution of reaction-diffusion-advection equations modeling quenching types of singularities occurring in numerous applications. Our investigation focuses in particular on cases where nonuniform spatial grids are utilized. Detailed derivations and analysis are accomplished. Easy-to-use and highly effective second-order schemes are acquired. Computational experiments are presented to illustrate our results, as well as to demonstrate the viability and capability of the new methods for solving singular quenching problems on arbitrary grid platforms.
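A basic building block for semi-discretizations on nonuniform grids is the three-point approximation of the second derivative; the sketch below implements the classical formula (second-order accurate on smoothly graded grids), which is only a starting point relative to the schemes analysed in the paper.

```python
import numpy as np

def second_derivative_nonuniform(u, x):
    """Three-point approximation of u'' at the interior nodes of an arbitrary
    grid x:  u'' ~ 2*(h1*u[i+1] - (h1+h2)*u[i] + h2*u[i-1]) / (h1*h2*(h1+h2)),
    with h1 = x[i]-x[i-1] and h2 = x[i+1]-x[i]."""
    u = np.asarray(u, dtype=float)
    x = np.asarray(x, dtype=float)
    h1 = x[1:-1] - x[:-2]
    h2 = x[2:] - x[1:-1]
    return 2.0 * (h1 * u[2:] - (h1 + h2) * u[1:-1] + h2 * u[:-2]) / (h1 * h2 * (h1 + h2))
```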