Journal articles on the topic 'Stochastic second order methods'

Consult the top 50 journal articles for your research on the topic 'Stochastic second order methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Burrage, Kevin, Ian Lenane, and Grant Lythe. "Numerical Methods for Second‐Order Stochastic Differential Equations." SIAM Journal on Scientific Computing 29, no. 1 (January 2007): 245–64. http://dx.doi.org/10.1137/050646032.

2

Tocino, A., and J. Vigo-Aguiar. "Weak Second Order Conditions for Stochastic Runge–Kutta Methods." SIAM Journal on Scientific Computing 24, no. 2 (January 2002): 507–23. http://dx.doi.org/10.1137/s1064827501387814.

3

Komori, Yoshio. "Weak second-order stochastic Runge–Kutta methods for non-commutative stochastic differential equations." Journal of Computational and Applied Mathematics 206, no. 1 (September 2007): 158–73. http://dx.doi.org/10.1016/j.cam.2006.06.006.

4

Tang, Xiao, and Aiguo Xiao. "Efficient weak second-order stochastic Runge–Kutta methods for Itô stochastic differential equations." BIT Numerical Mathematics 57, no. 1 (April 26, 2016): 241–60. http://dx.doi.org/10.1007/s10543-016-0618-9.

5

Moxnes, John F., and Kjell Hausken. "Introducing Randomness into First-Order and Second-Order Deterministic Differential Equations." Advances in Mathematical Physics 2010 (2010): 1–42. http://dx.doi.org/10.1155/2010/509326.

Abstract:
We incorporate randomness into deterministic theories and compare analytically and numerically some well-known stochastic theories: the Liouville process, the Ornstein-Uhlenbeck process, and a process that is Gaussian and exponentially time correlated (Ornstein-Uhlenbeck noise). Different methods of obtaining the marginal densities for correlated and uncorrelated noise are discussed. Analytical results are presented for a deterministic linear friction force and a stochastic force that is uncorrelated or exponentially correlated.
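The Ornstein–Uhlenbeck process discussed in this abstract is straightforward to simulate numerically. A minimal Euler–Maruyama sketch follows; the parameter values are illustrative and not taken from the paper:

```python
import math
import random

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=1.0, dt=1e-3, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X) dt + sigma dW.
    Parameter values are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x += theta * (mu - x) * dt + sigma * dw
        path.append(x)
    return path

path = simulate_ou()  # mean-reverting sample path started at x0 = 1
```

The drift term pulls the state back toward the mean mu at rate theta, which is what makes the process exponentially time correlated.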
6

Rößler, Andreas. "Second Order Runge–Kutta Methods for Itô Stochastic Differential Equations." SIAM Journal on Numerical Analysis 47, no. 3 (January 2009): 1713–38. http://dx.doi.org/10.1137/060673308.

7

Rößler, Andreas. "Second order Runge–Kutta methods for Stratonovich stochastic differential equations." BIT Numerical Mathematics 47, no. 3 (May 12, 2007): 657–80. http://dx.doi.org/10.1007/s10543-007-0130-3.

8

Wang, Xiao, and Hongchao Zhang. "Inexact proximal stochastic second-order methods for nonconvex composite optimization." Optimization Methods and Software 35, no. 4 (January 15, 2020): 808–35. http://dx.doi.org/10.1080/10556788.2020.1713128.

9

Abdulle, Assyr, Gilles Vilmart, and Konstantinos C. Zygalakis. "Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations." SIAM Journal on Scientific Computing 35, no. 4 (January 2013): A1792–A1814. http://dx.doi.org/10.1137/12088954x.

10

Komori, Yoshio, and Kevin Burrage. "Weak second order S-ROCK methods for Stratonovich stochastic differential equations." Journal of Computational and Applied Mathematics 236, no. 11 (May 2012): 2895–908. http://dx.doi.org/10.1016/j.cam.2012.01.033.

11

Rathinasamy, Anandaraman, Davood Ahmadian, and Priya Nair. "Second-order balanced stochastic Runge–Kutta methods with multi-dimensional studies." Journal of Computational and Applied Mathematics 377 (October 2020): 112890. http://dx.doi.org/10.1016/j.cam.2020.112890.

12

Zhang, Jianling. "Multi-sample test based on bootstrap methods for second order stochastic dominance." Hacettepe Journal of Mathematics and Statistics 44, no. 13 (October 11, 2014): 1. http://dx.doi.org/10.15672/hjms.2014137464.

13

Komori, Yoshio, David Cohen, and Kevin Burrage. "Weak Second Order Explicit Exponential Runge–Kutta Methods for Stochastic Differential Equations." SIAM Journal on Scientific Computing 39, no. 6 (January 2017): A2857–A2878. http://dx.doi.org/10.1137/15m1041341.

14

Khodabin, M., K. Maleknejad, M. Rostami, and M. Nouri. "Numerical solution of stochastic differential equations by second order Runge–Kutta methods." Mathematical and Computer Modelling 53, no. 9-10 (May 2011): 1910–20. http://dx.doi.org/10.1016/j.mcm.2011.01.018.

15

Yang, Jie, Weidong Zhao, and Tao Zhou. "Explicit Deferred Correction Methods for Second-Order Forward Backward Stochastic Differential Equations." Journal of Scientific Computing 79, no. 3 (January 3, 2019): 1409–32. http://dx.doi.org/10.1007/s10915-018-00896-w.

16

Abukhaled, Marwan I., and Edward J. Allen. "EXPECTATION STABILITY OF SECOND-ORDER WEAK NUMERICAL METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS." Stochastic Analysis and Applications 20, no. 4 (August 28, 2002): 693–707. http://dx.doi.org/10.1081/sap-120006103.

17

Alzalg, Baha. "Decomposition-based interior point methods for stochastic quadratic second-order cone programming." Applied Mathematics and Computation 249 (December 2014): 1–18. http://dx.doi.org/10.1016/j.amc.2014.10.015.

18

Cohen, David, and Magdalena Sigg. "Convergence analysis of trigonometric methods for stiff second-order stochastic differential equations." Numerische Mathematik 121, no. 1 (November 13, 2011): 1–29. http://dx.doi.org/10.1007/s00211-011-0426-8.

19

Komori, Yoshio, and Kevin Burrage. "Supplement: Efficient weak second order stochastic Runge–Kutta methods for non-commutative Stratonovich stochastic differential equations." Journal of Computational and Applied Mathematics 235, no. 17 (July 2011): 5326–29. http://dx.doi.org/10.1016/j.cam.2011.04.021.

20

Sabelfeld, Karl K., Dmitry Smirnov, Ivan Dimov, and Venelin Todorov. "A global random walk on grid algorithm for second order elliptic equations." Monte Carlo Methods and Applications 27, no. 4 (October 27, 2021): 325–39. http://dx.doi.org/10.1515/mcma-2021-2097.

Abstract:
In this paper we develop stochastic simulation methods for solving large systems of linear equations, focusing on two issues: (1) construction of global random walk (GRW) algorithms, in particular for solving systems of elliptic equations on a grid, and (2) development of local stochastic algorithms based on transformations to a balanced transition matrix. The GRW method calculates the solution in any desired family of prescribed points of the grid, in contrast to the classical Feynman–Kac formula based on stochastic differential equations. The use of balanced transition matrices in local random walk methods considerably decreases the variance of the random estimators and hence reduces the computational cost in comparison with conventional random walk on grid algorithms.
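For context, the conventional random walk on grid estimator that such methods improve upon can be sketched for the Dirichlet problem of the 2-D Laplace equation. The boundary data below are a hypothetical toy example, not the paper's test cases:

```python
import random

def laplace_rw_estimate(x, y, n, boundary, n_walks=2000, seed=1):
    """Classical random walk on grid estimator for Laplace's equation on
    an (n+1) x (n+1) grid: from each interior node, step to a uniformly
    chosen neighbour until the boundary is hit, then average the
    boundary values over many independent walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        i, j = x, y
        while 0 < i < n and 0 < j < n:  # walk until a boundary node is hit
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walks

# Toy boundary data: u = 1 on the top edge, 0 elsewhere; by symmetry the
# exact solution at the centre of the square is 0.25.
u_centre = laplace_rw_estimate(5, 5, 10, lambda i, j: 1.0 if j == 10 else 0.0)
```

The variance of this plain estimator is what motivates the balanced transition matrices discussed in the abstract.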
21

Xie, Chenghan, Chenxi Li, Chuwen Zhang, Qi Deng, Dongdong Ge, and Yinyu Ye. "Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 16049–57. http://dx.doi.org/10.1609/aaai.v38i14.29537.

Abstract:
In many important machine learning applications, the standard assumption of having a globally Lipschitz continuous gradient may fail to hold. This paper delves into a more general (L0, L1)-smoothness setting, which gains particular significance within the realms of deep neural networks and distributionally robust optimization (DRO). We demonstrate the significant advantage of trust region methods for stochastic nonconvex optimization under such a generalized smoothness assumption. We show that first-order trust region methods can recover the normalized and clipped stochastic gradient as special cases and then provide a unified analysis to show their convergence to first-order stationary conditions. Motivated by the important application of DRO, we propose a generalized high-order smoothness condition, under which second-order trust region methods can achieve a complexity of O(ε^(-3.5)) for convergence to second-order stationary points. By incorporating variance reduction, the second-order trust region method obtains an even better complexity of O(ε^(-3)), matching the optimal bound for standard smooth optimization. To the best of our knowledge, this is the first work to show convergence beyond the first-order stationary condition for generalized smooth optimization. Preliminary experiments show that our proposed algorithms perform favorably compared with existing methods.
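The abstract's observation that clipped stochastic gradient arises as a first-order trust-region step can be made concrete. The sketch below is illustrative, not the authors' exact update rule: minimising the linear model over a ball of radius r yields a step of length at most r, which is precisely gradient clipping:

```python
def clipped_step(params, grads, lr=0.1, radius=1.0):
    """One clipped stochastic gradient step, viewed as a first-order
    trust-region update: minimising the linear model <g, d> over the
    ball ||d|| <= radius gives d = -radius * g / ||g||, and clipping
    simply caps the step length when ||g|| exceeds the radius.
    Illustrative sketch, not the paper's exact algorithm."""
    norm = sum(g * g for g in grads) ** 0.5
    scale = min(1.0, radius / max(norm, 1e-12))  # clip factor
    return [p - lr * scale * g for p, g in zip(params, grads)]

new = clipped_step([1.0, -2.0], [3.0, 4.0])  # ||g|| = 5, so the step is clipped
```

When the gradient norm is below the radius, the scale factor is 1 and the update reduces to plain SGD.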
22

Tang, Xiao, and Aiguo Xiao. "New explicit stabilized stochastic Runge-Kutta methods with weak second order for stiff Itô stochastic differential equations." Numerical Algorithms 82, no. 2 (October 25, 2018): 593–604. http://dx.doi.org/10.1007/s11075-018-0615-y.

23

Dentcheva, Darinka, and Andrzej Ruszczyński. "Inverse cutting plane methods for optimization problems with second-order stochastic dominance constraints." Optimization 59, no. 3 (April 2010): 323–38. http://dx.doi.org/10.1080/02331931003696350.

24

Tocino, A. "Mean-square stability of second-order Runge–Kutta methods for stochastic differential equations." Journal of Computational and Applied Mathematics 175, no. 2 (March 2005): 355–67. http://dx.doi.org/10.1016/j.cam.2004.05.019.

25

Abukhaled, Marwan I. "Mean square stability of second-order weak numerical methods for stochastic differential equations." Applied Numerical Mathematics 48, no. 2 (February 2004): 127–34. http://dx.doi.org/10.1016/j.apnum.2003.10.006.

26

Ghilli, Daria. "Viscosity methods for large deviations estimates of multiscale stochastic processes." ESAIM: Control, Optimisation and Calculus of Variations 24, no. 2 (January 26, 2018): 605–37. http://dx.doi.org/10.1051/cocv/2017051.

Abstract:
We study singular perturbation problems for second order HJB equations in an unbounded setting. The main applications are large deviations estimates for the short maturity asymptotics of stochastic systems affected by a stochastic volatility, where the volatility is modelled by a process evolving at a faster time scale and satisfying some condition implying ergodicity.
27

Yousefi, Hassan, Seyed Shahram Ghorashi, and Timon Rabczuk. "Directly Simulation of Second Order Hyperbolic Systems in Second Order Form via the Regularization Concept." Communications in Computational Physics 20, no. 1 (June 22, 2016): 86–135. http://dx.doi.org/10.4208/cicp.101214.011015a.

Abstract:
We present an efficient and robust method for stress wave propagation problems (second-order hyperbolic systems) having discontinuities, directly in their second-order form. Owing to numerical dispersion around discontinuities and the lack of inherent dissipation in hyperbolic systems, proper simulation of such problems is challenging. The proposed idea is to denoise spurious oscillations in a post-processing stage applied to solutions obtained from higher-order grid-based methods (e.g., high-order collocation or finite-difference schemes). The denoising is done so that the solutions remain higher-order (here, second order) around discontinuities and are still free from spurious oscillations. For this purpose, an improved Tikhonov regularization approach is advised, meaning that the data themselves select proper denoised solutions (since there are no pre-assumptions about the regularized results). The improved approach can be applied directly to uniformly or non-uniformly sampled data in a way that the regularized results maintain continuous derivatives up to some desired order. It is shown how to improve the smoothing method so that it remains conservative and has a local estimating feature. To confirm the effectiveness of the proposed approach, some one- and two-dimensional examples are provided. It is shown how both the numerical (artificial) dispersion and dissipation can be controlled around discontinuous solutions and stochastic-like results.
28

ITKIN, ANDREY. "HIGH ORDER SPLITTING METHODS FOR FORWARD PDEs AND PIDEs." International Journal of Theoretical and Applied Finance 18, no. 05 (July 28, 2015): 1550031. http://dx.doi.org/10.1142/s0219024915500314.

Abstract:
This paper is dedicated to the construction of high order (in both space and time) finite-difference schemes for both forward and backward PDEs and PIDEs, such that option prices obtained by solving both the forward and backward equations are consistent. This approach is partly inspired by Andreassen & Huge (2011) who reported a pair of consistent finite-difference schemes of first-order approximation in time for an uncorrelated local stochastic volatility (LSV) model. We extend their approach by constructing schemes that are second-order in both space and time and that apply to models with jumps and discrete dividends. Taking correlation into account in our approach is also not an issue.
29

Yousefi, Mahsa, and Ángeles Martínez. "Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods." Algorithms 16, no. 10 (October 20, 2023): 490. http://dx.doi.org/10.3390/a16100490.

Abstract:
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton methods that are able to construct Hessian approximations using only gradient information. In this work, we study the performance of stochastic quasi-Newton algorithms for training deep neural networks. We consider two well-known quasi-Newton updates, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the symmetric rank one (SR1). This study fills a gap concerning the real performance of both updates in the minibatch setting and analyzes whether more efficient training can be obtained when using the more robust BFGS update or the cheaper SR1 formula, which—allowing for indefinite Hessian approximations—can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study that includes many aspects affecting performance, like batch normalization, the network architecture, the limited memory parameter or the batch size. Our results show that stochastic quasi-Newton algorithms are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer, run with the optimal combination of its numerous hyperparameters, and the stochastic second-order trust-region STORM algorithm.
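The limited-memory BFGS update at the heart of such stochastic quasi-Newton methods is typically implemented with the standard two-loop recursion. A minimal plain-Python sketch of that recursion follows (deterministic, without the minibatch sampling the paper studies):

```python
def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: compute -H_approx * grad from stored
    curvature pairs (s = parameter difference, y = gradient difference),
    oldest pair first in s_list/y_list. Plain-list sketch of the
    standard algorithm, not the paper's stochastic variant."""
    q = list(grad)
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest to oldest
        rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append((alpha, rho))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    if s_list:  # initial Hessian scaling gamma = s.y / y.y (latest pair)
        s, y = s_list[-1], y_list[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for (alpha, rho), s, y in zip(reversed(alphas), s_list, y_list):  # oldest first
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]
```

With no stored pairs the recursion falls back to steepest descent, and with pairs consistent with an identity Hessian it again returns the negative gradient, a quick sanity check on the implementation.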
30

Abukhaled, M. I., and E. J. Allen. "A class of second-order Runge-Kutta methods for numerical solution of stochastic differential equations." Stochastic Analysis and Applications 16, no. 6 (January 1998): 977–91. http://dx.doi.org/10.1080/07362999808809575.

31

Rudolf, Gábor, and Andrzej Ruszczyński. "Optimization Problems with Second Order Stochastic Dominance Constraints: Duality, Compact Formulations, and Cut Generation Methods." SIAM Journal on Optimization 19, no. 3 (January 2008): 1326–43. http://dx.doi.org/10.1137/070702473.

32

Ahn, T. H., and A. Sandu. "Implicit Second Order Weak Taylor Tau-Leaping Methods for the Stochastic Simulation of Chemical Kinetics." Procedia Computer Science 4 (2011): 2297–306. http://dx.doi.org/10.1016/j.procs.2011.04.250.

33

Meskarian, Rudabeh, Huifu Xu, and Jörg Fliege. "Numerical methods for stochastic programs with second order dominance constraints with applications to portfolio optimization." European Journal of Operational Research 216, no. 2 (January 2012): 376–85. http://dx.doi.org/10.1016/j.ejor.2011.07.044.

34

Lu, Lu, Yu Yuan, Heng Wang, Xing Zhao, and Jianjie Zheng. "A New Second-Order Tristable Stochastic Resonance Method for Fault Diagnosis." Symmetry 11, no. 8 (August 1, 2019): 965. http://dx.doi.org/10.3390/sym11080965.

Abstract:
Vibration signals are used to diagnose faults of rolling bearings, which have a symmetric structure. Stochastic resonance (SR) has been widely applied to weak-signal feature extraction in recent years; it can exploit noise to enhance weak signals. However, the traditional SR method has poor performance, and it is difficult to determine the parameters of SR. Therefore, a new second-order tristable SR method (STSR), based on a new potential combining the classical bistable potential with the Woods-Saxon potential, is proposed in this paper. First, the envelope signal of the rolling bearing is taken as the input of the STSR. Then, the output signal-to-noise ratio (SNR) is used as the fitness function of the Seeker Optimization Algorithm (SOA) to optimize the parameters of the SR. Finally, the optimal parameters are used to configure the STSR system to enhance and extract weak signals of rolling bearings. Simulated and experimental signals are used to demonstrate the effectiveness of the STSR. The diagnosis results show that the proposed STSR method can obtain a higher output SNR and better filtering performance than traditional SR methods. It provides a new idea for the fault diagnosis of rotating machinery.
35

PELLEGRINO, TOMMASO. "SECOND-ORDER STOCHASTIC VOLATILITY ASYMPTOTICS AND THE PRICING OF FOREIGN EXCHANGE DERIVATIVES." International Journal of Theoretical and Applied Finance 23, no. 03 (May 2020): 2050021. http://dx.doi.org/10.1142/s0219024920500211.

Abstract:
We consider models for the pricing of foreign exchange derivatives, where the underlying asset volatility as well as the one for the foreign exchange rate are stochastic. Under this framework, singular perturbation methods have been used to derive first-order approximations for European option prices. In this paper, based on a previous result for the calibration and pricing of single underlying options, we derive the second-order approximation pricing formula in the two-dimensional case and we apply it to the pricing of foreign exchange options.
36

Rathinasamy, Anandaraman, and Priya Nair. "Asymptotic mean-square stability of weak second-order balanced stochastic Runge–Kutta methods for multi-dimensional Itô stochastic differential systems." Applied Mathematics and Computation 332 (September 2018): 276–303. http://dx.doi.org/10.1016/j.amc.2018.03.065.

37

Namachchivaya, N. S., and Gerard Leng. "Equivalence of Stochastic Averaging and Stochastic Normal Forms." Journal of Applied Mechanics 57, no. 4 (December 1, 1990): 1011–17. http://dx.doi.org/10.1115/1.2897619.

Abstract:
The equivalence of the methods of stochastic averaging and stochastic normal forms is demonstrated for systems under the effect of linear multiplicative and additive noise. It is shown that both methods lead to reduced systems with the same Markovian approximation. The key result is that the second-order stochastic terms have to be retained in the normal form computation. Examples showing applications to systems undergoing divergence and flutter instability are provided. Furthermore, it is shown that unlike stochastic averaging, stochastic normal forms can be used in the analysis of nilpotent systems to eliminate the stable modes. Finally, some results pertaining to stochastic Lorenz equations are presented.
38

Zhou, Jingcheng, Wei Wei, Ruizhi Zhang, and Zhiming Zheng. "Damped Newton Stochastic Gradient Descent Method for Neural Networks Training." Mathematics 9, no. 13 (June 29, 2021): 1533. http://dx.doi.org/10.3390/math9131533.

Abstract:
First-order methods such as stochastic gradient descent (SGD) have recently become popular for training deep neural networks (DNNs) with good generalization; however, they need a long training time. Second-order methods, which can lower the training time, are scarcely used because of the high cost of computing the second-order information. Thus, many works have approximated the Hessian matrix to cut the cost of computation, although the approximate Hessian matrix can have a large deviation. In this paper, we explore the convexity of the Hessian matrix of partial parameters and propose the damped Newton stochastic gradient descent (DN-SGD) and stochastic gradient descent damped Newton (SGD-DN) methods to train DNNs for regression problems with mean square error (MSE) and classification problems with cross-entropy loss (CEL). In contrast to other second-order methods, which estimate the Hessian matrix of all parameters, our methods accurately compute only a small part of the parameters, which greatly reduces the computational cost and makes convergence of the learning process much faster and more accurate than SGD and Adagrad. Several numerical experiments on real datasets were performed to verify the effectiveness of our methods for regression and classification problems.
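A generic damped Newton update with a diagonal curvature estimate can be sketched as follows; this illustrates the general idea of damping only, and is not the paper's exact DN-SGD/SGD-DN update:

```python
def damped_newton_diag_step(w, g, h_diag, lr=1.0, damping=1e-2):
    """Damped Newton step with a diagonal Hessian approximation:
    w_i <- w_i - lr * g_i / (max(h_i, 0) + damping).
    The damping term keeps the step well defined where the estimated
    curvature is tiny or negative. Generic sketch, not DN-SGD itself."""
    return [wi - lr * gi / (max(hi, 0.0) + damping)
            for wi, gi, hi in zip(w, g, h_diag)]

# With damping = 1: the first coordinate divides by 4 + 1; the second
# (zero curvature) falls back to a plain gradient step divided by 1.
w_new = damped_newton_diag_step([1.0, 1.0], [2.0, 2.0], [4.0, 0.0], damping=1.0)
```

Coordinates with large curvature thus take proportionally smaller steps, which is the benefit second-order information brings over plain SGD.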
39

Huang, Xunpeng, Xianfeng Liang, Zhengyang Liu, Lei Li, Yue Yu, and Yitan Li. "SPAN: A Stochastic Projected Approximate Newton Method." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1520–27. http://dx.doi.org/10.1609/aaai.v34i02.5511.

Abstract:
Second-order optimization methods have desirable convergence properties. However, the exact Newton method requires expensive computation for the Hessian and its inverse. In this paper, we propose SPAN, a novel approximate and fast Newton method. SPAN computes the inverse of the Hessian matrix via low-rank approximation and stochastic Hessian-vector products. Our experiments on multiple benchmark datasets demonstrate that SPAN outperforms existing first-order and second-order optimization methods in terms of the convergence wall-clock time. Furthermore, we provide a theoretical analysis of the per-iteration complexity, the approximation error, and the convergence rate. Both the theoretical analysis and experimental results show that our proposed method achieves a better trade-off between the convergence rate and the per-iteration efficiency.
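Hessian-vector products of the kind SPAN relies on can be formed without ever materialising the Hessian, for instance by a finite difference of gradients. This is a generic sketch of that primitive; SPAN's actual low-rank construction differs:

```python
def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    """Approximate the Hessian-vector product H v by a central finite
    difference of gradients: Hv ~ (g(w + eps*v) - g(w - eps*v)) / (2*eps).
    This needs only two gradient evaluations, never the full Hessian."""
    g_plus = grad_fn([wi + eps * vi for wi, vi in zip(w, v)])
    g_minus = grad_fn([wi - eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / (2 * eps) for a, b in zip(g_plus, g_minus)]

# Check on a quadratic: f(w) = w0^2 + 3*w1^2 has Hessian diag(2, 6),
# so H [1, 0.5] = [2, 3].
grad = lambda w: [2.0 * w[0], 6.0 * w[1]]
hv = hessian_vector_product(grad, [1.0, 1.0], [1.0, 0.5])
```

For a quadratic the gradient is linear, so the central difference is exact up to floating-point error.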
40

Leimkuhler, B., C. Matthews, and M. V. Tretyakov. "On the long-time integration of stochastic gradient systems." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 470, no. 2170 (October 8, 2014): 20140120. http://dx.doi.org/10.1098/rspa.2014.0120.

Abstract:
This article addresses the weak convergence of numerical methods for Brownian dynamics. Typical analyses of numerical methods for stochastic differential equations focus on properties such as the weak order which estimates the asymptotic (stepsize h → 0 ) convergence behaviour of the error of finite-time averages. Recently, it has been demonstrated, by study of Fokker–Planck operators, that a non-Markovian numerical method generates approximations in the long-time limit with higher accuracy order (second order) than would be expected from its weak convergence analysis (finite-time averages are first-order accurate). In this article, we describe the transition from the transient to the steady-state regime of this numerical method by estimating the time-dependency of the coefficients in an asymptotic expansion for the weak error, demonstrating that the convergence to second order is exponentially rapid in time. Moreover, we provide numerical tests of the theory, including comparisons of the efficiencies of the Euler–Maruyama method, the popular second-order Heun method, and the non-Markovian method.
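The two Markovian schemes compared in the abstract are easy to state side by side. Below is a single step of each for a scalar SDE dX = f(X) dt + g(X) dW, a sketch with hand-picked numbers rather than the paper's experiments:

```python
def euler_maruyama_step(f, g, x, dt, dw):
    """One step of the weak first-order Euler-Maruyama scheme."""
    return x + f(x) * dt + g(x) * dw

def heun_step(f, g, x, dt, dw):
    """One step of the Heun scheme: the drift is averaged between the
    current point and an Euler predictor (weak second order for
    additive noise, i.e. constant g)."""
    x_pred = x + f(x) * dt + g(x) * dw
    return x + 0.5 * (f(x) + f(x_pred)) * dt + g(x) * dw

# One step of each for dX = -X dt + 0.5 dW from x = 1 with a fixed
# Brownian increment dw = 0.4.
f, g = (lambda x: -x), (lambda x: 0.5)
x_em = euler_maruyama_step(f, g, 1.0, 0.1, 0.4)
x_heun = heun_step(f, g, 1.0, 0.1, 0.4)
```

The schemes differ only in how the drift is treated over the step, which is where the extra weak order comes from.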
41

Rathinasamy, A., and K. Balachandran. "Mean-square stability of second-order Runge–Kutta methods for multi-dimensional linear stochastic differential systems." Journal of Computational and Applied Mathematics 219, no. 1 (September 2008): 170–97. http://dx.doi.org/10.1016/j.cam.2007.07.019.

42

Tang, Xiao, and Aiguo Xiao. "Asymptotically optimal approximation of some stochastic integrals and its applications to the strong second-order methods." Advances in Computational Mathematics 45, no. 2 (October 24, 2018): 813–46. http://dx.doi.org/10.1007/s10444-018-9638-0.

43

Liu, Yan, Maojun Zhang, Zhiwei Zhong, and Xiangrong Zeng. "AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization." Computational Intelligence and Neuroscience 2021 (November 10, 2021): 1–11. http://dx.doi.org/10.1155/2021/5790608.

Abstract:
In this work, we introduce AdaCN, a novel adaptive cubic Newton method for nonconvex stochastic optimization. AdaCN dynamically captures the curvature of the loss landscape via a diagonally approximated Hessian plus the norm of the difference between the previous two estimates. It requires at most first-order gradients and updates with linear complexity in both time and memory. To reduce the variance introduced by the stochastic nature of the problem, AdaCN uses the first and second moments to implement an exponential moving average on the iteratively updated stochastic gradients and approximated stochastic Hessians, respectively. We validate AdaCN in extensive experiments, showing that it outperforms other stochastic first-order methods (including SGD, Adam, and AdaBound) and a stochastic quasi-Newton method (Apollo) in terms of both convergence speed and generalization performance.
44

Tas, Oktay, Farshad Mirzazadeh Barijough, and Umut Ugurlu. "A TEST OF SECOND-ORDER STOCHASTIC DOMINANCE WITH DIFFERENT WEIGHTING METHODS: EVIDENCE FROM BIST-30 and DJIA." Pressacademia 4, no. 4 (December 23, 2015): 723. http://dx.doi.org/10.17261/pressacademia.2015414538.

45

Vilmart, Gilles. "Weak Second Order Multirevolution Composition Methods for Highly Oscillatory Stochastic Differential Equations with Additive or Multiplicative Noise." SIAM Journal on Scientific Computing 36, no. 4 (January 2014): A1770–A1796. http://dx.doi.org/10.1137/130935331.

46

Debrabant, Kristian, and Andreas Rößler. "Families of efficient second order Runge–Kutta methods for the weak approximation of Itô stochastic differential equations." Applied Numerical Mathematics 59, no. 3-4 (March 2009): 582–94. http://dx.doi.org/10.1016/j.apnum.2008.03.012.

47

LÜTKEBOHMERT, EVA, and LYDIENNE MATCHIE. "VALUE-AT-RISK COMPUTATIONS IN STOCHASTIC VOLATILITY MODELS USING SECOND-ORDER WEAK APPROXIMATION SCHEMES." International Journal of Theoretical and Applied Finance 17, no. 01 (February 2014): 1450004. http://dx.doi.org/10.1142/s0219024914500046.

Abstract:
We explore the class of second-order weak approximation schemes (cubature methods) for the numerical simulation of joint default probabilities in credit portfolios where the firm's asset value processes are assumed to follow the multivariate Heston stochastic volatility model. Correlation between firms' asset processes is reflected by the dependence on a common set of underlying risk factors. In particular, we consider the Ninomiya–Victoir algorithm and we study the application of this method for the computation of value-at-risk and expected shortfall. Numerical simulations for these quantities for some exogenous portfolios demonstrate the numerical efficiency of the method.
48

Luo, Zhijian, and Yuntao Qian. "Stochastic sub-sampled Newton method with variance reduction." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950041. http://dx.doi.org/10.1142/s0219691319500413.

Abstract:
Stochastic optimization for large-scale machine learning problems has developed dramatically since stochastic gradient methods with the variance reduction technique were introduced. Several stochastic second-order methods, which approximate curvature information by the Hessian in the stochastic setting, have been proposed as improvements. In this paper, we introduce a Stochastic Sub-Sampled Newton method with Variance Reduction (S2NMVR), which incorporates the sub-sampled Newton method and the stochastic variance-reduced gradient. For many machine learning problems, the linear-time Hessian-vector product provides evidence of the computational efficiency of S2NMVR. We then develop two variations of S2NMVR that preserve the estimate of the Hessian inverse and decrease the computational cost of the Hessian-vector product for nonlinear problems.
49

Lamrhari, D., D. Sarsri, and M. Rahmoune. "Component mode synthesis and stochastic perturbation method for dynamic analysis of large linear finite element with uncertain parameters." Journal of Mechanical Engineering and Sciences 14, no. 2 (June 22, 2020): 6753–69. http://dx.doi.org/10.15282/jmes.14.2.2020.17.0529.

Abstract:
In this paper, a method is proposed to calculate the first two moments (mean and variance) of the stochastic time response, as well as the frequency functions, of large FE models with probabilistic uncertainties in the physical parameters. This method is based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The dynamic response of the stochastic finite element system can be analysed in the frequency domain using the frequency transfer functions, and in the time domain by direct integration of the equations of motion using numerical procedures. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical applications have been developed to highlight the effectiveness of the method for analysing the stochastic response of large structures.
50

Garcia-Montoya, Nina, Julienne Kabre, Jorge E. Macías-Díaz, and Qin Sheng. "Second-Order Semi-Discretized Schemes for Solving Stochastic Quenching Models on Arbitrary Spatial Grids." Discrete Dynamics in Nature and Society 2021 (May 5, 2021): 1–19. http://dx.doi.org/10.1155/2021/5530744.

Abstract:
Reaction-diffusion-advection equations provide precise interpretations for many important phenomena in complex interactions between natural and artificial systems. This paper studies second-order semi-discretizations for the numerical solution of reaction-diffusion-advection equations modeling quenching types of singularities occurring in numerous applications. Our investigations particularly focus on cases where nonuniform spatial grids are utilized. Detailed derivations and analysis are accomplished. Easy-to-use and highly effective second-order schemes are acquired. Computational experiments are presented to illustrate our results as well as to demonstrate the viability and capability of the new methods for solving singular quenching problems on arbitrary grid platforms.
