Selected scientific literature on the topic "Stochastic Newton algorithms"


Consult the list of current articles, books, theses, conference proceedings, and other relevant scholarly sources on the topic "Stochastic Newton algorithms".


Journal articles on the topic "Stochastic Newton algorithms"

1

Kovacevic, Ivana, Branko Kovacevic, and Zeljko Djurovic. "On strong consistency of a class of recursive stochastic Newton-Raphson type algorithms with application to robust linear dynamic system identification". Facta universitatis - series: Electronics and Energetics 21, no. 1 (2008): 1–21. http://dx.doi.org/10.2298/fuee0801001k.

Abstract:
The recursive stochastic algorithms for estimating the parameters of linear discrete-time dynamic systems in the presence of disturbance uncertainty have been considered in the paper. Problems related to the construction of min-max optimal recursive algorithms are demonstrated. In addition, the robustness of the proposed algorithms has been addressed. Since the min-max optimal solution cannot be achieved in practice, an approximate optimal solution based on a recursive stochastic Newton-Raphson type procedure is suggested. The convergence of the proposed practically applicable robustified recursive algorithm is established theoretically using martingale theory. Both theoretical and experimental analyses related to the practical robustness of the proposed algorithm are also included.
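Illustrative note: the class of updates summarized in this abstract can be sketched in a few lines. The Python fragment below is an editorial illustration only (the Huber-type influence function, gains, and ARX-style model are assumptions, not the authors' algorithm); it shows a robustified recursive Newton-Raphson/least-squares-type parameter update in which the running inverse Hessian is propagated with the Sherman-Morrison identity and outlying prediction errors are clipped.

    import numpy as np

    def huber_psi(e, c=1.345):
        # Bounded influence function: identity for small errors, clipped for outliers.
        return np.clip(e, -c, c)

    def robust_recursive_newton(phi_seq, y_seq, c=1.345, delta=1e3):
        # Recursive Newton-Raphson-type estimate of theta in y_k ~ phi_k' theta + noise.
        n = phi_seq.shape[1]
        theta = np.zeros(n)
        P = delta * np.eye(n)                    # running inverse of the approximate Hessian
        for phi, y in zip(phi_seq, y_seq):
            e = y - phi @ theta                  # prediction error
            Pphi = P @ phi
            P -= np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)   # Sherman-Morrison update
            theta += P @ phi * huber_psi(e, c)   # robustified Newton-type correction
        return theta

    # Toy example: y_k = 2*u_k - u_{k-1} + heavy-tailed noise (illustrative data only).
    rng = np.random.default_rng(0)
    u = rng.normal(size=500)
    phi = np.column_stack([u[1:], u[:-1]])
    y = phi @ np.array([2.0, -1.0]) + 0.1 * rng.standard_t(df=2, size=499)
    print(robust_recursive_newton(phi, y))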
2

Yousefi, Mahsa, and Ángeles Martínez. "Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods". Algorithms 16, no. 10 (October 20, 2023): 490. http://dx.doi.org/10.3390/a16100490.

Abstract:
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton methods that are able to construct Hessian approximations using only gradient information. In this work, we study the performance of stochastic quasi-Newton algorithms for training deep neural networks. We consider two well-known quasi-Newton updates, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the symmetric rank one (SR1). This study fills a gap concerning the real performance of both updates in the minibatch setting and analyzes whether more efficient training can be obtained when using the more robust BFGS update or the cheaper SR1 formula, which—allowing for indefinite Hessian approximations—can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study that includes many aspects affecting performance, like batch normalization, the network architecture, the limited memory parameter or the batch size. Our results show that stochastic quasi-Newton algorithms are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer, run with the optimal combination of its numerous hyperparameters, and the stochastic second-order trust-region STORM algorithm.
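Illustrative note: a minimal sketch of the kind of minibatch limited-memory quasi-Newton update studied in this paper is given below. It shows the standard L-BFGS two-loop recursion driven by stochastic gradients on a toy least-squares problem; the memory size, step size, and objective are assumptions for illustration, and the authors' trust-region and SR1 variants are not reproduced here.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list):
        # Two-loop recursion: approximate -H^{-1} grad from the stored (s, y) pairs.
        q = grad.copy()
        alphas = []
        for s, y in reversed(list(zip(s_list, y_list))):
            a = (s @ q) / (y @ s)
            q -= a * y
            alphas.append(a)
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1]) if s_list else 1.0
        r = gamma * q
        for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
            b = (y @ r) / (y @ s)
            r += (a - b) * s
        return -r

    def minibatch_grad(w, X, b, batch):
        # Gradient of 0.5*||X w - b||^2 averaged over one minibatch.
        Xb, bb = X[batch], b[batch]
        return Xb.T @ (Xb @ w - bb) / len(batch)

    rng = np.random.default_rng(1)
    X, b = rng.normal(size=(1000, 20)), rng.normal(size=1000)
    w, s_list, y_list = np.zeros(20), [], []
    memory, lr = 10, 0.5
    for it in range(200):
        batch = rng.choice(1000, size=64, replace=False)
        g = minibatch_grad(w, X, b, batch)
        d = lbfgs_direction(g, s_list, y_list) if s_list else -g
        w_new = w + lr * d
        g_new = minibatch_grad(w_new, X, b, batch)   # same batch keeps the curvature pair consistent
        s_list.append(w_new - w); y_list.append(g_new - g)
        if len(s_list) > memory:
            s_list.pop(0); y_list.pop(0)
        w = w_new
    print(0.5 * np.mean((X @ w - b) ** 2))           # final least-squares loss of the minibatch quasi-Newton fit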
3

Forneron, Jean-Jacques, and Serena Ng. "Estimation and Inference by Stochastic Optimization: Three Examples". AEA Papers and Proceedings 111 (May 1, 2021): 626–30. http://dx.doi.org/10.1257/pandp.20211038.

Abstract:
This paper illustrates two algorithms designed in Forneron and Ng (2020): the resampled Newton-Raphson (rNR) and resampled quasi-Newton (rQN) algorithms, which speed up estimation and bootstrap inference for structural models. An empirical application to BLP shows that computation time decreases from nearly five hours with the standard bootstrap to just over one hour with rNR and to only 15 minutes using rQN. A first Monte Carlo exercise illustrates the accuracy of the method for estimation and inference in a probit IV regression. A second exercise additionally illustrates statistical efficiency gains relative to standard estimation for simulation-based estimation using a dynamic panel regression example.
4

Cao, Pengfei, and Xionglin Luo. "Performance analysis of multi-innovation stochastic Newton recursive algorithms". Digital Signal Processing 56 (September 2016): 15–23. http://dx.doi.org/10.1016/j.dsp.2016.05.005.

5

Ghoshdastidar, Debarghya, Ambedkar Dukkipati, and Shalabh Bhatnagar. "Newton-based stochastic optimization using q-Gaussian smoothed functional algorithms". Automatica 50, no. 10 (October 2014): 2606–14. http://dx.doi.org/10.1016/j.automatica.2014.08.021.

6

Shao, Wei, and Guangbao Guo. "Multiple-Try Simulated Annealing Algorithm for Global Optimization". Mathematical Problems in Engineering 2018 (July 17, 2018): 1–11. http://dx.doi.org/10.1155/2018/9248318.

Abstract:
Simulated annealing is a widely used algorithm for the computation of global optimization problems in computational chemistry and industrial engineering. However, global optimum values cannot always be reached by simulated annealing without a logarithmic cooling schedule. In this study, we propose a new stochastic optimization algorithm, i.e., simulated annealing based on the multiple-try Metropolis method, which combines simulated annealing and the multiple-try Metropolis algorithm. The proposed algorithm functions with a rapidly decreasing schedule, while guaranteeing global optimum values. Simulated and real data experiments including a mixture normal model and nonlinear Bayesian model indicate that the proposed algorithm can significantly outperform other approximated algorithms, including simulated annealing and the quasi-Newton method.
7

Gao, Guohua, Gaoming Li, and Albert Coburn Reynolds. "A Stochastic Optimization Algorithm for Automatic History Matching". SPE Journal 12, no. 02 (June 1, 2007): 196–208. http://dx.doi.org/10.2118/90065-pa.

Abstract:
Summary: For large-scale history-matching problems, optimization algorithms which require only the gradient of the objective function and avoid explicit computation of the Hessian appear to be the best approach. Unfortunately, such algorithms have not been extensively used in practice because computation of the gradient of the objective function by the adjoint method requires explicit knowledge of the simulator numerics and expertise in simulation development. Here we apply the simultaneous perturbation stochastic approximation (SPSA) method to history match multiphase flow production data. SPSA, which has recently attracted considerable international attention in a variety of disciplines, can be easily combined with any reservoir simulator to do automatic history matching. The SPSA method uses stochastic simultaneous perturbation of all parameters to generate a downhill search direction at each iteration. The theoretical basis for this probabilistic perturbation is that the expectation of the search direction generated is the steepest descent direction. We present modifications for improvement in the convergence behavior of the SPSA algorithm for history matching and compare its performance to the steepest descent, gradual deformation and LBFGS algorithms. Although the convergence properties of the SPSA algorithm are not nearly as good as our most recent implementation of a quasi-Newton method (LBFGS), the SPSA algorithm is not simulator specific and it requires only a few hours of work to combine SPSA with any commercial reservoir simulator to do automatic history matching. To the best of our knowledge, this is the first introduction of SPSA into the history matching literature. Thus, we make considerable effort to put it in a proper context.
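Illustrative note: the SPSA estimator at the heart of this paper needs only two objective evaluations per iteration, independently of the parameter dimension. The sketch below is a generic textbook-style implementation under assumed gain sequences and a toy noisy objective, not the authors' history-matching code.

    import numpy as np

    def spsa_minimize(f, theta0, iters=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
        # Minimize a noisy objective f using simultaneous perturbation gradient estimates.
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float).copy()
        for k in range(1, iters + 1):
            ak, ck = a / k**alpha, c / k**gamma               # standard decaying gain sequences
            delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation of all components
            g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck * delta)
            theta -= ak * g_hat                               # g_hat is, in expectation, close to the true gradient
        return theta

    # Toy noisy objective standing in for a history-matching misfit (illustrative only).
    noise_rng = np.random.default_rng(1)
    noisy_quadratic = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * noise_rng.normal()
    print(spsa_minimize(noisy_quadratic, np.zeros(5)))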
8

Wang, Qing, and Yang Cao. "Stochastic Finite Element Method for Nonlinear Dynamic Problem with Random Parameters". Advanced Materials Research 189-193 (February 2011): 1348–57. http://dx.doi.org/10.4028/www.scientific.net/amr.189-193.1348.

Abstract:
Several algorithms were proposed relating to the development of a framework of the perturbation-based stochastic finite element method (PSFEM) for nonlinear dynamic problems with random parameters; for this purpose, based on the stochastic virtual work principle, some algorithms and a framework related to SFEM have been studied. An interpolation method was used to discretize the random fields, which is based on representing the random field in terms of an interpolation rule involving a set of deterministic shape functions. The direct integration Wilson-θ method in conjunction with a Newton-Raphson scheme was adopted to solve the finite element equations. Numerical examples were compared with the Monte-Carlo simulation method to show that the approaches proposed herein are accurate and effective for the nonlinear dynamic analysis of structures with random parameters.
9

Wang, Yanshan, In-Chan Choi, and Hongfang Liu. "Generalized ensemble model for document ranking in information retrieval". Computer Science and Information Systems 14, no. 1 (2017): 123–51. http://dx.doi.org/10.2298/csis160229042w.

Abstract:
A generalized ensemble model (gEnM) for document ranking is proposed in this paper. The gEnM linearly combines the document retrieval models and tries to retrieve relevant documents at high positions. In order to obtain the optimal linear combination of multiple document retrieval models or rankers, an optimization program is formulated by directly maximizing the mean average precision. Both supervised and unsupervised learning algorithms are presented to solve this program. For the supervised scheme, two approaches are considered based on the data setting, namely batch and online setting. In the batch setting, we propose a revised Newton's algorithm, gEnM.BAT, by approximating the derivative and Hessian matrix. In the online setting, we advocate a stochastic gradient descent (SGD) based algorithm, gEnM.ON. As for the unsupervised scheme, an unsupervised ensemble model (UnsEnM) by iteratively co-learning from each constituent ranker is presented. Experimental study on benchmark data sets verifies the effectiveness of the proposed algorithms. Therefore, with appropriate algorithms, the gEnM is a viable option in diverse practical information retrieval applications.
10

Clayton, R. P., and R. F. Martinez-Botas. "Application of generic algorithms in aerodynamic optimisation design procedures". Aeronautical Journal 108, no. 1090 (December 2004): 611–20. http://dx.doi.org/10.1017/s0001924000000440.

Abstract:
Direct optimisation techniques using different methods are presented and compared for the solution of two common flows: a two dimensional diffuser and a drag minimisation problem of a fixed area body. The methods studied are a truncated Newton algorithm (gradient method), a simplex approach (direct search method) and a genetic algorithm (stochastic method). The diffuser problem has a known solution supported by experimental data, it has one design performance measure (the pressure coefficient) and two design variables. The fixed area body also has one performance measure (the drag coefficient), but this time there are four design variables; no experimental data is available, this computation is performed to assess the speed/progression of solution. In all cases the direct search approach (simplex method) required significantly smaller number of evaluations than the generic algorithm method. The simplest approach, the gradient method (Newton) performed equally to the simplex approach for the diffuser problem but it was unable to provide a solution to the four-variable problem of a fixed area body drag minimisation. The level of robustness obtained by the use of generic algorithm is in principle superior to the other methods, but a large price in terms of evaluations has to be paid.

Theses / dissertations on the topic "Stochastic Newton algorithms"

1

Lu, Wei. "Méthodes stochastiques du second ordre pour le traitement séquentiel de données massives". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR13.

Abstract:
With the rapid development of technologies and the acquisition of big data, methods capable of processing data sequentially (online) have become indispensable. Among these methods, stochastic gradient algorithms have been established for estimating the minimizer of a function expressed as the expectation of a random function. Although they have become essential, these algorithms encounter difficulties when the problem is ill-conditioned. In this thesis, we focus on second-order stochastic algorithms, such as those of the Newton type, and their applications to various statistical and optimization problems. After establishing theoretical foundations and exposing the motivations that lead us to explore stochastic Newton algorithms, we develop the various contributions of this thesis. The first contribution concerns the study and development of stochastic Newton algorithms for ridge linear regression and ridge logistic regression. These algorithms are based on the Riccati formula (Sherman-Morrison) to recursively estimate the inverse of the Hessian. As the acquisition of big data is generally accompanied by a contamination of the latter, in a second contribution, we focus on the online estimation of the geometric median, which is a robust indicator, i.e., not very sensitive to the presence of atypical data. More specifically, we propose a new stochastic Newton estimator to estimate the geometric median. In the first two contributions, the estimators of the Hessians' inverses are constructed using the Riccati formula, but this is only possible for certain functions. Thus, our third contribution introduces a new Robbins-Monro type method for online estimation of the Hessian's inverse, allowing us then to develop universal stochastic Newton algorithms. Finally, our last contribution focuses on Full Adagrad type algorithms, where the difficulty lies in the fact that there is an adaptive step based on the square root of the inverse of the gradient's covariance. We thus propose a Robbins-Monro type algorithm to estimate this matrix, allowing us to propose a recursive approach for Full AdaGrad and its streaming version, with reduced computational costs. For all the new estimators we propose, we establish their convergence rates as well as their asymptotic efficiency. Moreover, we illustrate the efficiency of these algorithms using numerical simulations and by applying them to real data
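Illustrative note: to make the Riccati (Sherman-Morrison) recursion mentioned in this abstract concrete, the sketch below shows an online stochastic Newton iteration for ridge-regularized logistic regression in which the inverse of the running Hessian estimate is updated one observation at a time at O(d²) cost. The step handling, regularization, and curvature floor are assumptions for illustration, not the thesis' exact algorithm.

    import numpy as np

    def stochastic_newton_logistic(X, y, lam=1.0):
        # S_k = lam*I + sum_{j<=k} p_j(1-p_j) x_j x_j' plays the role of the Hessian;
        # its inverse is maintained recursively with the Sherman-Morrison formula.
        n, d = X.shape
        theta = np.zeros(d)
        S_inv = np.eye(d) / lam
        for k in range(n):
            x, t = X[k], y[k]
            p = 1.0 / (1.0 + np.exp(-x @ theta))
            w = max(p * (1.0 - p), 1e-6)        # curvature weight, kept away from zero
            Sx = S_inv @ x
            S_inv -= np.outer(Sx, Sx) * w / (1.0 + w * (x @ Sx))   # Sherman-Morrison update
            theta -= S_inv @ ((p - t) * x)      # Newton-type step with the running inverse
        return theta

    # Toy check on synthetic data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    true_theta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-X @ true_theta))).astype(float)
    print(stochastic_newton_logistic(X, y))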
2

Stewart, Alistair Mark. "Efficient algorithms for infinite-state recursive stochastic models and Newton's method". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10001.

Abstract:
Some well-studied infinite-state stochastic models give rise to systems of nonlinear equations. These systems of equations have solutions that are probabilities, generally probabilities of termination in the model. We are interested in finding efficient, preferably polynomial time, algorithms for calculating probabilities associated with these models. The chief tool we use to solve systems of polynomial equations will be Newton’s method as suggested by [EY09]. The main contribution of this thesis is to the analysis of this and related algorithms. We give polynomial-time algorithms for calculating probabilities for broad classes of models for which none were known before. Stochastic models that give rise to such systems of equations include such classic and heavily-studied models as Multi-type Branching Processes, Stochastic Context-Free Grammars (SCFGs) and Quasi Birth-Death Processes. We also consider models that give rise to infinite-state Markov Decision Processes (MDPs) by giving algorithms for approximating optimal probabilities and finding policies that give probabilities close to the optimal probability, in several classes of infinite-state MDPs. Our algorithms for analysing infinite-state MDPs rely on a non-trivial generalization of Newton’s method that works for the max/min polynomial systems that arise as Bellman optimality equations in these models. For SCFGs, which are used in statistical natural language processing, in addition to approximating termination probabilities, we analyse algorithms for approximating the probability that a grammar produces a given string, or produces a string in a given regular language. In most cases, we show that we can calculate an approximation to the relevant probability in time polynomial in the size of the model and the number of bits of desired precision. We also consider more general systems of monotone polynomial equations. For such systems we cannot give a polynomial-time algorithm, which pre-existing hardness results render unlikely, but we can still give an algorithm with a complexity upper bound which is exponential only in some parameters that are likely to be bounded for the monotone polynomial equations that arise for many interesting stochastic models.
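Illustrative note: a concrete instance of the nonlinear systems discussed above is the termination (extinction) probability of a multi-type branching process, which is the least fixed point of a monotone polynomial system x = P(x). The sketch below applies Newton's method started from x = 0 to a made-up two-type example; the offspring polynomials are assumptions chosen only so that the least fixed point lies strictly inside [0,1]².

    import numpy as np

    def P(x):
        # Offspring generating polynomials of a toy two-type branching process (coefficients sum to 1).
        x1, x2 = x
        return np.array([0.6 * x1 * x2 + 0.1 * x2 + 0.3,
                         0.5 * x1 * x1 + 0.2 * x1 * x2 + 0.3])

    def JP(x):
        # Jacobian of P.
        x1, x2 = x
        return np.array([[0.6 * x2,            0.6 * x1 + 0.1],
                         [1.0 * x1 + 0.2 * x2, 0.2 * x1      ]])

    def newton_least_fixed_point(P, JP, dim, iters=20):
        # Newton iteration for x = P(x), started below the least fixed point at x = 0:
        # solve (I - P'(x)) dx = P(x) - x and move to x + dx.
        x = np.zeros(dim)
        for _ in range(iters):
            dx = np.linalg.solve(np.eye(dim) - JP(x), P(x) - x)
            x = x + dx
        return x

    q = newton_least_fixed_point(P, JP, 2)
    print(q, np.allclose(q, P(q)))   # q approximates the termination probabilities of the two types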
3

Lakshmanan, K. "Online Learning and Simulation Based Algorithms for Stochastic Optimization". Thesis, 2012. http://etd.iisc.ac.in/handle/2005/3245.

Abstract:
In many optimization problems, the relationship between the objective and parameters is not known. The objective function itself may be stochastic such as a long-run average over some random cost samples. In such cases finding the gradient of the objective is not possible. It is in this setting that stochastic approximation algorithms are used. These algorithms use some estimates of the gradient and are stochastic in nature. Amongst gradient estimation techniques, Simultaneous Perturbation Stochastic Approximation (SPSA) and Smoothed Functional(SF) scheme are widely used. In this thesis we have proposed a novel multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for unconstrained as well as constrained optimization. The algorithm uses the smoothed functional scheme for estimating the gradient and the quasi-Newton method to solve the optimization problem. The algorithm is shown to converge with probability one. We have also provided here experimental results on the problem of optimal routing in a multi-stage network of queues. Policies like Join the Shortest Queue or Least Work Left assume knowledge of the queue length values that can change rapidly or hard to estimate. If the only information available is the expected end-to-end delay as with our case, such policies cannot be used. The QN-SF based probabilistic routing algorithm uses only the total end-to-end delay for tuning the probabilities. We observe from the experiments that the QN-SF algorithm has better performance than the gradient and Jacobi versions of Newton based smoothed functional algorithms. Next we consider constrained routing in a similar queueing network. We extend the QN-SF algorithm to this case. We study the convergence behavior of the algorithm and observe that the constraints are satisfied at the point of convergence. We provide experimental results for the constrained routing setup as well. Next we study reinforcement learning algorithms which are useful for solving Markov Decision Process(MDP) when the precise information on transition probabilities is not known. When the state, and action sets are very large, it is not possible to store all the state-action tuples. In such cases, function approximators like neural networks have been used. The popular Q-learning algorithm is known to diverge when used with linear function approximation due to the ’off-policy’ problem. Hence developing stable learning algorithms when used with function approximation is an important problem. We present in this thesis a variant of Q-learning with linear function approximation that is based on two-timescale stochastic approximation. The Q-value parameters for a given policy in our algorithm are updated on the slower timescale while the policy parameters themselves are updated on the faster scale. We perform a gradient search in the space of policy parameters. Since the objective function and hence the gradient are not analytically known, we employ the efficient one-simulation simultaneous perturbation stochastic approximation(SPSA) gradient estimates that employ Hadamard matrix based deterministic perturbations. Our algorithm has the advantage that, unlike Q-learning, it does not suffer from high oscillations due to the off-policy problem when using function approximators. Whereas it is difficult to prove convergence of regular Q-learning with linear function approximation because of the off-policy problem, we prove that our algorithm which is on-policy is convergent. 
Numerical results on a multi-stage stochastic shortest path problem show that our algorithm exhibits significantly better performance and is more robust as compared to Q-learning. Future work would be to compare it with other policy-based reinforcement learning algorithms. Finally, we develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process(MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multistage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance on this setting and converges to a feasible point.
4

Mondal, Akash. "Stochastic Optimization And Its Application In Reinforcement Learning". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6086.

Abstract:
Numerous engineering fields, such as transportation systems, manufacturing, communication networks, healthcare, and finance, frequently encounter problems requiring optimization in the presence of uncertainty. Simulation-based optimization is a workable substitute for accurate analytical solutions because of the numerous input variables and the need for a system model. Smoothed functional (SF) algorithms belong to the class of simultaneous perturbation methods that have been found useful for stochastic optimization problems, particularly in high-dimensional parameter spaces. SF methods update the gradient of the objective using function measurements involving parameters that are perturbed simultaneously along all component directions. The SF gradient procedure was originally developed by Katkovnik and Kulchitsky. This results in the objective function getting smoothed because of the convolution. The objective function smoothing that results from the convolution with a smoothing density function can help the algorithm converge to a global minimum or a point close to it. First, we present a stochastic gradient algorithm for minimizing a smooth objective function that is an expectation over noisy cost samples and only the latter are observed for any given parameter. Our algorithm employs a gradient estimation scheme with random perturbations, which are formed using the truncated Cauchy distribution from the $\delta$ sphere. We analyze the bias and variance of the proposed gradient estimator. Our algorithm is found to be particularly useful in the case when the objective function is non-convex and the parameter dimension is high. From an asymptotic convergence analysis, we establish that our algorithm converges almost surely to the set of stationary points of the objective function and obtains the asymptotic convergence rate. We also show that our algorithm avoids unstable equilibria, implying convergence to local minima. Further, we perform a non-asymptotic convergence analysis of our algorithm. In particular, we establish here a non-asymptotic bound for finding an $\epsilon$-stationary point of the non-convex objective function. Finally, we demonstrate numerically through simulations that our algorithm outperforms GSF, SPSA, and RDSA by a significant margin over a few non-convex settings, and we further validate its performance over convex (noisy) objectives. Next, we consider the problem of control in the setting of reinforcement learning (RL), where model information is not available. Policy gradient algorithms are a popular solution approach for this problem and are usually shown to converge to a stationary point of the value function. We propose two policy Newton algorithms that incorporate cubic regularization. Both algorithms employ the likelihood ratio method to form estimates of the gradient and Hessian of the value function using sample trajectories. The first algorithm requires an exact solution of the cubic regularized problem in each iteration, while the second algorithm employs an efficient gradient descent-based approximation to the cubic regularized problem. We establish convergence of our proposed algorithms to a second-order stationary point (SOSP) of the value function, which results in the avoidance of traps in the form of saddle points. In particular, the sample complexity of our algorithms towards finding an $\epsilon$-SOSP is $\mathcal{O}(\epsilon^{-3.5})$, and this is a significant improvement over the previous state-of-the-art sample complexity of $\mathcal{O}(\epsilon^{-4.5})$.
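Illustrative note: the smoothed functional gradient estimate described above can be written in a few lines: the gradient of the smoothed objective is estimated from function values at randomly perturbed parameters. The sketch below uses Gaussian perturbations for simplicity (the thesis studies truncated-Cauchy perturbations on the δ-sphere); the smoothing parameter, step size, and toy objective are assumptions.

    import numpy as np

    def sf_gradient(f, theta, beta=0.1, n_samples=10, rng=None):
        # One-sided smoothed functional estimate with Gaussian perturbations:
        # grad ~ E[eta * (f(theta + beta*eta) - f(theta))] / beta, with eta ~ N(0, I).
        rng = rng or np.random.default_rng()
        g = np.zeros_like(theta)
        f0 = f(theta)
        for _ in range(n_samples):
            eta = rng.normal(size=theta.shape)
            g += eta * (f(theta + beta * eta) - f0)
        return g / (beta * n_samples)

    def sf_descent(f, theta0, iters=500, step=0.05, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float).copy()
        for _ in range(iters):
            theta -= step * sf_gradient(f, theta, beta=beta, rng=rng)
        return theta

    # Noisy non-convex toy objective standing in for the expectation over cost samples.
    noise_rng = np.random.default_rng(2)
    f = lambda x: np.sum(x ** 2) + 0.5 * np.sum(np.sin(5 * x)) + 0.01 * noise_rng.normal()
    print(sf_descent(f, np.full(4, 2.0)))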
5

Gupta, Saurabh. "Development Of Deterministic And Stochastic Algorithms For Inverse Problems Of Optical Tomography". Thesis, 2013. https://etd.iisc.ac.in/handle/2005/2608.

Abstract:
Stable and computationally efficient reconstruction methodologies are developed to solve two important medical imaging problems which use near-infrared (NIR) light as the source of interrogation, namely, diffuse optical tomography (DOT) and one of its variations, ultrasound-modulated optical tomography (UMOT). Since in both these imaging modalities the system matrices are ill-conditioned owing to insufficient and noisy data, the emphasis in this work is to develop robust stochastic filtering algorithms which can handle measurement noise and also account for inaccuracies in forward models through an appropriate assignment of a process noise. However, we start with demonstration of speeding of a Gauss-Newton (GN) algorithm for DOT so that a video-rate reconstruction from data recorded on a CCD camera is rendered feasible. Towards this, a computationally efficient linear iterative scheme is proposed to invert the normal equation of a Gauss-Newton scheme in the context of recovery of absorption coefficient distribution from DOT data, which involved the singular value decomposition (SVD) of the Jacobian matrix appearing in the update equation. This has sufficiently speeded up the inversion that a video-rate recovery of the time-evolving absorption coefficient distribution is demonstrated from experimental data. The SVD-based algorithm has made the number of operations in image reconstruction to be O(NN²) rather than O(NN³). The rest of the algorithms are based on different forms of stochastic filtering wherein we arrive at a mean-square estimate of the parameters through computing their joint probability distributions conditioned on the measurement up to the current instant. Under this, the first algorithm developed uses a Bootstrap particle filter which also uses a quasi-Newton direction within. Since keeping track of the Newton direction necessitates repetitive computation of the Jacobian, for all particle locations and for all time steps, to make the recovery computationally feasible, we devised a faster update of the Jacobian. It is demonstrated, through analytical reasoning and numerical simulations, that the proposed scheme not only accelerates convergence but also yields substantially reduced sample variance in the estimates vis-à-vis the conventional BS filter. Both accelerated convergence and reduced sample variance in the estimates are demonstrated in DOT optical parameter recovery using simulated and experimental data. In the next demonstration a derivative-free variant of the pseudo-dynamic ensemble Kalman filter (PD-EnKF) is developed for DOT wherein the size of the unknown parameter is reduced by representing the inhomogeneities through simple geometrical shapes. Also the optical parameter fields within the inhomogeneities are approximated via an expansion based on the circular harmonics (CH) (Fourier basis functions). The EnKF is then used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the Pseudo-Dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion.
Using fictitious, low-intensity Wiener noise processes in suitably constructed ‘measurement’ equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. In our numerical simulations we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder with background absorption and (reduced) scattering coefficients chosen as μa = 0.01 mm⁻¹ and μs′ = 1.0 mm⁻¹ respectively. We also assume μa = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μa = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those through a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. The superiority of a modified version of the PD-EnKF, which uses an ensemble square root filter, is also demonstrated in the context of UMOT by recovering the distribution of mean-squared amplitude of vibration, related to the Young’s modulus, in the ultrasound focal volume. Since the ability of a coherent light probe to pick up the overall optical path-length change is limited to modulo an optical wavelength, the individual displacements suffered owing to the US forcing should be very small, say within a few angstroms. The sensitivity of modulation depth to changes in these small displacements could be very small, especially when the ROI is far removed from the source and detector. The contrast recovery of the unknown distribution in such cases could be seriously impaired whilst using a quasi-Newton scheme (e.g. the GN scheme) which crucially makes use of the derivative information. The derivative-free gain-based Monte Carlo filter not only remedies this deficiency, but also provides a regularization insensitive and computationally competitive alternative to the GN scheme. The inherent ability of a stochastic filter in accommodating the model error owing to a diffusion approximation of the correlation transport may be cited as an added advantage in the context of the UMOT inverse problem. Finally, to speed up the forward solve of the partial differential equation (PDE) modeling photon transport in the context of UMOT, for which the PDE has time as a parameter, a spectral decomposition of the PDE operator is demonstrated. This allows the computation of the time-dependent forward solution in terms of the eigenfunctions of the PDE operator, which has speeded up the forward solution, which in turn has rendered the UMOT parameter recovery computationally efficient.
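Illustrative note: the SVD-accelerated Gauss-Newton step mentioned in this abstract amounts to solving the (regularized) normal equations through a filtered pseudo-inverse built from the singular triplets of the Jacobian. The sketch below shows one such step on a toy exponential-fitting problem; the Tikhonov filter and the forward model are assumptions, not the DOT formulation of the thesis.

    import numpy as np

    def gauss_newton_step_svd(J, residual, alpha=1e-6):
        # One Gauss-Newton update dx solving (J'J + alpha*I) dx = -J' r via the SVD of J:
        # with J = U diag(s) V', dx = -V diag(s/(s^2 + alpha)) U' r, so the normal matrix
        # is never formed or factorized explicitly.
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        filt = s / (s ** 2 + alpha)          # Tikhonov-filtered inverse singular values
        return -(Vt.T * filt) @ (U.T @ residual)

    # Toy nonlinear least squares: fit (a, b) in the model m(t) = a * exp(-b * t).
    t = np.linspace(0.0, 3.0, 40)
    d = 2.0 * np.exp(-0.5 * t)               # synthetic data generated with a = 2, b = 0.5
    a, b = 1.0, 1.0
    for _ in range(15):
        r = a * np.exp(-b * t) - d                                        # residual vector
        J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])    # dr/d(a, b)
        da, db = gauss_newton_step_svd(J, r)
        a, b = a + da, b + db
    print(a, b)                               # approaches (2.0, 0.5)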
6

Martin, James Robert Ph D. "A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion". Thesis, 2015. http://hdl.handle.net/2152/31374.

Abstract:
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around the existing forward solvers, as long as they are appropriately equipped, for a given physical problem. Then a collection of tools, insights and numerical methods may be applied to solve the problem, and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.

Book chapters on the topic "Stochastic Newton algorithms"

1

Bhatnagar, S., H. Prasad e L. Prashanth. "Newton-Based Smoothed Functional Algorithms". In Stochastic Recursive Algorithms for Optimization, 133–48. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4285-0_8.

2

Bhatnagar, S., H. Prasad e L. Prashanth. "Newton-Based Simultaneous Perturbation Stochastic Approximation". In Stochastic Recursive Algorithms for Optimization, 105–31. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4285-0_7.

3

He, Sailing, Staffan Strom, and Vaughan H. Weston. "Wave-Splittings Combined With Optimization Techniques". In Time Domain Wave-Splittings and Inverse Problems, 185–228. Oxford: Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780198565499.003.0005.

Abstract:
Abstract Recent developments and applications of various optimization methods have provided efficient tools for obtaining numerical solutions to various types of inverse problems (see, e.g. Frank and Balanis (1987), Kleinman and van den Berg (1990), Chew and Otto (1992), He and Kabanikhin (1995)). Optimization methods can be grouped into two types, namely, global search methods and gradient search methods. Global search methods include simulated annealing (see, e.g. Gamero et al. (1991)), neural network methods (see, e.g. Lu and Berryman (1990)), and genetic algorithms (see, e.g. Weile and Michielssen (1997)). A global search method is based on a stochastic algorithm, and its convergence can be very slow. A gradient search method is usually based on a deterministic algorithm, and it converges rapidly (though it may converge to a local minimum). In this chapter, wave-splittings are combined with gradient search optimization techniques. The procedure of deriving an explicit expression for the gradient of an objective functional through dual functions is described. The conjugate gradient method is used to reconstruct the line parameters, or spatial source distributions on a non-uniform line. The time domain inverse problem of reconstructing the temporal behaviours of a bi-isotropic slab is also considered, and a matrix formalism is introduced to derive an explicit expression for the gradient. A multi-dimensional acoustic inverse problem is also considered, and numerical reconstruction of a two-dimensional density profile is given. The Newton-Kantorovich procedure with a wave-splitting process is applied to a time domain inverse problem for a quasi-linear acoustic wave equation. Finally, a measurement system is described and the measurement data are used to reconstruct the susceptibility kernel of a polar liquid (1-buthanol).
4

Arsham, Hossein, and Shaya Sheikh. "Organizational Performance-Design Process". In Advances in Business Information Systems and Analytics, 54–84. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-7272-7.ch005.

Abstract:
Inverse simulation involves finding the control inputs required to achieve a particular performance measure. The designer simulates the process numerically by varying the controllable input for generating desirable output. Clearly, this trial and error is not efficient and effective. This chapter proposes a “stochastic approximation” algorithm to estimate the necessary controllable input parameters within a desired accuracy given a target value for the performance function. The proposed algorithm is based on iterative Newton's method using a single-run simulation to minimize the expected loss function (i.e. risk function). The validity of the input parameter estimates are examined by applying it to some reliability and queuing systems with known analytical solutions. (Keywords: Performance management by simulation; prescriptive analysis for parameter setting; decision support for product and service design; data analysis and design; inverse business performance measure.)
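Illustrative note: the single-run stochastic approximation described in this abstract can be sketched as a Newton/Robbins-Monro-type root-finding recursion that drives a noisy simulated performance measure toward its target value. The M/M/1 queue, the finite-difference derivative estimate, and the gain and clipping constants below are assumptions for illustration, not the chapter's algorithm.

    import numpy as np

    def simulate_mm1_wait(lmbda, mu, n=2000, rng=None):
        # Noisy single-run estimate of the mean waiting time in an M/M/1 queue (Lindley recursion).
        rng = rng or np.random.default_rng()
        inter = rng.exponential(1.0 / lmbda, n)
        service = rng.exponential(1.0 / mu, n)
        w, total = 0.0, 0.0
        for k in range(n):
            w = max(0.0, w + service[k] - inter[k])
            total += w
        return total / n

    def inverse_design(target, lmbda=1.0, mu0=2.0, iters=60, seed=0):
        # Newton-type stochastic approximation: drive the simulated wait toward the target.
        rng = np.random.default_rng(seed)
        mu, h = mu0, 0.05
        for k in range(1, iters + 1):
            j_plus = simulate_mm1_wait(lmbda, mu + h, rng=rng)
            j_minus = simulate_mm1_wait(lmbda, mu - h, rng=rng)
            j_mid = 0.5 * (j_plus + j_minus)                # noisy performance estimate
            deriv = (j_plus - j_minus) / (2.0 * h)          # noisy derivative estimate
            step = (1.0 / k) * (j_mid - target) / deriv     # Newton-type correction
            mu -= float(np.clip(step, -0.5, 0.5))
            mu = max(mu, lmbda + 0.1)                       # keep the queue stable
        return mu

    # For M/M/1 the mean wait is lambda / (mu * (mu - lambda)); target 1.0 gives mu ~ 1.618.
    print(inverse_design(target=1.0))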
5

Jabari, Farkhondeh, Heresh Seyedia, Sajad Najafi Ravadanegh, and Behnam Mohammadi Ivatloo. "Stochastic Contingency Analysis Based on Voltage Stability Assessment in Islanded Power System Considering Load Uncertainty Using MCS and k-PEM". In Advances in Computer and Electrical Engineering, 12–36. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9911-3.ch002.

Abstract:
Increased electricity demands and economic operation of large power systems in a deregulated environment with a limited investment in transmission expansion planning causes interconnected power grids to be operated closer to their stability limits. Meanwhile, the loads uncertainty will affect the static and dynamic stabilities. Therefore, if there is no emergency corrective control in time, occurrence of wide area contingency may lead to the catastrophic cascading outages. Studies show that many wide area blackouts which led to massive economic losses may have been prevented by a fast feasible controlled islanding decision making. This chapter introduces a novel computationally efficient approach for separating of bulk power system into several stable sections following a severe disturbance. The splitting strategy reduces the large initial search space to an interface boundary network considering coherency of synchronous generators and network graph simplification. Then, a novel islanding scenario generator algorithm denoted as BEM (Backward Elimination Method) based on PMEAs (Primary Maximum Expansion Areas) has been applied to generate all proper islanding solutions in the simplified network graph. The PPF (Probabilistic Power Flow) based on Newton-Raphson method and Q-V modal analysis has been used to evaluate the steady-state stability of created islands in each generated scenario. BICA (Binary Imperialistic Competitive Algorithm) has then been applied to minimize total load-generation mismatch considering integrity, voltage permitted range and steady-state voltage stability constraints. The best solution has then been applied to split the entire power network. A novel stochastic contingency analysis of islands based on PSVI (Probability of Static Voltage Instability) using MCS (Monte Carlo Simulation) and k-PEM (k-Point Estimate Method) have been proposed to identify the critical PQ buses and severe contingencies. In these approaches, the ITM (Inverse Transform Method) has been used to model uncertain loads with normal probability distribution function in optimal islanded power system. The robustness, effectiveness and capability of the proposed approaches have been validated on the New England 39-bus standard power system.

Conference papers on the topic "Stochastic Newton algorithms"

1

Graillat, Stef, Fabienne Jezequel, Enzo Queiros Martins, and Maxime Spyropoulos. "Computing multiple roots of polynomials in stochastic arithmetic with Newton method and approximate GCD". In 2021 23rd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE, 2021. http://dx.doi.org/10.1109/synasc54541.2021.00020.

Full text of the source
ABNT, Harvard, Vancouver, APA, and other styles
2

Arun, C. O., B. N. Rao, and S. M. Sivakumar. "Stochastic Damage Growth Analysis Using EFGM". In ASME 2008 Pressure Vessels and Piping Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/pvp2008-61882.

Full text of the source
Abstract:
A stochastic meshless method is presented for solving boundary-value problems in damage mechanics under elastic-plastic conditions involving random material properties. The material is assumed to have an initial damage that follows a lognormal field. An isotropic unified damage law, formulated by Lemaitre, is used in this study. A meshless formulation based on the element free Galerkin method (EFGM) is developed to predict the stochastic structural response. A scaled matrix approach is used for applying the essential boundary conditions in EFGM. The proposed method is based on the perturbation technique. A first-order perturbation technique for damage growth in an elastic-plastic analysis is formulated. The Newton-Raphson algorithm is used to solve for the material nonlinearity. A numerical example is solved to study the effect of random initial damage on the structural response and on further damage growth. The Monte Carlo technique is used to validate the proposed method.
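The small sketch below is included only to make the ingredients concrete: it combines a Newton-Raphson solution of a one-dimensional damaged, nonlinear equilibrium equation with a first-order perturbation estimate of the response variance induced by a random initial damage. The constitutive model and all constants are assumptions; the paper itself works with an EFGM discretization and Lemaitre's damage law.

```python
"""Hypothetical sketch: Newton-Raphson solve of a 1-D damaged, nonlinear
equilibrium R(u; D) = 0, then a first-order perturbation estimate of the
response variance due to a random initial damage D."""

K, ALPHA, F = 100.0, 5.0, 50.0          # assumed stiffness, hardening, load

def residual(u, d):
    return (1.0 - d) * K * u + ALPHA * u**3 - F

def d_residual_du(u, d):
    return (1.0 - d) * K + 3.0 * ALPHA * u**2

def newton_raphson(d, u0=0.0, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        r = residual(u, d)
        if abs(r) < tol:
            break
        u -= r / d_residual_du(u, d)     # Newton-Raphson update
    return u

def first_order_perturbation(d_mean, d_var):
    u = newton_raphson(d_mean)                   # response at the mean damage
    dR_dd = -K * u                               # partial R / partial D
    du_dd = -dR_dd / d_residual_du(u, d_mean)    # implicit-function theorem
    return u, du_dd**2 * d_var                   # mean response, response variance

if __name__ == "__main__":
    mean_u, var_u = first_order_perturbation(d_mean=0.1, d_var=0.01**2)
    print(mean_u, var_u)
```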
ABNT, Harvard, Vancouver, APA, and other styles
3

Zhang, Shumao, Fahim Forouzanfar, and Xiao-Hui Wu. "Stein Variational Gradient Descent for Reservoir History Matching Problems". In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212190-ms.

Full text of the source
Abstract:
Abstract The reservoir history matching problem estimates the system (i.e., reservoir model) parameters from noisy observed data. Examples include estimating the permeability and porosity fields from time series of oil, water, and gas production rates. The estimation of parameters is formulated as estimating their probability distributions; it is a required step for reservoir management, operation and planning under subsurface uncertainty. The Bayesian framework is commonly used to estimate the posterior distribution of the parameters, which may contain multiple modes corresponding to distinct reservoir scenarios. Here, we study the application of the Stein Variational Gradient Descent (SVGD) method, originally proposed by Liu & Wang (2016), to reservoir history matching problems. The rationale and mechanics of the SVGD method are discussed, and the adaptation of this method to the reservoir characterization application is presented. More specifically, we propose to formulate the gradient-based SVGD method using stochastic gradients for reservoir history matching applications. To the best of our knowledge, this paper presents the first application of the SVGD method to a reservoir characterization problem. The use of stochastic approximations of gradients within gradient-based SVGD is another novel aspect of this work. The formulated algorithm is benchmarked on synthetic test problems with known multimodal posterior distributions. The proposed algorithm is also applied to synthetic and real history matching problems, including the IC Fault model and an unconventional well simulation model. The reservoir test problems are further investigated to evaluate the method's performance in comparison with implementations of Gauss-Newton optimization and an iterative Ensemble Smoother method for sampling the posterior distribution. We show that the proposed implementation of SVGD can capture posterior distributions with complicated geometry. For the IC Fault test problem, the method effectively samples multiple modes. For the unconventional test problem, the samples are compared with those obtained using the Gauss-Newton and iterative Ensemble Smoother methods.
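To make the SVGD mechanics tangible, here is a minimal sketch of the particle update of Liu & Wang (2016) on a toy one-dimensional bimodal target. The RBF kernel with median-heuristic bandwidth, the fixed step size, and the toy `grad_log_p` are assumptions; in the paper's setting the gradient of the log posterior would come from (stochastic approximations of) reservoir-simulation sensitivities.

```python
"""Hypothetical sketch: Stein Variational Gradient Descent (SVGD) on a toy
1-D bimodal target, 0.5*N(-2, 1) + 0.5*N(+2, 1)."""
import numpy as np

def grad_log_p(x):
    # Gradient of the log density of the toy Gaussian mixture, per particle.
    centers = np.array([-2.0, 2.0])
    diffs = centers[None, :] - x[:, None]                 # (n, 2)
    logw = -0.5 * diffs**2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # mixture responsibilities
    return (w * diffs).sum(axis=1)

def svgd(particles, steps=500, eps=0.05):
    x = particles.astype(float).copy()
    n = x.size
    for _ in range(steps):
        d2 = (x[:, None] - x[None, :])**2                 # pairwise squared distances
        h2 = np.median(d2) / (2.0 * np.log(n + 1.0)) + 1e-12   # median heuristic
        k = np.exp(-d2 / (2.0 * h2))                      # RBF kernel matrix
        grad = grad_log_p(x)
        # phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
        phi = (k * grad[:, None]).sum(axis=0) / n \
            + (k * (x[None, :] - x[:, None]) / h2).sum(axis=0) / n
        x += eps * phi                                    # driving + repulsive terms
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(svgd(rng.normal(0.0, 1.0, size=50)).round(2))  # particles spread over both modes
```

The first term of phi pushes particles toward high-probability regions, while the kernel-gradient term repels them from each other, which is what lets the particle set cover several modes instead of collapsing into one.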
ABNT, Harvard, Vancouver, APA, and other styles
4

Eltahan, Esmail, Faruk Omer Alpak, and Kamy Sepehrnoori. "A Quasi-Newton Method for Well Location Optimization Under Uncertainty". In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212212-ms.

Full text of the source
Abstract:
Abstract Subsurface development involves well-placement decisions made under a highly uncertain understanding of the subsurface reservoir. The simultaneous optimization of a large number of well locations is a challenging problem. Conventional gradient-based methods are known to perform efficiently for well-placement optimization problems when such problems are translated into real-valued representations and special protocols for handling noisy objective functions are implemented. However, applying such methods to large-scale problems may still be impractical, because the gradients of the objective function may be too expensive to compute for realistic applications in the absence of an adjoint implementation. In this paper, we develop a quasi-Newton method based on the stochastic simplex approximate gradient (StoSAG), which requires only objective-function values. We have implemented the BFGS quasi-Newton updating algorithm together with line-search and trust-region optimization strategies. We have also developed a novel approach to enhance the accuracy of StoSAG gradients by modifying their formulation to exploit the structure of the objective function. The objective function is treated as a sum of element functions, each representing the contribution of an individual well at distinct time steps. Instead of working with a single value for the gradient, we treat it as a sum of sub-gradients. We then utilize problem-specific prior knowledge to form a matrix W that acts on the sub-gradients. The entries of W vary from 0 to 1 and are proportional to the interference effects neighbouring wells have on each other. We define those entries (or weights) based on the radii of investigation around the wells. The BFGS-StoSAG variants are demonstrated on a realistic synthetic case with 26 wells while varying the average reservoir permeability. We first show that the BFGS algorithm delivers promising performance, as in many cases it yields the most rapid improvement in objective-function values (especially in early iterations). Further testing confirms that the trust-region protocol is more effective than the line-search protocol for accelerating convergence with BFGS. Although the objective function is not always continuously differentiable with respect to well locations, the StoSAG variants overcome this challenge owing to the smoothing properties of their approximate gradients. Moreover, we show that using our gradient correction procedures on the well-location optimization problem drastically accelerates convergence, indicating an enhancement in the quality of the StoSAG gradient approximation.
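A minimal sketch of the two ingredients named above, under stated assumptions: a StoSAG-style approximate gradient obtained by least squares from ensemble perturbations of the controls, driving a BFGS quasi-Newton loop with a backtracking line search. The surrogate `objective`, the perturbation size, and all constants are hypothetical, and the sketch omits the paper's W-matrix gradient correction and its trust-region variant.

```python
"""Hypothetical sketch: StoSAG-style approximate gradient (least squares over
ensemble perturbations) inside a BFGS quasi-Newton loop with Armijo backtracking."""
import numpy as np

def objective(x):
    # Assumed smooth surrogate standing in for the (expected) reservoir objective.
    return float(np.sum((x - 3.0)**2) + 0.1 * np.sum(x**4))

def stosag_gradient(x, n_pert=20, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    dX = sigma * rng.standard_normal((n_pert, x.size))    # control perturbations
    dJ = np.array([objective(x + d) - objective(x) for d in dX])
    g, *_ = np.linalg.lstsq(dX, dJ, rcond=None)           # least-squares gradient
    return g

def line_search(x, p, g, alpha=1.0, c=1e-4, shrink=0.5):
    f0 = objective(x)
    while alpha > 1e-6 and objective(x + alpha * p) > f0 + c * alpha * float(g @ p):
        alpha *= shrink                                    # Armijo backtracking
    return alpha

def bfgs_stosag(x0, iters=50):
    x, H = np.asarray(x0, dtype=float), np.eye(len(x0))   # inverse Hessian approx.
    g = stosag_gradient(x)
    for _ in range(iters):
        p = -H @ g                                         # quasi-Newton direction
        s = line_search(x, p, g) * p
        x_new = x + s
        g_new = stosag_gradient(x_new)
        y = g_new - g
        sy = float(s @ y)
        if sy > 1e-12:                                     # curvature condition
            rho, I = 1.0 / sy, np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    print(bfgs_stosag(np.zeros(4)).round(2))
```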
ABNT, Harvard, Vancouver, APA, and other styles
5

Fang, X., and J. Tang. "Granular Damping Analysis Using a Direct Simulation Monte Carlo Approach". In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-14448.

Full text of the source
Abstract:
Granular damping, which possesses promising features for vibration suppression in harsh environments, has been studied using empirical analysis and, more recently, the discrete element method (DEM). The mechanism of granular damping is highly nonlinear and, when numerical analyses are performed, a relatively long simulation of the structural vibration is usually needed to capture the damping behavior, especially in the low-frequency range. The present research explores granular damping analysis by means of the Direct Simulation Monte Carlo (DSMC) approach. Unlike the DEM, which tracks the motion of granules by direct numerical integration of Newton's equations, the DSMC is a statistical approach derived from the Boltzmann equation that describes the velocity evolution of the granular system. Since the exact times and locations of contacts among granules are not calculated in the DSMC, a significant reduction in computational time and cost can be achieved. While the DSMC has been exercised on a variety of granular systems, its application to granular damping analysis poses unique challenges. In this research, we develop a new method that enables the coupled analysis of the stochastic granular motion and the structural vibration. The complicated energy transfer and dissipation due to collisions between the granules and the host structure and among the granules themselves is directly and accurately incorporated into the analysis, which is essential to damping evaluation. Also, the effects of the granular packing ratio and the excluded volume of the granules, which may not be included in the conventional DSMC method, are explicitly taken into account in the proposed approach. A series of numerical analyses are performed to highlight the accuracy and efficiency of the new approach. Using this new algorithm, we can carry out parametric analysis of granular damping to obtain guidelines for system optimization.
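The sketch below is a simplified, single-cell illustration of the stochastic collision step that distinguishes DSMC from DEM: candidate pairs of granules are drawn at random and accepted with probability proportional to their relative speed, and each accepted inelastic collision dissipates kinetic energy, the quantity a damping analysis ultimately tracks. The head-on collision model, restitution coefficient, and candidate count are assumptions; the paper's coupled structure-granule analysis is not reproduced here.

```python
"""Hypothetical sketch: one DSMC collision step for equal-mass granules in a
single cell, with inelastic (restitution e) collisions and energy bookkeeping."""
import numpy as np

def dsmc_collision_step(v, n_candidates=200, e=0.8, rng=None):
    """v: (n, 3) array of granule velocities; returns updated v and dissipated energy."""
    rng = rng or np.random.default_rng(0)
    n = v.shape[0]
    v_rel_max = 2.0 * np.linalg.norm(v, axis=1).max() + 1e-12   # bound on relative speed
    dissipated = 0.0
    for _ in range(n_candidates):
        i, j = rng.choice(n, size=2, replace=False)        # random candidate pair
        g = v[i] - v[j]
        g_norm = np.linalg.norm(g)
        if rng.random() < g_norm / v_rel_max:              # acceptance ~ relative speed
            normal = g / g_norm                            # assumed head-on geometry
            dv = 0.5 * (1.0 + e) * (g @ normal) * normal   # equal-mass impulse
            rel_ke = 0.25 * g_norm**2                      # relative KE per unit mass
            v[i] -= dv
            v[j] += dv
            g_after = np.linalg.norm(v[i] - v[j])
            dissipated += rel_ke - 0.25 * g_after**2       # energy lost to the collision
    return v, dissipated

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vel = rng.normal(0.0, 1.0, size=(500, 3))
    vel, lost = dsmc_collision_step(vel, rng=rng)
    print(f"energy dissipated per unit mass: {lost:.3f}")
```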
ABNT, Harvard, Vancouver, APA, and other styles
