Selected scientific literature on the topic "Randomized iterative methods"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Randomized iterative methods".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.

Journal articles on the topic "Randomized iterative methods":

1. Gower, Robert M., and Peter Richtárik. "Randomized Iterative Methods for Linear Systems". SIAM Journal on Matrix Analysis and Applications 36, no. 4 (January 2015): 1660–90. http://dx.doi.org/10.1137/15m1025487.

2. Loizou, Nicolas, and Peter Richtárik. "Convergence Analysis of Inexact Randomized Iterative Methods". SIAM Journal on Scientific Computing 42, no. 6 (January 2020): A3979–A4016. http://dx.doi.org/10.1137/19m125248x.

3. Xing, Lili, Wendi Bao, Ying Lv, Zhiwei Guo, and Weiguo Li. "Randomized Block Kaczmarz Methods for Inner Inverses of a Matrix". Mathematics 12, no. 3 (2 February 2024): 475. http://dx.doi.org/10.3390/math12030475.

Abstract:
In this paper, two randomized block Kaczmarz methods to compute inner inverses of any rectangular matrix A are presented. These are iterative methods without matrix multiplications, and their convergence is proved. The numerical results show that the proposed methods are more efficient than iterative methods involving matrix multiplications for high-dimensional matrices.
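
For orientation, the single-row randomized Kaczmarz iteration that block methods of this kind generalize can be written in a few lines. A minimal Python sketch for a consistent system Ax = b (an illustration of the building block, not the paper's inner-inverse algorithm):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Classical randomized Kaczmarz for a consistent system Ax = b.

    Each step samples a row i with probability proportional to ||a_i||^2
    and projects the iterate onto the hyperplane a_i^T x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny usage example on a random consistent overdetermined system
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))
```
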
4. Zhao, Jing, Xiang Wang, and Jianhua Zhang. "Randomized average block iterative methods for solving factorised linear systems". Filomat 37, no. 14 (2023): 4603–20. http://dx.doi.org/10.2298/fil2314603z.

Abstract:
Recently, several randomized iterative methods have been proposed to solve large-scale factorised linear systems. In this paper, we present two randomized average block iterative methods that still take advantage of the factored form and do not need to form the entire matrix. The new methods are pseudoinverse-free and can be implemented for parallel computation. Furthermore, we analyze their convergence behavior and obtain exponential convergence rates. Finally, numerical examples are presented to show the effectiveness of our new methods.
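
The abstract does not spell out the update, so the following is only a hedged sketch of what a pseudoinverse-free averaged block step looks like for a plain system Ax = b; the paper's variants additionally exploit the factored form of the system.

```python
import numpy as np

def averaged_block_kaczmarz_step(A, b, x, block, lam=1.0):
    """One pseudoinverse-free averaged block step for Ax = b.

    Rather than solving a block least-squares subproblem (which needs a
    pseudoinverse), average the single-row Kaczmarz directions over the
    sampled block; the per-row terms parallelize trivially.
    """
    rows = A[block]                      # sampled block of rows
    residuals = b[block] - rows @ x      # per-row residuals
    norms = np.sum(rows * rows, axis=1)  # squared row norms
    return x + lam * (residuals / norms) @ rows / len(block)
```
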
5. Zhang, Yanjun, and Hanyu Li. "Splitting-based randomized iterative methods for solving indefinite least squares problem". Applied Mathematics and Computation 446 (June 2023): 127892. http://dx.doi.org/10.1016/j.amc.2023.127892.

6. Yunak, O., M. Klymash, O. Shpur, and V. Mrak. "Mathematical Model of Fractal Structures Recognition Using Neural Network Technology". Information and communication technologies, electronic engineering 3, no. 1 (June 2023): 1–9. http://dx.doi.org/10.23939/ictee2023.01.001.

Abstract:
The article deals with methods of training a neural network to recognize fractal structures with rotation of iteration elements by means of an improved randomized system of iteration functions. Parameters of fractal structures are used to calculate complex parameters of physical phenomena; they are an effective tool in scientific work and are used to calculate quantitative indicators in technical tasks. Calculating these parameters is a difficult mathematical problem, because it is hard to describe the mathematical model of a fractal image and to determine the parameters of its iterative functions. Neural network learning makes it possible to quickly determine the parameters of the first iterations of the fractal from the finished fractal image and, based on them, to determine the parameters of the iterative functions. The improved system of randomized iterative functions (SRIF) makes it possible to describe the mathematical process and to develop software for generating fractal structures with the possibility of rotating iteration elements. In turn, this makes it possible to form an array of data for training a neural network. The trained neural network can determine the parameters of the figures of the first iterations, from which a system of iterative functions can be built, helping to reproduce the fractal structure faithfully. This approach can be used for three-dimensional fractal structures. After determining the parameters of the first iterations of the fractal, it is possible to recover the geometric structure underlying the fractal. In the future, this approach may be included in systems for recognizing objects hidden under fractal structures, for example, under camouflage nets.
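
For readers unfamiliar with randomized systems of iteration functions, the core mechanism is the "chaos game": repeatedly apply a randomly chosen affine map to a point and record the orbit. A toy Python sketch with one rotated map (an illustration only, not the article's improved SRIF):

```python
import numpy as np

def chaos_game(maps, probs, n_points=20000, seed=0):
    """Generate a fractal by randomly iterating affine maps x -> R x + t.

    Each map is a pair (R, t): R is a 2x2 matrix that may encode a
    rotation of the iteration element, t is a translation vector.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    points = np.empty((n_points, 2))
    for k in range(n_points):
        R, t = maps[rng.choice(len(maps), p=probs)]
        x = R @ x + t
        points[k] = x
    return points

def scaled_rotation(theta, s=0.5):
    c, sn = np.cos(theta), np.sin(theta)
    return s * np.array([[c, -sn], [sn, c]])

# Sierpinski-like system in which the third map rotates its element
maps = [(scaled_rotation(0.0), np.array([0.0, 0.0])),
        (scaled_rotation(0.0), np.array([0.5, 0.0])),
        (scaled_rotation(0.3), np.array([0.25, 0.5]))]
pts = chaos_game(maps, probs=[1 / 3, 1 / 3, 1 / 3])
```
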
7. Sabelfeld, Karl K. "Randomized Monte Carlo algorithms for matrix iterations and solving large systems of linear equations". Monte Carlo Methods and Applications 28, no. 2 (31 May 2022): 125–33. http://dx.doi.org/10.1515/mcma-2022-2114.

Abstract:
Randomized scalable vector algorithms for calculating matrix iterations and solving extremely large linear algebraic equations are developed. Among the applications presented in this paper are randomized iterative methods for large linear systems of algebraic equations governed by M-matrices. The crucial idea of the randomized method is that the iterations are performed by sampling random columns only, thus avoiding not only matrix-matrix but also matrix-vector multiplications. The suggested vector randomized methods are highly efficient for solving linear equations of high dimension; the computational cost depends only linearly on the dimension.
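
A naive version of the column-sampling idea can be sketched for a fixed-point iteration x = Ax + b: replace the product A x_k with an unbiased estimate built from a few randomly chosen columns. This is a simplified illustration under the assumption that the iteration converges (spectral radius of A below one); the paper's algorithms are more refined and exploit M-matrix structure.

```python
import numpy as np

def randomized_richardson(A, b, batch=10, iters=500, seed=0):
    """Iterate x_{k+1} = A x_k + b with A x_k estimated from sampled columns.

    Column j is drawn with probability p_j proportional to |x_j| and
    contributes A[:, j] * x_j / p_j, so the estimate is unbiased; no full
    matrix-vector product is ever formed.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = b.copy()
    for _ in range(iters):
        weights = np.abs(x)
        total = weights.sum()
        if total == 0.0:
            break
        p = weights / total
        idx = rng.choice(n, size=batch, p=p)
        estimate = (A[:, idx] * (x[idx] / p[idx])).mean(axis=1)
        x = b + estimate
    return x
```
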
8. Popkov, Yuri S., Yuri A. Dubnov, and Alexey Yu Popkov. "Reinforcement Procedure for Randomized Machine Learning". Mathematics 11, no. 17 (23 August 2023): 3651. http://dx.doi.org/10.3390/math11173651.

Abstract:
This paper is devoted to problem-oriented reinforcement methods for the numerical implementation of Randomized Machine Learning. We have developed a scheme of the reinforcement procedure based on the agent approach and Bellman's optimality principle. This procedure ensures strictly monotonic properties of a sequence of local records in the iterative computational procedure of the learning process. We determine how the size of the neighborhood of the global minimum and the probability of reaching it depend on the parameters of the algorithm, and we prove convergence of the algorithm to this neighborhood with the indicated probability.
9. Xing, Lili, Wendi Bao, and Weiguo Li. "On the Convergence of the Randomized Block Kaczmarz Algorithm for Solving a Matrix Equation". Mathematics 11, no. 21 (5 November 2023): 4554. http://dx.doi.org/10.3390/math11214554.

Abstract:
A randomized block Kaczmarz method and a randomized extended block Kaczmarz method are proposed for solving the matrix equation AXB=C, where the matrices A and B may be full-rank or rank-deficient. These methods are iterative methods without matrix multiplication, and are especially suitable for solving large-scale matrix equations. It is theoretically proved that these methods converge to the solution or least-squares solution of the matrix equation. The numerical results show that these methods are more efficient than existing algorithms for high-dimensional matrix equations.
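
As a hedged illustration of the projection step underneath such methods, here is the single-index Kaczmarz-type iteration for AXB = C (the paper works with blocks and an extended variant for the rank-deficient case):

```python
import numpy as np

def kaczmarz_matrix_equation(A, B, C, iters=20000, seed=0):
    """Randomized Kaczmarz-type iteration for AXB = C.

    Sample row i of A and column j of B, then project X onto the
    hyperplane {X : a_i^T X b_j = c_ij}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    pB, q = B.shape
    a_norms = np.sum(A * A, axis=1)
    b_norms = np.sum(B * B, axis=0)
    X = np.zeros((n, pB))
    for _ in range(iters):
        i = rng.choice(m, p=a_norms / a_norms.sum())
        j = rng.choice(q, p=b_norms / b_norms.sum())
        r = C[i, j] - A[i] @ X @ B[:, j]
        X += r / (a_norms[i] * b_norms[j]) * np.outer(A[i], B[:, j])
    return X
```
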
10. Shcherbakova, Elena M., Sergey A. Matveev, Alexander P. Smirnov, and Eugene E. Tyrtyshnikov. "Study of performance of low-rank nonnegative tensor factorization methods". Russian Journal of Numerical Analysis and Mathematical Modelling 38, no. 4 (1 August 2023): 231–39. http://dx.doi.org/10.1515/rnam-2023-0018.

Abstract:
In the present paper we compare two different iterative approaches to constructing nonnegative tensor train and Tucker decompositions. The first approach is based on the idea of alternating projections and randomized sketching for factorizing tensors with nonnegative elements; it can be useful for both TT and Tucker formats. The second approach consists of two stages: first we find the unconstrained tensor train decomposition of the target array, and then we use this initial approximation to correct it within a moderate number of operations and obtain a factorization with nonnegative factors in either the tensor train or the Tucker model. We study the performance of these methods on both synthetic data and a hyperspectral image, and demonstrate the clear advantage of the latter technique in terms of computational time and a wider range of possible applications.

Theses on the topic "Randomized iterative methods":

1. Gower, Robert Mansel. "Sketch and project: randomized iterative methods for linear systems and inverting matrices". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20989.

Abstract:
Probabilistic ideas and tools have recently begun to permeate several fields where they had traditionally not played a major role, including numerical linear algebra and optimization. One of the key ways in which these ideas influence these fields is via the development and analysis of randomized algorithms for solving standard and new problems of these fields. Such methods are typically easier to analyze, and often lead to faster and/or more scalable and versatile methods in practice. This thesis explores the design and analysis of new randomized iterative methods for solving linear systems and inverting matrices. The methods are based on a novel sketch-and-project framework. By sketching we mean to start with a difficult problem and then randomly generate a simple problem that contains all the solutions of the original problem. After sketching the problem, we calculate the next iterate by projecting our current iterate onto the solution space of the sketched problem.

The starting point for this thesis is the development of an archetype randomized method for solving linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters, a positive definite matrix (defining geometry) and a random matrix (sampled in an i.i.d. fashion in each iteration), we recover a comprehensive array of well-known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We also naturally obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.

We then extend our problem to that of finding the projection of a given vector onto the solution space of a linear system. For this we develop a new randomized iterative algorithm: stochastic dual ascent (SDA). The method is dual in nature, and iteratively solves the dual of the projection problem. The dual problem is a non-strongly concave quadratic maximization problem without constraints. In each iteration of SDA, a dual variable is updated by a carefully chosen point in a subspace spanned by the columns of a random matrix drawn independently from a fixed distribution. The distribution plays the role of a parameter of the method. Our complexity results hold for a wide family of distributions of random matrices, which opens the possibility to fine-tune the stochasticity of the method to particular applications. We prove that primal iterates associated with the dual process converge to the projection exponentially fast in expectation, and give a formula and an insightful lower bound for the convergence rate. We also prove that the same rate applies to dual function values, primal function values and the duality gap. Unlike traditional iterative methods, SDA converges under virtually no additional assumptions on the system (e.g., rank, diagonal dominance) beyond consistency. In fact, our lower bound improves as the rank of the system matrix drops. By mapping our dual algorithm to a primal process, we uncover that the SDA method is the dual method with respect to the sketch-and-project method from the previous chapter. Thus our new, more general convergence results for SDA carry over to the sketch-and-project method and all its specializations (randomized Kaczmarz, randomized coordinate descent, etc.). When our method specializes to a known algorithm, we either recover the best known rates, or improve upon them. Finally, we show that the framework can be applied to the distributed average consensus problem to obtain an array of new algorithms. The randomized gossip algorithm arises as a special case.

In the final chapter, we extend our method for solving linear systems to inverting matrices, and develop a family of methods with specialized variants that maintain symmetry or positive definiteness of the iterates. All the methods in the family converge globally and exponentially, with explicit rates. In special cases, we obtain stochastic block variants of several quasi-Newton updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Ours are the first stochastic versions of these updates shown to converge to an inverse of a fixed matrix. Through a dual viewpoint we uncover a fundamental link between quasi-Newton updates and approximate inverse preconditioning. Further, we develop an adaptive variant of the randomized block BFGS (AdaRBFGS), where we modify the distribution underlying the stochasticity of the method throughout the iterative process to achieve faster convergence. By inverting several matrices from varied applications, we demonstrate that AdaRBFGS is highly competitive when compared to the well-established Newton-Schulz and approximate preconditioning methods. In particular, on large-scale problems our method outperforms the standard methods by orders of magnitude. The development of efficient methods for estimating the inverse of very large matrices is a much-needed tool for preconditioning and variable metric methods in the big data era.
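
The sketch-and-project update is compact enough to run directly. A minimal Python sketch with the geometry matrix B = I and a fresh Gaussian sketching matrix S each iteration (the sketch size and the Gaussian choice are illustrative assumptions; the thesis covers a far broader family of parameters):

```python
import numpy as np

def sketch_and_project(A, b, sketch_size=10, iters=500, seed=0):
    """Sketch-and-project for Ax = b with B = I:

        x_{k+1} = x_k - A^T S (S^T A A^T S)^+ S^T (A x_k - b),

    i.e. project the current iterate onto the solution set of the
    sketched system S^T A x = S^T b.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.standard_normal((m, sketch_size))
        SA = S.T @ A                    # sketched system matrix
        r = S.T @ (A @ x - b)           # sketched residual
        # lstsq acts as the pseudoinverse of the small Gram matrix
        x -= SA.T @ np.linalg.lstsq(SA @ SA.T, r, rcond=None)[0]
    return x
```
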
2. Bai, Xianglan. "Non-Krylov Non-iterative Subspace Methods For Linear Discrete Ill-posed Problems". Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1627042947894919.

3. Ugwu, Ugochukwu Obinna. "Iterative tensor factorization based on Krylov subspace-type methods with applications to image processing". Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1633531487559183.

4. Gazagnadou, Nidham. "Expected smoothness for stochastic variance-reduced methods and sketch-and-project methods for structured linear systems". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT035.

Abstract:
The considerable increase in the number of data points and features complicates the learning phase, which requires the minimization of a loss function. Stochastic gradient descent (SGD) and its variance-reduced variants (SAGA, SVRG, MISO) are widely used to solve this problem. In practice, these methods are accelerated by computing stochastic gradients on a "mini-batch": a small group of samples drawn at random. Indeed, recent technological improvements allowing the parallelization of these computations have generalized the use of mini-batches. In this thesis, we study variance-reduced stochastic gradient algorithms, trying to find their optimal hyperparameters: the step size and the mini-batch size. Our study gives convergence results interpolating between stochastic methods drawing a single sample per iteration and so-called "full-batch" gradient descent using all samples at each iteration. Our analysis is based on the expected smoothness constant, which captures the regularity of the random function whose gradient is computed. We also study another class of optimization algorithms: the "sketch-and-project" methods, which can be applied whenever the learning problem boils down to solving a linear system, as is the case for ridge regression. We analyze variants of this method that use different momentum and acceleration strategies. These methods also depend on the sketching strategy used to compress the information of the system to be solved at each iteration. Finally, we show that these methods can be extended to numerical analysis problems: extending sketch-and-project methods to Alternating-Direction Implicit (ADI) methods makes it possible to apply them to large-scale problems, when classical direct solvers are too slow.
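
As a hedged illustration of the mini-batch variance-reduced setting analyzed in the thesis, here is a minimal mini-batch SAGA loop for a least-squares loss; the fixed step size below merely stands in for the expected-smoothness-dependent step size the analysis provides.

```python
import numpy as np

def minibatch_saga(A, b, step=0.01, batch=8, epochs=50, seed=0):
    """Mini-batch SAGA for f(x) = (1/m) sum_i 0.5 * (a_i^T x - b_i)^2.

    Keeps a table of the last gradient seen for each example; each update
    combines fresh mini-batch gradients with the table's running mean to
    reduce the variance of the stochastic direction.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    table = np.zeros((m, n))   # last stored per-example gradients
    mean_g = np.zeros(n)       # running mean of the table
    for _ in range(epochs * (m // batch)):
        idx = rng.choice(m, size=batch, replace=False)
        grads = (A[idx] @ x - b[idx])[:, None] * A[idx]
        old = table[idx]
        direction = grads.mean(axis=0) - old.mean(axis=0) + mean_g
        mean_g += (grads - old).sum(axis=0) / m
        table[idx] = grads
        x -= step * direction
    return x
```
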
5. Wu, Wei. "Paving the Randomized Gauss-Seidel". Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/scripps_theses/1074.

Abstract:
The Randomized Gauss-Seidel Method (RGS) is an iterative algorithm that solves overdetermined systems of linear equations Ax = b. This paper studies an update on the RGS method, the Randomized Block Gauss-Seidel Method. At each step, the algorithm greedily minimizes the objective function L(x) = ‖Ax − b‖₂ with respect to a subset of coordinates. This paper describes a Randomized Block Gauss-Seidel Method (RBGS) which uses a randomized control method to choose a subset at each step. This algorithm is the first block RGS method with an expected linear convergence rate that can be described by the properties of the matrix A and its column submatrices. The analysis demonstrates that RBGS improves on RGS most when given an appropriate column paving of the matrix, a partition of the columns into well-conditioned blocks. The main result yields an RBGS method that is more efficient than the simple RGS method.
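
A minimal Python sketch of the block update, with uniformly random coordinate blocks standing in for the column paving studied in the thesis:

```python
import numpy as np

def randomized_block_gauss_seidel(A, b, block_size=5, iters=400, seed=0):
    """Randomized block Gauss-Seidel for min_x ||Ax - b||_2.

    Each step exactly minimizes the objective over a random block of
    coordinates by solving a small least-squares problem on the residual.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                               # running residual b - A x
    for _ in range(iters):
        blk = rng.choice(n, size=block_size, replace=False)
        delta = np.linalg.lstsq(A[:, blk], r, rcond=None)[0]
        x[blk] += delta
        r -= A[:, blk] @ delta
    return x
```
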

Book chapters on the topic "Randomized iterative methods":

1. Azzam, Joy, Benjamin W. Ong, and Allan A. Struthers. "Randomized Iterative Methods for Matrix Approximation". In Machine Learning, Optimization, and Data Science, 226–40. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95470-3_17.

2. Zhao, Xuefang. "A Randomized Iterative Approach for SV Discovery with SVelter". In Methods in Molecular Biology, 169–77. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4939-8666-8_13.

3. Márquez, Airam Expósito, and Christopher Expósito-Izquierdo. "An Overview of the Last Advances and Applications of Greedy Randomized Adaptive Search Procedure". In Advances in Computational Intelligence and Robotics, 264–84. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2857-9.ch013.

Abstract:
Heuristic methods are among the most studied approaches for obtaining approximate solutions to optimization problems. Heuristics are usually employed to find good, but not necessarily optimal, solutions. The primary purpose of the chapter at hand is to provide a survey of the Greedy Randomized Adaptive Search Procedure (GRASP). GRASP is an iterative multi-start metaheuristic for solving complex optimization problems. Each GRASP iteration consists of a construction phase followed by a local search procedure. In this chapter, we first describe the basic components of GRASP and the various elements that compose it. We then present different variations of the basic GRASP in order to improve its performance. GRASP has encompassed a wide range of applications covering different fields because of its robustness and ease of application.
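
The construction-plus-local-search loop is easy to state generically. A hedged Python sketch follows; the callback names `greedy_score`, `objective`, and `local_search` are illustrative placeholders, not taken from the chapter.

```python
import random

def grasp(candidates, greedy_score, objective, local_search,
          alpha=0.3, iterations=100, seed=0):
    """Generic GRASP: greedy randomized construction + local search.

    greedy_score(partial, c): incremental cost of appending candidate c.
    objective(solution): total cost of a complete solution (minimized).
    alpha in [0, 1] sizes the restricted candidate list (RCL):
    0 is purely greedy, 1 is purely random.
    """
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        partial, remaining = [], list(candidates)
        while remaining:                         # construction phase
            scored = [(greedy_score(partial, c), c) for c in remaining]
            lo = min(s for s, _ in scored)
            hi = max(s for s, _ in scored)
            rcl = [c for s, c in scored if s <= lo + alpha * (hi - lo)]
            pick = rng.choice(rcl)
            partial.append(pick)
            remaining.remove(pick)
        solution = local_search(partial)         # improvement phase
        value = objective(solution)
        if value < best_cost:
            best, best_cost = solution, value
    return best, best_cost
```
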
4. Inchausti, Pablo. "The Generalized Linear Model". In Statistical Modeling With R, 189–200. Oxford: Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192859013.003.0008.

Abstract:
This chapter introduces the three components of a generalized linear model (GLM): the linear predictor, the link function, and the probability function. It discusses the exponential dispersion family as a generator model for GLMs in a broad sense. It sketches the fitting of a GLM with the iteratively weighted least squares algorithm for maximum likelihood in the frequentist framework. It introduces the main methods for assessing the effects of explanatory variables in frequentist GLMs (the Wald and likelihood ratio tests), the use of deviance as a measure of lack of model fit in GLMs, and the main types of residuals (Pearson, deviance, and randomized quantile) used in GLM model validation. It also discusses Bayesian fitting of GLMs, and some issues involved in defining priors for the GLM parameters.
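
For a binomial GLM with the canonical logit link, the iteratively weighted least squares fit reduces to Newton steps on the log-likelihood. A minimal sketch, assuming a full-column-rank design matrix:

```python
import numpy as np

def irls_logistic(X, y, iters=25, tol=1e-8):
    """Iteratively weighted least squares for a logistic-regression GLM.

    With mu = sigmoid(X beta) and working weights w = mu * (1 - mu), each
    iteration solves the weighted normal equations
        (X^T W X) delta = X^T (y - mu).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # mean via the logit link
        w = mu * (1.0 - mu)                      # working weights
        delta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - mu))
        beta += delta
        if np.linalg.norm(delta) < tol:
            break
    return beta
```
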

Conference papers on the topic "Randomized iterative methods":

1. Ding, Liyong, Enbin Song, and Yunmin Zhu. "Accelerate randomized coordinate descent iterative hard thresholding methods for ℓ0 regularized convex problems". In 2016 35th Chinese Control Conference (CCC). IEEE, 2016. http://dx.doi.org/10.1109/chicc.2016.7553791.

2. Carr, Steven, Nils Jansen, and Ufuk Topcu. "Verifiable RNN-Based Policies for POMDPs Under Temporal Logic Constraints". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/570.

Abstract:
Recurrent neural networks (RNNs) have emerged as an effective representation of control policies in sequential decision-making problems. However, a major drawback in the application of RNN-based policies is the difficulty in providing formal guarantees on the satisfaction of behavioral specifications, e.g. safety and/or reachability. By integrating techniques from formal methods and machine learning, we propose an approach to automatically extract a finite-state controller (FSC) from an RNN, which, when composed with a finite-state system model, is amenable to existing formal verification tools. Specifically, we introduce an iterative modification to the so-called quantized bottleneck insertion technique to create an FSC as a randomized policy with memory. For the cases in which the resulting FSC fails to satisfy the specification, verification generates diagnostic information. We utilize this information to either adjust the amount of memory in the extracted FSC or perform focused retraining of the RNN. While generally applicable, we detail the resulting iterative procedure in the context of policy synthesis for partially observable Markov decision processes (POMDPs), which is known to be notoriously hard. The numerical experiments show that the proposed approach outperforms traditional POMDP synthesis methods by 3 orders of magnitude within 2% of optimal benchmark values.
3. Jahani, Nazanin, Joaquín Ambía, Kristian Fossum, Sergey Alyaev, Erich Suter, and Carlos Torres-Verdín. "Real-Time Ensemble-Based Well-Log Interpretation for Geosteering". In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0105.

Abstract:
The cost of drilling wells on the Norwegian Continental Shelf is extremely high, and hydrocarbon reservoirs are often located in spatially complex rock formations. Optimized well placement with real-time geosteering is crucial to produce efficiently from such reservoirs and reduce exploration and development costs. Geosteering is commonly assisted by repeated formation evaluation based on the interpretation of well logs while drilling. Thus, reliable, computationally efficient, and robust workflows that can interpret well logs and capture uncertainties in real time are necessary for successful well placement. We present a formation evaluation workflow for geosteering that implements an iterative version of an ensemble-based method, namely the approximate Levenberg-Marquardt form of the Ensemble Randomized Maximum Likelihood (LM-EnRML). The workflow jointly estimates the petrophysical and geological model parameters and their uncertainties. In this paper we demonstrate joint estimation of layer-by-layer water saturation, porosity, and layer-boundary locations, and inference of layers' resistivities and densities. The parameters are estimated by minimizing the statistical misfit between the simulated and the observed measurements for several logs on different scales simultaneously (i.e., shallow-sensing nuclear density and shallow to extra-deep EM logs). Numerical experiments performed on a synthetic example verified that the iterative ensemble-based method can estimate multiple petrophysical parameters and decrease their uncertainties in a fraction of the time required by classical Monte Carlo methods. Extra-deep EM measurements are known to provide the most reliable information for geosteering, and we show that they can be interpreted within the proposed workflow. However, we also observe that the parameter uncertainties noticeably decrease when deep-sensing EM logs are combined with shallow-sensing nuclear density logs. Importantly, the estimation quality increases not only in the proximity of the shallow tool but also extends to the look-ahead range of the extra-deep EM capabilities. We specifically quantify how shallow data can lead to significant uncertainty reduction of the boundary positions ahead of the bit, which is crucial for geosteering decisions and reservoir mapping.
4. He, Wei, Hongyan Zhang, Liangpei Zhang, and Huanfeng Shen. "A noise-adjusted iterative randomized singular value decomposition method for hyperspectral image denoising". In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6946731.

5. Feng, Xu, and Wenjian Yu. "A Fast Adaptive Randomized PCA Algorithm". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/411.

Abstract:
It is desirable to adaptively determine the number of dimensions (rank) for PCA according to a given tolerance of low-rank approximation error. In this work, we aim to develop a fast algorithm solving this adaptive PCA problem. We propose to replace the QR factorization in the randQB_EI algorithm with matrix multiplication and inversion of small matrices, and propose a new error indicator to incrementally evaluate the approximation error in the Frobenius norm. Combining the shifted power iteration technique for better accuracy, we finally build up an algorithm named farPCA. Experimental results show that farPCA is much faster than the baseline methods (randQB_EI, randUBV and svds) in a practical setting of multi-threaded computing, while producing nearly optimal results of adaptive PCA.
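
The adaptive rank-growing idea can be illustrated with a plain blocked randomized QB loop that stops once a Frobenius-norm error indicator meets the tolerance. This is a baseline sketch in the spirit of randQB_EI; farPCA's replacement of the QR step and its shifted power iteration are not reproduced here.

```python
import numpy as np

def adaptive_rand_qb(A, tol, block=10, max_rank=None, seed=0):
    """Grow a randomized QB factorization A ~ Q B until the error is small.

    Uses ||A - Q Q^T A||_F^2 = ||A||_F^2 - ||Q^T A||_F^2 (for orthonormal
    Q) as an incrementally updated error indicator, so the residual matrix
    is never formed explicitly.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    Qs, Bs = [], []
    err2 = np.linalg.norm(A, "fro") ** 2
    rank = 0
    while err2 > tol ** 2 and rank < max_rank:
        Y = A @ rng.standard_normal((n, block))
        for Q in Qs:                     # project out the existing basis
            Y -= Q @ (Q.T @ Y)
        Qi, _ = np.linalg.qr(Y)
        Bi = Qi.T @ A
        Qs.append(Qi)
        Bs.append(Bi)
        err2 -= np.linalg.norm(Bi, "fro") ** 2
        rank += block
    if not Qs:
        return np.zeros((m, 0)), np.zeros((0, n))
    return np.hstack(Qs), np.vstack(Bs)
```

An SVD of the small factor B then yields the leading principal components of a centered data matrix.
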
6. Kaushik, Harshal, and Farzad Yousefian. "A Randomized Block Coordinate Iterative Regularized Subgradient Method for High-dimensional Ill-posed Convex Optimization". In 2019 American Control Conference (ACC). IEEE, 2019. http://dx.doi.org/10.23919/acc.2019.8815256.

7. Buermann, Jan, and Jie Zhang. "Multi-Robot Adversarial Patrolling Strategies via Lattice Paths". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/582.

Abstract:
In full-knowledge multi-robot adversarial patrolling, a group of robots has to detect an adversary who knows the robots' strategy. The adversary can easily take advantage of any deterministic patrolling strategy, which necessitates the employment of a randomised strategy. While the Markov decision process has been the dominant methodology for computing the penetration detection probabilities, we apply enumerative combinatorics to characterise them. This allows us to provide closed formulae for these probabilities and facilitates characterising optimal random defence strategies. Compared to iteratively updating the Markov transition matrices, our method significantly reduces the time and space complexity of solving the problem. We use this method to tackle four penetration configurations.
8. Xie, Jiarui, Chonghui Zhang, Lijun Sun, and Yaoyao Fiona Zhao. "Fairness- and Uncertainty-Aware Data Generation for Data-Driven Design". In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-114687.

Abstract:
The design dataset is the backbone of data-driven design. Ideally, the dataset should be fairly distributed in both shape and property spaces to efficiently explore the underlying relationship. However, classical experimental design focuses on shape diversity and thus yields biased exploration in the property space. Recently developed methods either conduct subset selection from a large dataset or employ assumptions with severe limitations. In this paper, fairness- and uncertainty-aware data generation (FairGen) is proposed to actively detect and generate missing properties starting from a small dataset. At each iteration, its coverage module computes the data coverage to guide the selection of the target properties. The uncertainty module ensures that the generative model can make certain and thus accurate shape predictions. Integrating the two modules, Bayesian optimization determines the target properties, which are thereafter fed into the generative model to predict the associated shapes. The new designs, whose properties are analyzed by simulation, are added to the design dataset. An S-slot design dataset case study was implemented to demonstrate the efficiency of FairGen in auxetic structural design. Compared with grid and randomized sampling, FairGen increased the coverage score at twice the speed and significantly expanded the sampled region in the property space. As a result, the generative models trained with FairGen-generated datasets showed consistent and significant reductions in mean absolute errors.
9. Gao, Guohua, Horacio Florez, Sean Jost, Shakir Shaikh, Kefei Wang, Jeroen Vink, Carl Blom, Terence Wells, and Fredrik Saaf. "Implementation of Asynchronous Distributed Gauss-Newton Optimization Algorithms for Uncertainty Quantification by Conditioning to Production Data". In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210118-ms.

Abstract:
The previous implementation of the distributed Gauss-Newton (DGN) optimization algorithm runs multiple optimization threads in parallel, employing a synchronous running mode (S-DGN). As a result, it waits for all simulations submitted in each iteration to complete, which may significantly degrade performance because a few simulations may run much longer than others, especially for time-consuming real-field cases. To overcome this limitation and thus improve the DGN optimizer's execution, we propose two asynchronous DGN (A-DGN) optimization algorithms in this paper. The A-DGN optimizer is a well-parallelized and efficient derivative-free optimization (DFO) method. It generates multiple initial guesses by sampling from the prior probability distribution of uncertain parameters in the first iteration, and then runs multiple simulations on high-performance-computing (HPC) clusters in parallel. A checking time interval is introduced to control the optimization process: the A-DGN optimizer checks the status of all running simulations after every checking time frame, and a new simulation case is proposed immediately once the simulation of an optimization thread is completed, without waiting for the completion of other simulations. Thus, each A-DGN optimization thread becomes independent. The two A-DGN optimization algorithms are 1) a local-search algorithm to locate multiple maximum-a-posteriori (MAP) estimates and 2) an integrated global-search algorithm with the randomized-maximum-likelihood (RML) method to generate hundreds of RML samples in parallel for uncertainty quantification. We modified the training-data set updating algorithm, using the iteration index of each thread, to implement the asynchronous running mode. The sensitivity matrix at the best solution of each optimization thread is estimated by linear interpolation of a subset of the training data closest to the best solution, using the modified QR decomposition method. A new simulation case (or search point) is generated by solving the Gauss-Newton trust-region subproblem (GNTRS), together with the estimated sensitivity matrix, using the more efficient and robust GNTRS solver that we developed recently. The proposed A-DGN optimization method is tested and validated on a synthetic problem and then applied to a real-field deep-water reservoir model. Numerical tests confirm that the proposed A-DGN optimization method can converge to solutions with matching quality comparable to those obtained by the S-DGN optimizer, reducing the time required for the optimizer to converge by a factor of 1.3 to 2 relative to the S-DGN optimizer, depending on the problem. The new A-DGN optimization algorithm presented in this paper helps improve efficiency and robustness in solving history-matching or inversion problems, especially for uncertainty quantification of subsurface model parameters and production forecasts of real-field reservoirs by conditioning to production data.
10. Pitz, Emil, Sean Rooney, and Kishore Pochiraju. "Modeling and Calibration of Uncertainty in Material Properties of Additively Manufactured Composites". In Thirty-sixth Technical Conference. DEStech Publications, Inc., 2021. http://dx.doi.org/10.12783/asc36/35758.

Abstract:
Simulations quantifying the uncertainty in structural response and damage evolution require accurate representation of the randomness of the underlying material stiffness and strength behaviors. In this paper, the mean and variance descriptions of variability of strength and stiffness of additively manufactured composite specimens are augmented with random field correlation descriptors that represent the process dependence on the property heterogeneity through microstructure variations. Two correlation lengths and a rotation parameter are introduced into randomized stiffness and strength distribution fields to capture the local heterogeneities in the microstructure of Additively Manufactured (AM) composites. We formulated a simulation and Artificial Intelligence (AI)-based technique to calibrate the correlation length and rotation parameter measures from relatively few samples of experimentally obtained strain field observations using Digital Image Correlation (DIC). The neural networks used for calibrating the correlation lengths of Karhunen-Loève Expansion (KL expansion) from the DIC images are trained using simulated stiffness and strength fields that have known correlation coefficients. A virtual DIC filter is used to add the noise and artifacts from typical DIC analysis to the simulated strain fields. A Deep Neural Network (DNN), whose architecture is optimized using Efficient Neural Architecture Search (ENAS), is trained on 150,000 simulated DIC images. The trained DNN is then used for calibration of KL expansion correlation lengths for additively manufactured composite specimens. The AM composites are loaded in tension and DIC images of the strain fields are generated and presented to the DNNs, which produce the correlation coefficients for the random fields as outputs. Compared to classical optimization methods to calibrate model parameters iteratively, neural networks, once trained, efficiently and quickly predict parameters without the need for a robust simulator and optimization methods.
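
The random-field model being calibrated can be sketched directly: draw a Gaussian property field from a truncated Karhunen-Loève expansion of a covariance with two correlation lengths and a rotation angle. The exponential kernel, mean, and standard deviation below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def sample_kl_field(points, corr_len_x, corr_len_y, theta,
                    mean=1.0, std=0.1, n_terms=50, seed=0):
    """Sample a 2D Gaussian random field via a truncated KL expansion.

    The covariance is an exponential kernel evaluated in coordinates that
    are rotated by theta and scaled by the two correlation lengths --
    the three parameters the paper calibrates from DIC strain fields.
    """
    rng = np.random.default_rng(seed)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    scaled = (points @ R.T) / np.array([corr_len_x, corr_len_y])
    diff = scaled[:, None, :] - scaled[None, :, :]
    C = np.exp(-np.sqrt((diff ** 2).sum(axis=-1)))    # covariance matrix
    vals, vecs = np.linalg.eigh(C)
    vals, vecs = vals[::-1], vecs[:, ::-1]            # descending order
    k = min(n_terms, len(vals))
    xi = rng.standard_normal(k)                       # KL coefficients
    return mean + std * vecs[:, :k] @ (np.sqrt(np.maximum(vals[:k], 0.0)) * xi)

# Example: a stiffness-like field on a 20 x 20 grid of points
g = np.linspace(0.0, 1.0, 20)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
field = sample_kl_field(pts, corr_len_x=0.3, corr_len_y=0.1, theta=0.5)
```
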
