Journal articles on the topic 'Gauss–Newton search'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 39 journal articles for your research on the topic 'Gauss–Newton search.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Ke, and Mauricio D. Sacchi. "Time-domain elastic Gauss–Newton full-waveform inversion: a matrix-free approach." Geophysical Journal International 223, no. 2 (July 21, 2020): 1007–39. http://dx.doi.org/10.1093/gji/ggaa330.

Full text
Abstract:
SUMMARY We present a time-domain matrix-free elastic Gauss–Newton full-waveform inversion (FWI) algorithm. Our algorithm consists of a Gauss–Newton update with a search direction calculated via elastic least-squares reverse time migration (LSRTM). The conjugate gradient least-squares (CGLS) method solves the LSRTM problem with forward and adjoint operators derived via the elastic Born approximation. The Hessian of the Gauss–Newton method is never explicitly formed or saved in memory. In other words, the CGLS algorithm solves for the Gauss–Newton direction via the application of implicit-form forward and adjoint operators which are equivalent to elastic Born modelling and elastic reverse time migration, respectively. We provide numerical examples to test the proposed algorithm where we invert for P- and S-wave velocities simultaneously. The proposed algorithm performs well on mid-size problems, where we report solutions that are slightly better than those computed using the conventional non-linear conjugate gradient method. In spite of the aforementioned limited gain, the theory developed in this paper contributes to a better understanding of time-domain elastic Gauss–Newton FWI.
APA, Harvard, Vancouver, ISO, and other styles
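The abstract above describes obtaining the Gauss–Newton direction with conjugate-gradient least squares (CGLS) while only ever applying the forward and adjoint operators, so that the Hessian is never formed. The sketch below is a minimal, generic illustration of that idea, not the authors' elastic FWI code: apply_J and apply_Jt are hypothetical stand-ins for Born modelling and reverse time migration, and the toy usage substitutes an explicit matrix for those operators.

```python
import numpy as np

def cgls(apply_J, apply_Jt, residual, n_model, n_iter=50, tol=1e-8):
    """Solve min_dm ||J dm - residual||^2 with conjugate-gradient least squares.

    apply_J(dm)  -> data-space vector  (stands in for Born modelling)
    apply_Jt(d)  -> model-space vector (stands in for reverse time migration)
    The Gauss-Newton Hessian J^T J is never formed explicitly.
    """
    dm = np.zeros(n_model)
    r = residual.copy()          # data-space residual (dm starts at zero)
    s = apply_Jt(r)              # model-space gradient
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = apply_J(p)
        alpha = gamma / (q @ q)
        dm += alpha * p          # accumulate the Gauss-Newton search direction
        r -= alpha * q
        s = apply_Jt(r)
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return dm

# Toy usage with an explicit matrix standing in for the implicit operators.
rng = np.random.default_rng(0)
J = rng.standard_normal((80, 30))
data_residual = rng.standard_normal(80)
dm = cgls(lambda x: J @ x, lambda y: J.T @ y, data_residual, n_model=30)
```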
2

CHIANG, TIHAO, and YA-QIN ZHANG. "STEREOSCOPIC VIDEO CODING USING A FAST AND ROBUST AFFINE MOTION SEARCH." International Journal of Image and Graphics 01, no. 02 (April 2001): 231–50. http://dx.doi.org/10.1142/s0219467801000153.

Full text
Abstract:
This paper presents a stereoscopic video compression scheme using a novel fast affine motion estimation technique. A temporal scalable approach is used to achieve backward compatibility with a standard definition TV. We use an adaptive mode selection scheme from three temporal locations in both channels. Both block-based and affine-motion based approaches are used to achieve two levels of improvements with different complexities. An innovative motion estimation technique using Gauss–Newton optimization and pyramid processing is implemented to efficiently estimate affine parameters. Unlike other Gauss–Newton approaches, our search technique uses only addition, subtraction and multiplication and it converges within four iterations, which implies great complexity reduction. An efficient and robust affine motion prediction yields significant improvement over the disparity-based approach. Part of the disparity-based approach has been tested in the rigorous MPEG-2 bitstream exchange process, and adopted in the MPEG-2 Multi-View Profile (MVP).
APA, Harvard, Vancouver, ISO, and other styles
3

de Klerk, E., J. Peng, C. Roos, and T. Terlaky. "A Scaled Gauss--Newton Primal-Dual Search Direction for Semidefinite Optimization." SIAM Journal on Optimization 11, no. 4 (January 2001): 870–88. http://dx.doi.org/10.1137/s1052623499352632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

De Zaeytijd, Jürgen, and Ann Franchois. "A Subspace Preconditioned LSQR Gauss-Newton Method with a Constrained Line Search Path Applied to 3D Biomedical Microwave Imaging." International Journal of Antennas and Propagation 2015 (2015): 1–21. http://dx.doi.org/10.1155/2015/924067.

Full text
Abstract:
Three contributions that can improve the performance of a Newton-type iterative quantitative microwave imaging algorithm in a biomedical context are proposed. (i) To speed up the iterative forward problem solution, we extrapolate the initial guess of the field from a few field solutions corresponding to previous source positions for the same complex permittivity (i.e., “marching on in source position”) as well as from a Born-type approximation that is computed from a field solution corresponding to one previous complex permittivity profile for the same source position. (ii) The regularized Gauss-Newton update system can be ill-conditioned; hence we propose to employ a two-level preconditioned iterative solution method. We apply the subspace preconditioned LSQR algorithm from Jacobsen et al. (2003) and we employ a 3D cosine basis. (iii) We propose a new constrained line search path in the Gauss-Newton optimization, which incorporates in a smooth manner lower and upper bounds on the object permittivity, such that these bounds never can be violated along the search path. Single-frequency reconstructions from bipolarized synthetic data are shown for various three-dimensional numerical biological phantoms, including a realistic breast phantom from the University of Wisconsin-Madison (UWCEM) online repository.
APA, Harvard, Vancouver, ISO, and other styles
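The abstract above mentions a line search path that keeps the permittivity within lower and upper bounds by construction. One generic way to obtain such a path, shown below as a rough sketch and not necessarily the authors' construction, is to take the step in an unconstrained variable and map it back through a smooth logistic transform, so every point on the path stays strictly inside the bounds.

```python
import numpy as np

def to_bounded(u, lo, hi):
    """Smooth map from an unconstrained variable u to a value strictly inside (lo, hi)."""
    return lo + (hi - lo) / (1.0 + np.exp(-u))

def to_unbounded(eps, lo, hi):
    """Inverse map; eps must lie strictly inside (lo, hi)."""
    t = (eps - lo) / (hi - lo)
    return np.log(t / (1.0 - t))

def bounded_line_search_path(eps_current, step, lo, hi, alphas):
    """Evaluate a search path eps(alpha) that can never leave (lo, hi).

    The update step is applied in the unconstrained variable, so every point
    along the path maps back into the admissible permittivity range.
    """
    u0 = to_unbounded(eps_current, lo, hi)
    return [to_bounded(u0 + a * step, lo, hi) for a in alphas]

# Toy usage: two permittivity unknowns constrained to (1, 80).
path = bounded_line_search_path(np.array([10.0, 40.0]),
                                step=np.array([5.0, -3.0]),
                                lo=1.0, hi=80.0,
                                alphas=[0.0, 0.5, 1.0, 2.0])
```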
5

Acarnley, Paul. "Power System Load Flow Analysis Using an Excel Workbook." International Journal of Electrical Engineering & Education 42, no. 2 (April 2005): 185–202. http://dx.doi.org/10.7227/ijeee.42.2.6.

Full text
Abstract:
The paper describes the development and features of an MS-Excel Workbook (available at www.reseeds.com ), which illustrates four methods of power system load flow analysis. Iterative techniques are represented by the Newton-Raphson and Gauss-Seidel methods. The Workbook also includes two search algorithms: genetic algorithms and simulated annealing.
APA, Harvard, Vancouver, ISO, and other styles
6

BAKIR, PELIN GUNDES, and Y. SERHAT ERDOGAN. "DAMAGE IDENTIFICATION IN REINFORCED CONCRETE BEAMS BY FINITE ELEMENT MODEL UPDATING USING PARALLEL AND HYBRID GENETIC ALGORITHMS." International Journal of Computational Methods 10, no. 03 (April 17, 2013): 1350010. http://dx.doi.org/10.1142/s0219876213500102.

Full text
Abstract:
Finite element (FE) model updating belongs to the class of inverse problems in mechanics and is a constrained optimization problem. In FE model updating, the difference between the modal parameters (the frequencies, damping ratios and the mode shapes) obtained from the FE model of the structure and those from the vibration measurements are minimized within an optimization algorithm. The design variables of the optimization problem are the stiffness reduction factors, which represent the damage. In this study, the Genetic Algorithms (GA), the Parallel GA, the local search algorithms, the Trust Region Gauss Newton, the Sequential Quadratic Programming, the Levenberg–Marquardt Techniques and the hybrid versions of these methods are applied within the FE Model Updating Technique for updating the Young's modulus of different FEs of a reinforced concrete beam. Different damage scenarios and different noise levels are taken into account. The results of the study show that the local search algorithms cannot detect, locate and quantify damage in reinforced concrete beam type structures while the GA together with the hybrid and the parallel versions detect, localize and identify the damage very accurately. It is apparent that the hybrid GA & Trust Region Gauss Newton Technique is best in terms of the computation speed as well as accuracy.
APA, Harvard, Vancouver, ISO, and other styles
7

Lian, Zhigang, Songhua Wang, and Yangquan Chen. "A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations." Mathematical Problems in Engineering 2020 (August 25, 2020): 1–9. http://dx.doi.org/10.1155/2020/6284583.

Full text
Abstract:
Many people use traditional methods such as the quasi-Newton method and the Gauss–Newton-based BFGS method to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm to solve nonlinear equations. The novel algorithm introduces the historical and local optimum information of particles to update a particle’s velocity. Five sets of typical nonlinear equations are employed to test the quality and reliability of the novel algorithm's search in comparison with the standard PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can be used as a new tool to solve nonlinear equations, continuous function optimization problems, combinatorial optimization problems, and so on. The global convergence of the given method is established.
APA, Harvard, Vancouver, ISO, and other styles
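As a rough illustration of the idea in the abstract above, the sketch below runs a particle swarm with a velocity update that blends the particle's own best, the global best, and a ring-neighbourhood "local best". The weighting constants and the ring topology are assumptions for the sketch, not the paper's exact update rule, and the toy system of two nonlinear equations is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def equations(x):
    """Example nonlinear system F(x) = 0 (two equations, two unknowns)."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def fitness(x):
    return np.sum(equations(x)**2)   # minimise the squared residual norm

n, dim, iters = 30, 2, 200
w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5          # c3 weights the neighbourhood best
pos = rng.uniform(-3, 3, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_val)]
    for i in range(n):
        # ring-topology "local best": best of the particle and its two neighbours
        ring = [(i - 1) % n, i, (i + 1) % n]
        lbest = pbest[ring[np.argmin(pbest_val[ring])]]
        r1, r2, r3 = rng.random(3)
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i])
                  + c3 * r3 * (lbest - pos[i]))
        pos[i] += vel[i]
        val = fitness(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), val

print("best residual norm:", np.sqrt(pbest_val.min()))
```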
8

Šeruga, Domen, and Marko Nagode. "Comparative analysis of optimisation methods for linking material parameters of exponential and power models: An application to cyclic stress–strain curves of ferritic stainless steel." Proceedings of the Institution of Mechanical Engineers, Part L: Journal of Materials: Design and Applications 233, no. 9 (August 13, 2018): 1802–13. http://dx.doi.org/10.1177/1464420718790829.

Full text
Abstract:
The four most commonly used optimisation methods for linking the material parameters of an exponential Armstrong–Frederick and a power Ramberg–Osgood model are compared for given cyclic stress–strain curves of a ferritic stainless steel EN 1.4512. These methods are the damped Gauss–Newton method, the Levenberg–Marquardt method, the Downhill Simplex method and a genetic algorithm. Globally optimal material parameters are obtained by parallel searches within the methods. The methods are tested for cyclic curves at 20 ℃, 300 ℃, 650 ℃ and 850 ℃. The optimal values of material parameters and R2 values are comparable, whereas the search paths, the numbers of steps to reach optimal solutions and the processing time of the methods differ.
APA, Harvard, Vancouver, ISO, and other styles
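A damped Gauss–Newton fit of a power-law (Ramberg–Osgood-type) stress–strain relation, one of the methods compared in the abstract above, can be sketched as follows. The model form, the assumed elastic modulus E, the synthetic "true" parameters, and the simple step-halving damping rule are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

E = 200e3  # MPa, assumed elastic modulus

def model(params, stress):
    """Ramberg-Osgood-type strain: elastic part plus power-law plastic part."""
    K, n = params
    return stress / E + (stress / K) ** (1.0 / n)

def damped_gauss_newton(params, stress, strain_obs, n_iter=50):
    params = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        r = model(params, stress) - strain_obs          # residuals
        # forward-difference Jacobian (2 parameters, len(stress) residuals)
        J = np.empty((stress.size, params.size))
        for j in range(params.size):
            h = 1e-6 * max(abs(params[j]), 1.0)
            p = params.copy(); p[j] += h
            J[:, j] = (model(p, stress) - model(params, stress)) / h
        step = np.linalg.lstsq(J, -r, rcond=None)[0]    # Gauss-Newton step
        # damping: halve the step until the residual norm decreases
        lam = 1.0
        while lam > 1e-8:
            r_new = model(params + lam * step, stress) - strain_obs
            if np.linalg.norm(r_new) < np.linalg.norm(r):
                break
            lam *= 0.5
        params = params + lam * step
    return params

# Synthetic data with hypothetical "true" parameters K = 900 MPa, n = 0.12.
stress = np.linspace(50, 600, 25)
strain = model([900.0, 0.12], stress) + 1e-5 * np.random.default_rng(2).standard_normal(25)
print(damped_gauss_newton([1200.0, 0.2], stress, strain))
```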
9

Wang, Jueyu, and Detong Zhu. "The Gauss-Newton Methods via Conjugate Gradient Path without Line Search Technique for Solving Nonlinear Systems." Numerical Functional Analysis and Optimization 38, no. 1 (September 14, 2016): 110–37. http://dx.doi.org/10.1080/01630563.2016.1232729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lampariello, F., and M. Sciandrone. "Use of the Minimum-Norm Search Direction in a Nonmonotone Version of the Gauss-Newton Method." Journal of Optimization Theory and Applications 119, no. 1 (October 2003): 65–82. http://dx.doi.org/10.1023/b:jota.0000005041.99777.af.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Tett, Simon F. B., Kuniko Yamazaki, Michael J. Mineter, Coralia Cartis, and Nathan Eizenberg. "Calibrating climate models using inverse methods: case studies with HadAM3, HadAM3P and HadCM3." Geoscientific Model Development 10, no. 9 (September 28, 2017): 3567–89. http://dx.doi.org/10.5194/gmd-10-3567-2017.

Full text
Abstract:
Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (3rd Hadley Centre Coupled model) state resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
12

Pan, Wenyong, Kristopher A. Innanen, and Wenyuan Liao. "Accelerating Hessian-free Gauss-Newton full-waveform inversion via l-BFGS preconditioned conjugate-gradient algorithm." GEOPHYSICS 82, no. 2 (March 1, 2017): R49—R64. http://dx.doi.org/10.1190/geo2015-0595.1.

Full text
Abstract:
Full-waveform inversion (FWI) has emerged as a powerful strategy for estimating subsurface model parameters by iteratively minimizing the difference between synthetic data and observed data. The Hessian-free (HF) optimization method represents an attractive alternative to Newton-type and gradient-based optimization methods. At each iteration, the HF approach obtains the search direction by approximately solving the Newton linear system using a matrix-free conjugate-gradient (CG) algorithm. The main drawback with HF optimization is that the CG algorithm requires many iterations. In our research, we develop and compare different preconditioning schemes for the CG algorithm to accelerate the HF Gauss-Newton (GN) method. Traditionally, preconditioners are designed as diagonal Hessian approximations. We additionally use a new pseudo diagonal GN Hessian as a preconditioner, making use of the reciprocal property of Green’s function. Furthermore, we have developed an l-BFGS inverse Hessian preconditioning strategy with the diagonal Hessian approximations as an initial guess. Several numerical examples are carried out. We determine that the quasi-Newton l-BFGS preconditioning scheme with the pseudo diagonal GN Hessian as the initial guess is most effective in speeding up the HF GN FWI. We examine the sensitivity of this preconditioning strategy to random noise with numerical examples. Finally, in the case of multiparameter acoustic FWI, we find that the l-BFGS preconditioned HF GN method can reconstruct velocity and density models better and more efficiently compared with the nonpreconditioned method.
APA, Harvard, Vancouver, ISO, and other styles
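The preconditioning idea in the abstract above, solving the Gauss–Newton system with conjugate gradients while applying an approximate inverse Hessian to each residual, can be sketched generically. The code below uses a plain Jacobi (diagonal) preconditioner on a toy dense problem; the paper's pseudo-diagonal Hessian and l-BFGS preconditioners would simply replace the precond callable.

```python
import numpy as np

def pcg(hess_vec, grad, precond, n_iter=100, tol=1e-6):
    """Preconditioned conjugate gradients for the Newton system H x = -grad.

    hess_vec(v) returns H @ v (a Gauss-Newton Hessian-vector product, supplied
    matrix-free); precond(r) applies an approximate inverse Hessian, e.g. the
    reciprocal of a diagonal Hessian estimate or an l-BFGS recursion.
    """
    x = np.zeros_like(grad)
    r = -grad - hess_vec(x)          # residual of H x = -grad
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Hp = hess_vec(p)
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage: H = J^T J for a random Jacobian, Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(3)
J = rng.standard_normal((60, 20))
H = J.T @ J + 1e-3 * np.eye(20)
g = rng.standard_normal(20)
diag_inv = 1.0 / np.diag(H)
step = pcg(lambda v: H @ v, g, lambda r: diag_inv * r)
```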
13

Brandstätter, Bernhard, and Christian Magele. "Hierarchical simulated annealing vs. a Gauss‐Newton scheme applying analytical Jacobians for the solution of a source current distribution problem." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 20, no. 2 (June 1, 2001): 497–506. http://dx.doi.org/10.1108/03321640110383375.

Full text
Abstract:
Considers, without loss of generality, a simple linear problem, where in a certain domain the magnetic field, generated by infinitely long conductors, whose locations as well as the currents are unknown, has to meet a certain figure. The problem is solved by applying hierarchical simulated annealing, which iteratively reduces the dimension of the search space to save computational cost. A Gauss‐Newton scheme, making use of analytical Jacobians, preceding a sequential quadratic program (SQP), will be applied as a second approach to tackle this severely ill‐posed problem. The results of these two techniques will be analyzed and discussed and some comments on future work will be given.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Jin, Ke Chen, Fang Chen, and Bo Yu. "An Efficient Numerical Method for Mean Curvature-Based Image Registration Model." East Asian Journal on Applied Mathematics 7, no. 1 (January 31, 2017): 125–42. http://dx.doi.org/10.4208/eajam.200816.031216a.

Full text
Abstract:
The mean curvature-based image registration model, first proposed by Chumchob-Chen-Brito (2011), offered a better regularizer technique for both smooth and nonsmooth deformation fields. However, it is extremely challenging to solve this model efficiently, and existing methods are slow or become efficient only with strong assumptions on the smoothing parameter β. In this paper, we take a different solution approach. Firstly, we discretize the joint energy functional; a relaxed fixed-point idea is then implemented and combined with a Gauss-Newton scheme with Armijo's line search for solving the discretized mean curvature model, and further combined with a multilevel method to achieve fast convergence. Numerical experiments confirm not only that our proposed method is efficient and stable, but also that it can give more satisfying registration results in terms of image quality.
APA, Harvard, Vancouver, ISO, and other styles
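A Gauss–Newton iteration safeguarded by an Armijo backtracking line search, the combination named in the abstract above, can be written generically as below. The exponential-fit toy problem and the constants c1 and tau are illustrative choices, not the registration model or the authors' multilevel solver.

```python
import numpy as np

def armijo_gauss_newton(residual, jacobian, x0, n_iter=30, c1=1e-4, tau=0.5):
    """Gauss-Newton iterations safeguarded by an Armijo backtracking line search.

    residual(x) -> r, jacobian(x) -> J; objective f(x) = 0.5 * ||r(x)||^2.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        f = 0.5 * r @ r
        g = J.T @ r                                   # gradient of f
        p = np.linalg.lstsq(J, -r, rcond=None)[0]     # Gauss-Newton direction
        alpha = 1.0
        # Armijo sufficient-decrease condition
        while True:
            r_new = residual(x + alpha * p)
            if 0.5 * r_new @ r_new <= f + c1 * alpha * (g @ p) or alpha < 1e-10:
                break
            alpha *= tau
        x = x + alpha * p
    return x

# Toy usage: fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0, 1, 40)
y = 2.0 * np.exp(-1.3 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
print(armijo_gauss_newton(res, jac, [1.0, 0.0]))
```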
15

Gao, Guohua, and Albert C. Reynolds. "An Improved Implementation of the LBFGS Algorithm for Automatic History Matching." SPE Journal 11, no. 01 (March 1, 2006): 5–17. http://dx.doi.org/10.2118/90058-pa.

Full text
Abstract:
Summary For large scale history matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) is an efficient optimization algorithm (Zhang and Reynolds, 2002; Zhang, 2002). However, computational experiments reveal that application of the original implementation of LBFGS may encounter the following problems: (1) converge to a model which gives an unacceptable match of production data; (2) generate a bad search direction that either leads to false convergence or a restart with the steepest descent direction, which radically reduces the convergence rate; (3) exhibit overshooting and undershooting, i.e., converge to a vector of model parameters which contains some abnormally high or low values of model parameters which are physically unreasonable. Overshooting and undershooting can occur even though all history matching problems are formulated in a Bayesian framework with a prior model providing regularization. We show that the rate of convergence and the robustness of the algorithm can be significantly improved by (1) a more robust line search algorithm motivated by the theoretical result that the Wolfe conditions should be satisfied; (2) an application of a data damping procedure at early iterations; or (3) enforcing constraints on the model parameters. Computational experiments also indicate that a simple rescaling of model parameters prior to application of the optimization algorithm can improve the convergence properties of the algorithm, although the scaling procedure used cannot be theoretically validated. Introduction Minimization of a smooth objective function is customarily done using a gradient based optimization algorithm such as the Gauss-Newton (GN) method or Levenberg-Marquardt (LM) algorithm. The standard implementations of these algorithms (Tan and Kalogerakis, 1991; Wu et al., 1999; Li et al., 2003), however, require the computation of all sensitivity coefficients in order to formulate the Hessian matrix. We are interested in history matching problems where the number of data to be matched ranges from a few hundred to several thousand and the number of reservoir variables or model parameters to be estimated or simulated ranges from a few hundred to a hundred thousand or more. For the larger problems in this range, the computer resources required to compute all sensitivity coefficients would prohibit the use of the standard Gauss-Newton and Levenberg-Marquardt algorithms. Even for the smallest problems in this range, computation of all sensitivity coefficients may not be feasible as the resulting GN and LM algorithms may require the equivalent of several hundred simulation runs. The relative computational efficiency of GN, LM, nonlinear conjugate gradient and quasi-Newton methods have been discussed in some detail by Zhang and Reynolds (2002) and Zhang (2002).
APA, Harvard, Vancouver, ISO, and other styles
16

Chen, Chaohui, Guohua Gao, Ruijian Li, Richard Cao, Tianhong Chen, Jeroen C. Vink, and Paul Gelderblom. "Global-Search Distributed-Gauss-Newton Optimization Method and Its Integration With the Randomized-Maximum-Likelihood Method for Uncertainty Quantification of Reservoir Performance." SPE Journal 23, no. 05 (May 11, 2018): 1496–517. http://dx.doi.org/10.2118/182639-pa.

Full text
Abstract:
Summary Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computation cost is high. This paper presents a novel method to enhance the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and prior mean values of uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among different minimization tasks whenever these results are helping to converge to the global minimum of a specific minimization task. To improve sharing of results, we relax the accuracy of the finite-difference approximations for the gradients with more widely spaced simulation results. To avoid trapping in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we can improve the quality of RML conditional realizations that sample the approximate posterior. The proposed work flow is first validated with a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations. The computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach. The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real-production data in both the history-matching period and the blind-test period. Therefore, the new approach can enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment using production-forecasting results generated from all conditional realizations, resulting in significant business impact.
APA, Harvard, Vancouver, ISO, and other styles
17

Thoma, Andreas, Abhijith Moni, and Sridhar Ravi. "Significance of Parallel Computing on the Performance of Digital Image Correlation Algorithms in MATLAB." Designs 5, no. 1 (March 3, 2021): 15. http://dx.doi.org/10.3390/designs5010015.

Full text
Abstract:
Digital Image Correlation (DIC) is a powerful tool used to evaluate displacements and deformations in a non-intrusive manner. By comparing two images, one from the undeformed reference states of the sample and the other from the deformed target state, the relative displacement between the two states is determined. DIC is well-known and often used for post-processing analysis of in-plane displacements and deformation of the specimen. Increasing the analysis speed to enable real-time DIC analysis will be beneficial and expand the scope of this method. Here we tested several combinations of the most common DIC methods in combination with different parallelization approaches in MATLAB and evaluated their performance to determine whether the real-time analysis is possible with these methods. The effects of computing with different hardware settings were also analyzed and discussed. We found that implementation problems can reduce the efficiency of a theoretically superior algorithm, such that it becomes practically slower than a sub-optimal algorithm. The Newton–Raphson algorithm in combination with a modified particle swarm algorithm in parallel image computation was found to be most effective. This is contrary to theory, suggesting that the inverse-compositional Gauss–Newton algorithm is superior. As expected, the brute force search algorithm is the least efficient method. We also found that the correct choice of parallelization tasks is critical in attaining improvements in computing speed. A poorly chosen parallelization approach with high parallel overhead leads to inferior performance. Finally, irrespective of the computing mode, the correct choice of combinations of integer-pixel and sub-pixel search algorithms is critical for efficient analysis. The real-time analysis using DIC will be difficult on computers with standard computing capabilities, even if parallelization is implemented, so the suggested solution would be to use graphics processing unit (GPU) acceleration.
APA, Harvard, Vancouver, ISO, and other styles
18

Petrov, Petr V., and Gregory A. Newman. "Estimation of seismic source parameters in 3D elastic media using the reciprocity theorem." GEOPHYSICS 84, no. 6 (November 1, 2019): R963—R976. http://dx.doi.org/10.1190/geo2018-0283.1.

Full text
Abstract:
We have developed a novel method based upon reciprocity principles to simultaneously estimate the location of a seismic event and its source mechanism in 3D heterogeneous media. The method finds double-couple (DC) and non-DC mechanisms of microearthquakes arising from localized induced and natural seismicity. Because the method uses an exhaustive search of the 3D elastic media, it is globally convergent. It does not suffer from local minima realization observed with local optimization methods, including Newton, Gauss-Newton, or gradient-descent algorithms. The computational efficiency of our scheme is derived from the reciprocity principle, in which the number of 3D model realizations corresponds to the number of measurement receivers. The 3D forward modeling is carried out in the damped Fourier domain with a 3D finite-difference frequency-domain fourth- and second-order code developed to simulate elastic waves generated by seismic sources defined by forces and second-order moment density tensors. We evaluate the results of testing this new methodology on synthetic data for the Raft River geothermal field, Idaho, as well as determine its applicability in designing optimal borehole monitoring arrays in a fracking experiment at the Homestake Mine, South Dakota. We also find that the method proposed here can retrieve the moment tensors of the space distributed source with data arising from spatially restricted arrays with limited aperture. The effects of uncertainties on the source parameter estimation are also examined with respect to data noise and model uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
19

Nguyen, Frederic, Andreas Kemna, Tanguy Robert, and Thomas Hermans. "Data-driven selection of the minimum-gradient support parameter in time-lapse focused electric imaging." GEOPHYSICS 81, no. 1 (January 1, 2016): A1—A5. http://dx.doi.org/10.1190/geo2015-0226.1.

Full text
Abstract:
We have considered the problem of the choice of the minimum-gradient support (MGS) parameter in focused inversion for time-lapse (TL) electric resistivity tomography. Most existing approaches have relied either on an arbitrary choice of this parameter or one based on the prior information, such as the expected contrast in the TL image. We have decided to select the MGS parameter using a line search based on the value of the TL data root-mean-square misfit at the first iteration of the nonlinear inversion procedure. The latter was based on a Gauss-Newton scheme minimizing a regularized objective function in which the regularization functional was defined by the MGS functional. The regularization parameter was optimized to achieve a certain target level, following the Occam principles. We have validated our approach on a synthetic benchmark using a complex and heterogeneous model and determined its effectiveness on electric tomography TL data collected during a salt tracer experiment in fractured limestone. Our results have determined that the approach was successful in retrieving the focused anomaly and did not rely on prior information.
APA, Harvard, Vancouver, ISO, and other styles
20

Xiao, Zhuolei, Yerong Zhang, Kaixuan Zhang, Dongxu Zhao, and Guan Gui. "GARLM: Greedy Autocorrelation Retrieval Levenberg–Marquardt Algorithm for Improving Sparse Phase Retrieval." Applied Sciences 8, no. 10 (October 1, 2018): 1797. http://dx.doi.org/10.3390/app8101797.

Full text
Abstract:
The goal of phase retrieval is to recover an unknown signal from random measurements consisting of the magnitude of its Fourier transform. Due to the loss of the phase information, phase retrieval is considered an ill-posed problem. Conventional greedy algorithms, e.g., greedy sparse phase retrieval (GESPAR), were developed to solve this problem by using prior knowledge of the unknown signal. However, due to the local convergence problem of the Gauss–Newton method, especially when the residual is large, it is very difficult to use this method in GESPAR to efficiently solve the non-convex optimization problem. In order to improve the performance of the greedy algorithm, we propose an improved phase retrieval algorithm, which is called the greedy autocorrelation retrieval Levenberg–Marquardt (GARLM) algorithm. Specifically, the proposed GARLM algorithm is a local search iterative algorithm to recover the sparse signal from its Fourier transform magnitude. The proposed algorithm is preferred to existing greedy methods of phase retrieval, since at each iteration the problem of minimizing the objective function over a given support is solved by using the improved Levenberg–Marquardt (ILM) method and matrix transform. A local search procedure such as the 2-opt method is then invoked to get the optimal estimation. Simulation results are given to show that the proposed algorithm performs better than the conventional GESPAR algorithm.
APA, Harvard, Vancouver, ISO, and other styles
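For reference, a plain Levenberg–Marquardt loop with multiplicative damping adaptation is sketched below; the improved LM variant and the matrix transform used in GARLM are not reproduced. The sinusoid-fitting toy problem and the damping constants are assumptions for the sketch.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iter=100, lam=1e-2):
    """Plain Levenberg-Marquardt with multiplicative damping adaptation.

    Solves (J^T J + lam * I) dx = -J^T r; lam shrinks after accepted steps
    (behaving like Gauss-Newton) and grows after rejected ones (gradient-like).
    """
    x = np.asarray(x0, dtype=float)
    r = residual(x)
    cost = r @ r
    for _ in range(n_iter):
        J = jacobian(x)
        A = J.T @ J
        g = J.T @ r
        dx = np.linalg.solve(A + lam * np.eye(len(x)), -g)
        r_new = residual(x + dx)
        cost_new = r_new @ r_new
        if cost_new < cost:          # accept: move towards Gauss-Newton
            x, r, cost = x + dx, r_new, cost_new
            lam = max(lam * 0.3, 1e-12)
        else:                        # reject: increase damping
            lam = min(lam * 3.0, 1e12)
    return x

# Toy usage: recover frequency and phase of a sinusoid from samples.
t = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * 3.0 * t + 0.4)
res = lambda p: np.sin(2 * np.pi * p[0] * t + p[1]) - y
jac = lambda p: np.column_stack([2 * np.pi * t * np.cos(2 * np.pi * p[0] * t + p[1]),
                                 np.cos(2 * np.pi * p[0] * t + p[1])])
print(levenberg_marquardt(res, jac, [2.5, 0.0]))
```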
21

Gao, Guohua, Jeroen C. Vink, Chaohui Chen, Faruk O. Alpak, and Kuifu Du. "A Parallelized and Hybrid Data-Integration Algorithm for History Matching of Geologically Complex Reservoirs." SPE Journal 21, no. 06 (May 16, 2016): 2155–74. http://dx.doi.org/10.2118/175039-pa.

Full text
Abstract:
Summary It is extremely challenging to design effective assisted-history-matching (AHM) methods for complex geological models with discrete-facies types. One of the difficulties is the irregular and nonsmooth nature of the data-mismatch function that needs to be minimized, because of either numerical noise on simulation results or nonsmooth reparameterization. In this paper, a parallelized direct-pattern-search (DPS) approach with auto-adaptive pattern-size updating is developed to guarantee the convergence of the data-mismatch minimization, even when the objective function is nonsmooth because of numerical noise. A trust-region variant of the Gauss-Newton (GN) or quasi-Newton (QN) method is effectively combined with the noise-insensitive DPS method to enhance its performance by exploiting any available smoothness features of the objective function. The new approach is first validated by a linear toy problem and a nonlinear toy problem where artificial numerical noise is introduced. Then, it is applied to a synthetic case and a real field case for history matching of channelized-turbidite reservoirs with three facies types. The model parameters subject to AHM include principal component analysis (PCA) coefficients, which automatically reconstruct the facies indicators and permeability, porosity, and net-to-gross maps. Other matching parameters such as aquifer strength and fault transmissibility are also included. Numerical tests indicate that the hybrid algorithms perform better than the traditional QN line-search algorithms and the original Hooke-Jeeves DPS algorithm (Hooke and Jeeves 1961). The hybrid algorithms either can converge to a satisfactory solution with the same accuracy using lower cost or find a better solution with the same cost, especially for cases where adjoint derivatives are unavailable and numerical noise is unavoidable from reservoir simulation. The GN-DPS algorithm performs the best among all tested algorithms. The history-matched reservoir models obtained with the new AHM approach (GN-DPS combined with pluri-PCA) honor the production measurements with good accuracy. For both the synthetic and real cases, the history-matched reservoir models preserve geological realism.
APA, Harvard, Vancouver, ISO, and other styles
22

Lu, J., F. Bao, and Y. Zhao. "Evaluation of Pettys Nonlinear Model in Wood Permeability Measurement." Holzforschung 55, no. 1 (December 14, 2001): 82–86. http://dx.doi.org/10.1515/hf.2001.013.

Full text
Abstract:
Summary To calculate the effective radii of two conductive elements in series in wood specimens by using the gas permeability measurement, the four parameters from the curvilinear relationship of superficial specific permeability against reciprocal mean pressure as illustrated in Petty's model must be evaluated. This paper describes a detailed procedure for obtaining such parameters by using the least-squares fit calculated from a statistical analysis system (SAS) program. Three different iterative optimization algorithms and starting points were used separately to fit the Petty's nonlinear model based on the same experimental data from one specimen of birch. The estimate of the parameters: A = 35.38 darcy, B = 80.51 darcy, l = 0.19 darcy atm, m = 6.34 darcy atm was recommended for the fitted model. Compared to the results on the estimate of parameters obtained in the previous papers, this estimate for the parameters was a global minimum, thus it was a refinement and more accurate. Since the Gauss-Newton method resulted in almost the same convergence results for all the three sets of starting values with the least iterations in the evaluation, it was the preferred optimization algorithm both for simplicity and accuracy in solving the Petty's model. Because the same solutions for all three iterative optimization algorithms were obtained by using two different sets of starting points produced from the grid search, a grid search seemed to be very helpful for finding reasonable starting values for various iterative optimization techniques.
APA, Harvard, Vancouver, ISO, and other styles
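The workflow recommended in the abstract above, a grid search over starting values followed by iterative nonlinear least squares, can be sketched generically. The three-parameter model below is a stand-in curvilinear form, not Petty's actual permeability model, and the grid values and synthetic data are hypothetical.

```python
import itertools
import numpy as np
from scipy.optimize import curve_fit

def model(inv_pressure, A, B, l):
    """Stand-in curvilinear model (NOT Petty's exact formula): permeability as
    a function of reciprocal mean pressure with three parameters."""
    return A + B * inv_pressure / (1.0 + l * inv_pressure)

# Hypothetical measurements: superficial permeability vs reciprocal mean pressure.
inv_p = np.linspace(0.2, 2.0, 15)
k_obs = model(inv_p, 35.0, 80.0, 6.0) + 0.5 * np.random.default_rng(4).standard_normal(15)

# Coarse grid of starting values, as in the abstract's grid-search recommendation.
grid_A = [10.0, 50.0]
grid_B = [20.0, 100.0]
grid_l = [1.0, 10.0]

best = None
for p0 in itertools.product(grid_A, grid_B, grid_l):
    try:
        popt, _ = curve_fit(model, inv_p, k_obs, p0=p0, maxfev=2000)
    except RuntimeError:        # a bad starting point may fail to converge
        continue
    rss = np.sum((model(inv_p, *popt) - k_obs) ** 2)
    if best is None or rss < best[1]:
        best = (popt, rss)

print("best-fit parameters:", best[0], "residual sum of squares:", best[1])
```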
23

Sant’Ana, Taíla Crístia Souza, and Edson Emanoel Starteri Sampaio. "ANALYSIS OF COMPLEX APPARENT RESISTIVITY DATA CONJUGATING SPECTRAL INDUCED POLARIZATION AND ELECTROMAGNETIC COUPLING." Revista Brasileira de Geofísica 36, no. 3 (September 6, 2018): 1. http://dx.doi.org/10.22564/rbgf.v36i3.1958.

Full text
Abstract:
ABSTRACT. Spectral induced polarization stands out for providing geophysical and geological information via geoelectric parameters, making mineral discrimination possible in the scope of mineral exploration. Although it represents one of the main sources of noise in measurements of this method, electromagnetic coupling between current and potential electrodes also contributes to the understanding of the geological scenario. Thus, the most appropriate way to deal with such data is an integrated study of these two phenomena, taking into account their particularities. Forward modelling and Gauss-Newton inversion of the mutual impedance in the frequency domain provide the analysis of the complex apparent resistivity considering both spectral induced polarization and electromagnetic coupling for homogeneous and one-dimensional, non-polarizable and polarizable Earth models. Besides synthetic data, this new approach was applied to data from the Copper District of Vale do Curaçá, Bahia, Brazil. The results reveal the ability of the method to distinguish between induction, dominant at the highest frequencies, and induced polarization, which varies with depth and frequency. It also may constitute a basis for mineral discrimination with the analysis of analogous circuit parameters, a fundamental tool in the search for metallic targets in mineral exploration. Keywords: Forward Modelling, Geophysical Inversion, Electromagnetic Method, Mineral Exploration.
APA, Harvard, Vancouver, ISO, and other styles
24

Abubakar, A., M. Li, G. Pan, J. Liu, and T. M. Habashy. "Joint MT and CSEM data inversion using a multiplicative cost function approach." GEOPHYSICS 76, no. 3 (May 2011): F203—F214. http://dx.doi.org/10.1190/1.3560898.

Full text
Abstract:
We have developed an inversion algorithm for jointly inverting controlled-source electromagnetic (CSEM) data and magnetotelluric (MT) data. It is well known that CSEM and MT data provide complementary information about the subsurface resistivity distribution; hence, it is useful to derive earth resistivity models that simultaneously and consistently fit both data sets. Because we are dealing with a large-scale computational problem, one usually uses an iterative technique in which a predefined cost function is optimized. One of the issues of this simultaneous joint inversion approach is how to assign the relative weights on the CSEM and MT data in constructing the cost function. We propose a multiplicative cost function instead of the traditional additive one. This function does not require an a priori choice of the relative weights between these two data sets. It will adaptively put CSEM and MT data on equal footing in the inversion process. The inversion is accomplished with a regularized Gauss-Newton minimization scheme where the model parameters are forced to lie within their upper and lower bounds by a nonlinear transformation procedure. We use a line search scheme to enforce a reduction of the cost function at each iteration. We tested our joint inversion approach on synthetic and field data.
APA, Harvard, Vancouver, ISO, and other styles
25

Abubakar, A., T. M. Habashy, V. L. Druskin, L. Knizhnerman, and D. Alumbaugh. "2.5D forward and inverse modeling for interpreting low-frequency electromagnetic measurements." GEOPHYSICS 73, no. 4 (July 2008): F165—F177. http://dx.doi.org/10.1190/1.2937466.

Full text
Abstract:
We present 2.5D fast and rigorous forward and inversion algorithms for deep electromagnetic (EM) applications that include crosswell and controlled-source EM measurements. The forward algorithm is based on a finite-difference approach in which a multifrontal LU decomposition algorithm simulates multisource experiments at nearly the cost of simulating one single-source experiment for each frequency of operation. When the size of the linear system of equations is large, the use of this noniterative solver is impractical. Hence, we use the optimal grid technique to limit the number of unknowns in the forward problem. The inversion algorithm employs a regularized Gauss-Newton minimization approach with a multiplicative cost function. By using this multiplicative cost function, we do not need a priori data to determine the so-called regularization parameter in the optimization process, making the algorithm fully automated. The algorithm is equipped with two regularization cost functions that allow us to reconstruct either a smooth or a sharp conductivity image. To increase the robustness of the algorithm, we also constrain the minimization and use a line-search approach to guarantee the reduction of the cost function after each iteration. To demonstrate the pros and cons of the algorithm, we present synthetic and field data inversion results for crosswell and controlled-source EM measurements.
APA, Harvard, Vancouver, ISO, and other styles
26

Tett, Simon F. B., Michael J. Mineter, Coralia Cartis, Daniel J. Rowlands, and Ping Liu. "Can Top-of-Atmosphere Radiation Measurements Constrain Climate Predictions? Part I: Tuning." Journal of Climate 26, no. 23 (December 2013): 9348–66. http://dx.doi.org/10.1175/jcli-d-12-00595.1.

Full text
Abstract:
Perturbed physics configurations of version 3 of the Hadley Centre Atmosphere Model (HadAM3) driven with observed sea surface temperatures (SST) and sea ice were tuned to outgoing radiation observations using a Gauss–Newton line search optimization algorithm to adjust the model parameters. Four key parameters that previous research found affected climate sensitivity were adjusted to several different target values including two sets of observations. The observations used were the global average reflected shortwave radiation (RSR) and outgoing longwave radiation (OLR) from the Clouds and the Earth's Radiant Energy System instruments combined with observations of ocean heat content. Using the same method, configurations were also generated that were consistent with the earlier Earth Radiation Budget Experiment results. Many, though not all, tuning experiments were successful, with about 2500 configurations being generated and the changes in simulated outgoing radiation largely due to changes in clouds. Clear-sky radiation changes were small, largely due to a cancellation between changes in upper-tropospheric relative humidity and temperature. Changes in other climate variables are strongly related to changes in OLR and RSR particularly on large scales. There appears to be some equifinality with different parameter configurations producing OLR and RSR values close to observed values. These models have small differences in their climatology with the one group being similar to the standard configuration and the other group drier in the tropics and warmer everywhere.
APA, Harvard, Vancouver, ISO, and other styles
27

Lu, Jun, Yun Wang, Jingyi Chen, and Ying An. "Joint anisotropic amplitude variation with offset inversion of PP and PS seismic data." GEOPHYSICS 83, no. 2 (March 1, 2018): N31—N50. http://dx.doi.org/10.1190/geo2016-0516.1.

Full text
Abstract:
With the increase in exploration target complexity, more parameters are required to describe subsurface properties, particularly for finely stratified reservoirs with vertical transverse isotropic (VTI) features. We have developed an anisotropic amplitude variation with offset (AVO) inversion method using joint PP and PS seismic data for VTI media. Dealing with local minimum solutions is critical when using anisotropic AVO inversion because more parameters are expected to be derived. To enhance the inversion results, we adopt a hierarchical inversion strategy to solve the local minimum solution problem in the Gauss-Newton method. We perform the isotropic and anisotropic AVO inversions in two stages; however, we only use the inversion results from the first stage to form search windows for constraining the inversion in the second stage. To improve the efficiency of our method, we built stop conditions using Euclidean distance similarities to control iteration of the anisotropic AVO inversion in noisy situations. In addition, we evaluate a time-aligned amplitude variation with angle gather generation approach for our anisotropic AVO inversion using anisotropic prestack time migration. We test the proposed method on synthetic data in ideal and noisy situations, and find that the anisotropic AVO inversion method yields reasonable inversion results. Moreover, we apply our method to field data to show that it can be used to successfully identify complex lithologic and fluid information regarding fine layers in reservoirs.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Jianjian, Chao Wang, Wenjie Luo, Yuchen Zhang, Boshen Chang, and Miao Wu. "Research on Point Cloud Registering Method of Tunneling Roadway Based on 3D NDT-ICP Algorithm." Sensors 21, no. 13 (June 29, 2021): 4448. http://dx.doi.org/10.3390/s21134448.

Full text
Abstract:
In order to meet the needs of intelligent perception of the driving environment, a point cloud registering method based on 3D NDT-ICP algorithm is proposed to improve the modeling accuracy of tunneling roadway environments. Firstly, Voxel Grid filtering method is used to preprocess the point cloud of tunneling roadways to maintain the overall structure of the point cloud and reduce the number of point clouds. After that, the 3D NDT algorithm is used to solve the coordinate transformation of the point cloud in the tunneling roadway and the cell resolution of the algorithm is optimized according to the environmental features of the tunneling roadway. Finally, a kd-tree is introduced into the ICP algorithm for point pair search, and the Gauss–Newton method is used to optimize the solution of nonlinear objective function of the algorithm to complete accurate registering of tunneling roadway point clouds. The experimental results show that the 3D NDT algorithm can meet the resolution requirement when the cell resolution is set to 0.5 m under the condition of processing the point cloud with the environmental features of tunneling roadways. At this time, the registering time is the shortest. Compared with the NDT algorithm, ICP algorithm and traditional 3D NDT-ICP algorithm, the registering speed of the 3D NDT-ICP algorithm proposed in this paper is obviously improved and the registering error is smaller.
APA, Harvard, Vancouver, ISO, and other styles
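The final stage described in the abstract above, kd-tree correspondence search followed by a Gauss–Newton update of the rigid pose, can be sketched for plain point-to-point ICP as below. The NDT coarse alignment and voxel filtering steps are omitted, and the small-angle pose parameterization is an illustrative choice rather than the authors' formulation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def icp_gauss_newton(source, target, n_iter=20):
    """Point-to-point ICP: kd-tree correspondences plus a Gauss-Newton pose update.

    source, target: (N, 3) and (M, 3) arrays. Returns rotation R and translation t
    such that source @ R.T + t roughly aligns with target.
    """
    tree = cKDTree(target)                      # nearest-neighbour search structure
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = source @ R.T + t
        _, idx = tree.query(moved)              # closest target point per source point
        r = (moved - target[idx]).reshape(-1)   # stacked residuals, length 3N
        # Jacobian blocks: d r_i / d[omega, dt] = [-skew(R p_i), I]
        J = np.zeros((3 * len(source), 6))
        for i, p in enumerate(source):
            J[3*i:3*i+3, 0:3] = -skew(R @ p)
            J[3*i:3*i+3, 3:6] = np.eye(3)
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        R = Rotation.from_rotvec(delta[:3]).as_matrix() @ R
        t = t + delta[3:]
    return R, t

# Toy usage: recover a known rigid motion between two small point clouds.
rng = np.random.default_rng(5)
src = rng.uniform(-1, 1, (200, 3))
R_true = Rotation.from_rotvec([0.1, -0.2, 0.05]).as_matrix()
dst = src @ R_true.T + np.array([0.3, -0.1, 0.2])
R_est, t_est = icp_gauss_newton(src, dst)
```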
29

Sayyafzadeh, Mohammad, Manouchehr Haghighi, Keivan Bolouri, and Elaheh Arjomand. "Reservoir characterisation using artificial bee colony optimisation." APPEA Journal 52, no. 1 (2012): 115. http://dx.doi.org/10.1071/aj11009.

Full text
Abstract:
To obtain an accurate estimation of reservoir performance, the reservoir should be properly characterised. One of the main stages of reservoir characterisation is the calibration of rock property distributions with flow performance observation, which is known as history matching. The history matching procedure consists of three distinct steps: parameterisation, regularisation and optimisation. In this study, a Bayesian framework and a pilot-point approach for regularisation and parameterisation are used. The major focus of this paper is optimisation, which plays a crucial role in the reliability and quality of history matching. Several optimisation methods have been studied for history matching, including genetic algorithm (GA), ant colony, particle swarm (PS), Gauss-Newton, Levenberg-Marquardt and limited-memory Broyden-Fletcher-Goldfarb-Shanno. One of the most recent optimisation algorithms used in different fields is artificial bee colony (ABC). In this study, the application of ABC in history matching is investigated for the first time. ABC is derived from the intelligent foraging behaviour of honey bees. A colony of honey bees is comprised of employed bees, onlookers and scouts. Employed bees look for food sources based on their knowledge, onlookers make decisions for foraging using employed bees’ observations, and scouts search for food randomly. To investigate the application of ABC in history matching, its results for two different synthetic cases are compared with the outcomes of three different optimisation methods: real-valued GA, simulated annealing (SA), and pre-conditioned steepest descent (PSD). In the first case, history matching using ABC afforded a better result than GA and SA. ABC reached a lower fitness value in a reasonable number of evaluations, which indicates the performance and execution-time capability of the method. ABC did not appear as efficient as PSD in the first case. In the second case, SA and PSD did not perform acceptably. GA achieved a better result in comparison to SA and PSD, but its results were not as good as ABC's. ABC is not concerned with the shape of the landscape; that is, whether it is smooth or rugged. Since there is no precise information about the landscape shape of the history matching function, it can be concluded that by using ABC, there is a high chance of providing high-quality history matching and reservoir characterisation.
APA, Harvard, Vancouver, ISO, and other styles
30

Siripunvaraporn, Weerachai, and Gary Egbert. "An efficient data‐subspace inversion method for 2-D magnetotelluric data." GEOPHYSICS 65, no. 3 (May 2000): 791–803. http://dx.doi.org/10.1190/1.1444778.

Full text
Abstract:
There are currently three types of algorithms in use for regularized 2-D inversion of magnetotelluric (MT) data. All seek to minimize some functional which penalizes data misfit and model structure. With the most straight‐forward approach (exemplified by OCCAM), the minimization is accomplished using some variant on a linearized Gauss‐Newton approach. A second approach is to use a descent method [e.g., nonlinear conjugate gradients (NLCG)] to avoid the expense of constructing large matrices (e.g., the sensitivity matrix). Finally, approximate methods [e.g., rapid relaxation inversion (RRI)] have been developed which use cheaply computed approximations to the sensitivity matrix to search for a minimum of the penalty functional. Approximate approaches can be very fast, but in practice often fail to converge without significant expert user intervention. On the other hand, the more straightforward methods can be prohibitively expensive to use for even moderate‐size data sets. Here, we present a new and much more efficient variant on the OCCAM scheme. By expressing the solution as a linear combination of rows of the sensitivity matrix smoothed by the model covariance (the “representers”), we transform the linearized inverse problem from the M-dimensional model space to the N-dimensional data space. This method is referred to as DASOCC, the data space OCCAM’s inversion. Since generally N ≪ M, this transformation by itself can result in significant computational saving. More importantly the data space formulation suggests a simple approximate method for constructing the inverse solution. Since MT data are smooth and “redundant,” a subset of the representers is typically sufficient to form the model without significant loss of detail. Computations required for constructing sensitivities and the size of matrices to be inverted can be significantly reduced by this approximation. We refer to this inversion as REBOCC, the reduced basis OCCAM’s inversion. Numerical experiments on synthetic and real data sets with REBOCC, DASOCC, NLCG, RRI, and OCCAM show that REBOCC is faster than both DASOCC and NLCG, which are comparable in speed. All of these methods are significantly faster than OCCAM, but are not competitive with RRI. However, even with a simple synthetic data set, we could not always get RRI to converge to a reasonable solution. The basic idea behind REBOCC should be more broadly applicable, in particular to 3-D MT inversion.
APA, Harvard, Vancouver, ISO, and other styles
31

de Ruijter, W. J., and Sharma Renu. "Precise on-line Measurement of Lattice Vectors." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 530–31. http://dx.doi.org/10.1017/s0424820100181403.

Full text
Abstract:
Established methods for measurement of lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and the phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of overall magnification and periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can be measured unambiguously from HREM images. Measurement of lattice vectors is a statistical parameter estimation problem which is similar to amplitude, phase and frequency estimation of sinusoids in 1-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (2-dimensional) sinusoids. In the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First a coarse search is obtained using a Fast Fourier Transform on an image section of interest. Prior to Fourier transformation the image section is multiplied with a window, which gradually falls off to zero at the edges. The user indicates interactively the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is implemented using a bilinear interpolation, which is dependent on the window function. It is possible to refine the estimation even further using a non-linear least squares estimation. The first two steps provide the proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér–Rao lower bound). In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, as defined by the standard deviation of the distribution of measurement errors, is found to be <0.003Å measured on single crystal silicon and <0.02Å measured on small (10-30Å) specimen areas. These values are about 10 times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Obviously, the precision is restricted by geometric distortion mainly caused by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also high dynamic range (>10,000) and high resolution (1024x1024 pixels).
The geometric distortion of the projector lenses can be measured, and corrected through re-sampling of the digitized image.
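The coarse-then-fine frequency search described above can be illustrated with a short Python sketch. This is our own construction, not the authors' code: the image size, noise level, window and use of scipy's least_squares (standing in for the Gauss-Newton refinement) are all assumptions.

import numpy as np
from scipy.optimize import least_squares

# Synthetic test image: one 2-D sinusoid plus white Gaussian noise.
rng = np.random.default_rng(0)
N = 64
y, x = np.mgrid[0:N, 0:N]
f_true = np.array([0.173, 0.081])              # cycles/pixel (unknown in practice)
img = np.cos(2 * np.pi * (f_true[0] * x + f_true[1] * y)) + 0.3 * rng.standard_normal((N, N))

# Step 1: coarse search -- windowed FFT, pick the strongest non-DC peak.
win = np.hanning(N)[:, None] * np.hanning(N)[None, :]
F = np.fft.rfft2(img * win)
F[0, 0] = 0.0
ky, kx = np.unravel_index(np.argmax(np.abs(F)), F.shape)
f0 = np.array([kx / N, ky / N])                # coarse frequency estimate (one FFT bin)

# Step 3 (the interpolation of step 2 is omitted here): non-linear least-squares
# refinement of frequency, amplitude and phase, starting from the coarse estimate.
def residuals(p):
    fx, fy, a, phi = p
    model = a * np.cos(2 * np.pi * (fx * x + fy * y) + phi)
    return ((img - model) * win).ravel()

sol = least_squares(residuals, x0=[f0[0], f0[1], 1.0, 0.0])
print("coarse:", f0, "refined:", sol.x[:2])

On synthetic data the refined estimate is no longer limited to the FFT bin spacing of 1/N cycles per pixel, which is the role the non-linear refinement step plays in the procedure above.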
APA, Harvard, Vancouver, ISO, and other styles
32

Jonas, Peter. "Design criteria for geometrical calibration phantoms in fan and cone beam CT systems." Journal of Inverse and Ill-posed Problems 26, no. 6 (December 1, 2018): 729–53. http://dx.doi.org/10.1515/jiip-2017-0084.

Full text
Abstract:
Image quality in tomographic applications depends strongly on precise knowledge of the geometrical parameters of the x-ray source and detector. However, in some situations these geometrical data are not immediately available. One way to overcome this problem is to use calibration phantoms consisting of several opaque markers in a known geometry. A main question is which properties such a phantom needs in order to reliably determine the sought geometry data. In this paper we give sufficient conditions on the calibration phantom such that the reconstruction problem has a unique solution. We also use our theoretical approach to derive a numerical method which can determine the needed geometry data. Our analyses show that this numerical method is stable and that its solutions are as good as those of standard nonlinear procedures such as Gauss–Newton-type methods. Furthermore, our new algorithm is much faster than standard methods and does not depend on initial values.
APA, Harvard, Vancouver, ISO, and other styles
33

Albreem, Mahmoud A., Mohammed H. Alsharif, and Sunghwan Kim. "Impact of Stair and Diagonal Matrices in Iterative Linear Massive MIMO Uplink Detectors for 5G Wireless Networks." Symmetry 12, no. 1 (January 2, 2020): 71. http://dx.doi.org/10.3390/sym12010071.

Full text
Abstract:
In massive multiple-input multiple-output (M-MIMO) systems, a detector based on the maximum likelihood (ML) algorithm attains optimum performance, but it exhaustively searches all possible solutions; hence, its complexity is very high and its realization is impractical. Linear detectors are an alternative solution because of their low complexity and simplicity of implementation. Unfortunately, they require a matrix inversion that increases the computational complexity in highly loaded systems. Therefore, several iterative methods have been proposed to approximate or avoid the matrix inversion, such as the Neumann series (NS), Newton iteration (NI), successive over-relaxation (SOR), Gauss–Seidel (GS), Jacobi (JA), and Richardson (RI) methods. However, a detector based on iterative methods requires pre-processing and initialization, where a good initialization improves the convergence, the performance, and the complexity. Most existing iterative linear detectors use a diagonal matrix (D) for initialization because the equalization matrix is almost diagonal. This paper studies the impact of utilizing a stair matrix (S) instead of D in initializing the linear M-MIMO uplink (UL) detector. A comparison between iterative linear M-MIMO UL detectors with D and S is presented in terms of performance and computational complexity. Numerical results show that utilization of S achieves the target performance within a few iterations, and hence the computational complexity is reduced. A detector based on GS and S achieved a satisfactory bit-error rate (BER) with the lowest complexity.
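As a rough illustration of the initialization question discussed above, the following Python sketch runs Gauss-Seidel sweeps on the MMSE filtering matrix and compares a diagonal initialization with a stair-matrix one. It is not the paper's algorithm; in particular, the stair matrix is assumed here to keep the diagonal everywhere plus the sub- and super-diagonal entries of every second row, and all dimensions are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
K, Nr, sigma2 = 16, 128, 0.1
H = (rng.standard_normal((Nr, K)) + 1j * rng.standard_normal((Nr, K))) / np.sqrt(2)
x_true = rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)
y = H @ x_true + np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))

A = H.conj().T @ H + sigma2 * np.eye(K)        # MMSE equalization matrix
b = H.conj().T @ y                             # matched-filter output

def gauss_seidel(A, b, x0, iters):
    # Standard Gauss-Seidel sweeps: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
    x = x0.copy()
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Diagonal initialization: x0 = D^{-1} b.
x0_D = b / np.diag(A)

# Stair-matrix initialization: x0 = S^{-1} b, with S built from A's tridiagonal band.
S = np.diag(np.diag(A))
for i in range(1, K, 2):                       # every second row keeps its neighbours
    S[i, i - 1] = A[i, i - 1]
    if i + 1 < K:
        S[i, i + 1] = A[i, i + 1]
x0_S = np.linalg.solve(S, b)

exact = np.linalg.solve(A, b)
for name, x0 in [("D init", x0_D), ("S init", x0_S)]:
    x_hat = gauss_seidel(A, b, x0, iters=2)
    print(name, "distance to exact MMSE solution:", np.linalg.norm(x_hat - exact))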
APA, Harvard, Vancouver, ISO, and other styles
34

Rjeb, Skandar, Ahmed Hannachi, and Ratal Abdelhamid. "Hydrodynamic Investigation in an Annular Reactor with Mixing Time and Residence Time Distribution: Flow Rates Estimation with the Gauss-Newton Gradient Technique." Chemical Product and Process Modeling 6, no. 1 (July 2, 2011). http://dx.doi.org/10.2202/1934-2659.1539.

Full text
Abstract:
In this work, flow patterns within an annular chemical reactor were characterized. The reactor was modeled by a cascade of communicating Continuous Stirred Tank Reactors (CSTRs) exchanging flow rates of variable intensities. Mixing time and Residence Time Distribution measurements were used as the basis for flow modeling. A Matlab code was developed to predict the exchanged flow rates through the minimization of an objective function. This paper describes the parameter estimation technique, which is based on the Gauss-Newton method with a line search algorithm. Only two opposite flow rates between reactor compartments were assumed, and these were identified for various mixing conditions. For the studied cases, the predicted responses were close to the experimental measurements.
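A generic Gauss-Newton iteration with a backtracking line search, of the kind named in this abstract, can be sketched in a few lines of Python. The toy exponential-approach model below is only a placeholder, not the reactor model or objective function used in the paper.

import numpy as np

def gauss_newton(residual, jacobian, p0, max_iter=50, tol=1e-10):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(p), jacobian(p)
        # Gauss-Newton direction: minimize ||J dp + r|| in the least-squares sense.
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Backtracking line search on the sum of squared residuals.
        t, f0 = 1.0, r @ r
        while t > 1e-8 and np.sum(residual(p + t * dp) ** 2) > f0:
            t *= 0.5
        p = p + t * dp
        if np.linalg.norm(t * dp) < tol:
            break
    return p

# Toy usage: fit y = q1 * (1 - exp(-q2 * t)) to noisy samples.
t = np.linspace(0.1, 10, 40)
rng = np.random.default_rng(2)
y = 3.0 * (1 - np.exp(-0.7 * t)) + 0.05 * rng.standard_normal(t.size)
res = lambda p: p[0] * (1 - np.exp(-p[1] * t)) - y
jac = lambda p: np.column_stack([1 - np.exp(-p[1] * t), p[0] * t * np.exp(-p[1] * t)])
print(gauss_newton(res, jac, p0=[1.0, 1.0]))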
APA, Harvard, Vancouver, ISO, and other styles
35

Parra Oller, Isabel María, Salvador Cruz Rambaud, and María del Carmen Valls Martínez. "Discount models in intertemporal choice: an empirical analysis." European Journal of Management and Business Economics ahead-of-print, ahead-of-print (April 9, 2020). http://dx.doi.org/10.1108/ejmbe-01-2019-0003.

Full text
Abstract:
Purpose: The main purpose of this paper is to determine the discount function which best fits the individuals' preferences through the empirical analysis of the different functions used in the field of intertemporal choice. Design/methodology/approach: After an in-depth review of the existing literature, and unlike most studies which only focus on exponential and hyperbolic discounting, this manuscript compares the fit of the data to six different discount functions. To do this, the analysis is based on the usual statistical methods and on non-linear least-squares regression, using the Gauss-Newton algorithm to estimate the models' parameters; finally, the AICc method is used to compare the significance of the six proposed models. Findings: This paper shows that the so-called q-exponential function deformed by the amount is the model which best explains the individuals' preferences for both delayed gains and losses. To the extent of the authors' knowledge, this is the first time that a function different from the general hyperbola provides the best fit to the individuals' preferences. Originality/value: This paper contributes to the search for an alternative model able to explain individual behavior in a more realistic way.
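The fit-and-compare workflow sketched in this abstract (non-linear least-squares estimation of each model's parameters followed by AICc model selection) can be illustrated as follows. The two discount functions, the delay/discount-factor values and the single-parameter forms are stand-ins chosen for brevity; they are not the paper's six models or its data.

import numpy as np
from scipy.optimize import curve_fit

delay = np.array([1, 7, 30, 90, 180, 365, 730], dtype=float)       # days (illustrative)
disc = np.array([0.95, 0.90, 0.78, 0.62, 0.50, 0.38, 0.27])        # discount factors (illustrative)

def exponential(t, k):
    return np.exp(-k * t)

def hyperbolic(t, k):
    return 1.0 / (1.0 + k * t)

def aicc(y, yhat, n_par):
    # Corrected Akaike information criterion for least-squares fits.
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    aic = n * np.log(rss / n) + 2 * n_par
    return aic + 2 * n_par * (n_par + 1) / (n - n_par - 1)

for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    popt, _ = curve_fit(f, delay, disc, p0=[0.01])   # non-linear least-squares fit
    print(name, "k =", popt[0], "AICc =", aicc(disc, f(delay, *popt), n_par=1))

The model with the lowest AICc would be preferred, which is how candidate functions are compared in this kind of study.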
APA, Harvard, Vancouver, ISO, and other styles
36

Aghamiry, Hossein S., Ali Gholami, and Stéphane Operto. "On efficient frequency-domain full-waveform inversion with extended search space." GEOPHYSICS, December 20, 2020, 1–61. http://dx.doi.org/10.1190/geo2020-0478.1.

Full text
Abstract:
Efficient frequency-domain Full-Waveform Inversion (FD-FWI) of wide-aperture data is designed by limiting the inversion to a few frequencies and by solving the Helmholtz equation with a direct solver to process multiple sources efficiently. Some variants of FD-FWI, which treat the wave equation as a weak constraint, were proposed to increase the computational efficiency or extend the search space. Among them, contrast-source reconstruction inversion (CSRI) reparametrizes FD-FWI in terms of contrast sources (CS) and contrasts, and updates them in an alternating mode. This reparametrization allows a single lower-upper (LU) decomposition of the Helmholtz operator per frequency inversion, hence further improving the computational efficiency of FD-FWI. On the other hand, Iteratively-refined Wavefield Reconstruction Inversion (IR-WRI) relies on the alternating-direction method of multipliers (ADMM) to extend the search space by matching the data from the early iterations via an aggressive relaxation of the wave equation, while satisfying it at the convergence point thanks to the defect correction performed by the Lagrange multipliers. In contrast to CSRI, IR-WRI requires a new LU decomposition each time the subsurface model is updated. In both methods, the CSs or the wavefields are computed by solving, in a least-squares sense, an overdetermined linear system gathering an observation equation and a wave equation. A drawback of CSRI is that the CSs are estimated approximately with one iteration of a conjugate gradient method, while the wavefields are reconstructed exactly by IR-WRI with a Gauss-Newton method. We combine the benefits of CSRI and IR-WRI to decrease the number of LU decompositions during IR-WRI with a fixed-point (FP) algorithm while preserving its search-space extension capability. Applications to the complex 2D Marmousi and BP salt models show that our FP-based IR-WRI reconstructs these models as accurately as classical IR-WRI while reducing the number of LU factorizations considerably.
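For fixed model parameters, the overdetermined system mentioned above, which stacks an observation equation on a relaxed wave equation, reduces to a single linear least-squares solve. The sketch below only shows the structure of that solve; the matrices are random stand-ins rather than a Helmholtz discretization, and the weighting is an arbitrary assumption.

import numpy as np

rng = np.random.default_rng(3)
n, n_rec = 200, 20
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # stand-in wave-equation operator
P = np.zeros((n_rec, n))                              # sampling operator (receivers)
P[np.arange(n_rec), rng.choice(n, n_rec, replace=False)] = 1.0
s = rng.standard_normal(n)                            # source term
d = P @ np.linalg.solve(A, s)                         # synthetic "observed" data
mu = 0.1                                              # relaxation weight on the wave equation

# Least-squares wavefield reconstruction: minimize ||P u - d||^2 + mu^2 ||A u - s||^2
# by solving the corresponding normal equations.
lhs = P.T @ P + mu**2 * (A.T @ A)
rhs = P.T @ d + mu**2 * (A.T @ s)
u = np.linalg.solve(lhs, rhs)
print("data misfit:", np.linalg.norm(P @ u - d), "wave-equation misfit:", np.linalg.norm(A @ u - s))

A small mu lets the reconstructed wavefield fit the data while only loosely honouring the wave equation, which is the search-space extension idea described above.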
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Guanglu, Douglas Allaire, and Jonathan Cagan. "Taking the Guess Work Out of the Initial Guess: A Solution Interval Method for Least-Squares Parameter Estimation in Nonlinear Models." Journal of Computing and Information Science in Engineering 21, no. 2 (December 10, 2020). http://dx.doi.org/10.1115/1.4048811.

Full text
Abstract:
Fitting a specified model to data is critical in many science and engineering fields. A major task in fitting a specified model to data is to estimate the value of each parameter in the model. Iterative local methods, such as the Gauss–Newton method and the Levenberg–Marquardt method, are often employed for parameter estimation in nonlinear models. However, practitioners must guess the initial value for each parameter to initialize these iterative local methods. A poor initial guess can contribute to non-convergence of these methods or lead these methods to converge to a wrong or inferior solution. In this paper, a solution interval method is introduced to find the optimal estimator for each parameter in a nonlinear model that minimizes the squared error of the fit. To initialize this method, it is not necessary for practitioners to guess the initial value of each parameter in a nonlinear model. The method includes three algorithms that require different levels of computational power to find the optimal parameter estimators. The method constructs a solution interval for each parameter in the model. These solution intervals significantly reduce the search space for optimal parameter estimators. The method also provides an empirical probability distribution for each parameter, which is valuable for parameter uncertainty assessment. The solution interval method is validated through two case studies in which the Michaelis–Menten model and Fick’s second law are fit to experimental data sets, respectively. These case studies show that the solution interval method can find optimal parameter estimators efficiently. A four-step procedure for implementing the solution interval method in practice is also outlined.
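The initial-guess sensitivity that motivates the solution interval method is easy to probe with a standard local solver. In the hypothetical sketch below, the Michaelis-Menten model is fitted with the Levenberg-Marquardt method from two different starting points; the data values and starting points are made up for illustration, so the relative quality of the two returned fits should simply be read off the printed costs.

import numpy as np
from scipy.optimize import least_squares

S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])        # substrate concentration (illustrative)
v = np.array([0.09, 0.16, 0.30, 0.44, 0.57, 0.68, 0.73])  # measured reaction rate (illustrative)

def resid(p):
    vmax, km = p
    return vmax * S / (km + S) - v

for p0 in ([1.0, 1.0], [1e-3, 1e3]):                       # a reasonable and an extreme start
    sol = least_squares(resid, p0, method="lm")            # Levenberg-Marquardt local search
    print("start", p0, "-> estimate", sol.x, "cost", sol.cost)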
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Yousheng, Vahid Yaghoubi, Andreas Linderholt, and Thomas J. S. Abrahamsson. "Informative Data for Model Calibration of Locally Nonlinear Structures Based on Multiharmonic Frequency Responses." Journal of Computational and Nonlinear Dynamics 11, no. 5 (June 2, 2016). http://dx.doi.org/10.1115/1.4033608.

Full text
Abstract:
In industry, linear finite element (FE) models commonly serve as baseline models to represent the global structural dynamics behavior. However, available test data may show evidence of significant nonlinear characteristics. In such a case, the baseline linear model may be insufficient to represent the dynamics of the structure. The causes of the nonlinear characteristics may be local in nature and the remaining parts of the structure may be satisfactorily represented by linear descriptions. Although the baseline model can then serve as a good foundation, the physical phenomena needed to substantially increase the model's capability of representing the real structure are most likely not modeled in it. Therefore, a set of candidate parameters to control the nonlinear effects have to be added and subjected to calibration to form a credible model. An overparameterized model for calibration may result in parameter value estimates that do not survive a validation test. The parameterization is coupled to the test data and should be chosen so that the expected covariance matrix of the parameter estimates is made small. Accurate test data, suitable for calibration, is often obtained from sinusoidal testing. Because a pure monosinusoidal excitation is difficult to achieve during a physical test of a nonlinear structure, a multisinusoidal excitation is here designed. In this paper, synthetic test data from a model of a nonlinear benchmark structure are used for illustration. The steady-state solutions of the nonlinear system are found using the multiharmonic balance (MHB) method. The steady-state responses at the side frequencies are shown to contain valuable information for the calibration process that can improve the accuracy of the parameters' estimates. The model calibration and the associated κ-fold cross-validation are based on the Levenberg–Marquardt and the undamped Gauss–Newton algorithms, respectively. Starting seed candidates for calibration are found by the Latin hypercube sampling method. The candidate that gives the smallest deviation to test data is selected as a starting point for the iterative search for a calibration solution. The calibration result shows good agreement with the true parameter setting, and the κ-fold cross-validation result shows that the variances of the estimated parameters shrink when multiharmonic nonlinear frequency response functions (FRFs) are included in the data used for calibration.
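The seeding strategy described above (Latin hypercube candidates, best candidate kept as the starting point for the iterative search) can be sketched as follows. The objective, parameter ranges and sample size are placeholders, not the benchmark structure's calibration problem.

import numpy as np
from scipy.optimize import least_squares
from scipy.stats import qmc

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
data = np.sin(2 * np.pi * 3.0 * t) * np.exp(-1.5 * t)      # stand-in "test data"

def resid(p):
    freq, decay = p
    return np.sin(2 * np.pi * freq * t) * np.exp(-decay * t) - data

# Draw candidate starting points with Latin hypercube sampling over assumed bounds.
lower, upper = np.array([0.5, 0.1]), np.array([10.0, 5.0])
sampler = qmc.LatinHypercube(d=2, seed=rng)
candidates = qmc.scale(sampler.random(n=30), lower, upper)

# Keep the candidate with the smallest deviation from the data as the seed,
# then run the local (Levenberg-Marquardt) calibration from it.
best = min(candidates, key=lambda p: np.sum(resid(p) ** 2))
sol = least_squares(resid, best, method="lm")
print("seed:", best, "calibrated parameters:", sol.x)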
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Fan, Wei He, Tao Chen, Xiaochu Luo, and Yongchang Fu. "An Electroscope System Based on Electric Field Measurement Method for UHV Transmission Lines." International Journal of Emerging Electric Power Systems 10, no. 2 (April 23, 2009). http://dx.doi.org/10.2202/1553-779x.2154.

Full text
Abstract:
The paper describes an electroscope system based on an electric field measurement method for checking the electrification state of ultra-high-voltage transmission lines. The system is composed of three parts: 1) a measuring terminal; 2) a central server; 3) a GPRS and Internet network. The measuring terminal measures the electric field and the location of the measuring points, sends the measured data to the central server over the GPRS and Internet network, and requests an electrification-state confirmation. When the server receives a request from a terminal, it first obtains the electric fields and locations of the measuring points; then, according to the location of the measuring points, it searches the database for the corresponding transmission lines and reads their parameters. According to the parameters of the measuring points and transmission lines, a calculation is carried out to confirm the electrification state of the transmission lines. For the confirmation calculation, equations for the electric field inverse problem of the transmission lines are set up first; then global regularization and a damped Gauss–Newton (DGN) method are used to solve the inverse problem. A 500 kV double-loop transmission line was taken as an example to verify the validity of this method. The electric field and location of 11 measuring points were first measured by the measuring terminal and then sent to the central server, which confirmed the electrification state.
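A damped Gauss-Newton (DGN) iteration of the kind named above can be sketched generically as follows; the toy exponential-decay model stands in for the transmission-line field equations, and the paper's global regularization is not reproduced.

import numpy as np

def damped_gauss_newton(residual, jacobian, p0, lam=1.0, max_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(p), jacobian(p)
        # Damped normal equations: (J^T J + lam * I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, lam = p + dp, lam * 0.5          # accept the step, relax the damping
        else:
            lam *= 2.0                          # reject the step, increase the damping
        if np.linalg.norm(dp) < 1e-12:
            break
    return p

# Toy usage: recover (a, b) in y = a * exp(-b * x) from noisy samples.
x = np.linspace(0, 4, 30)
rng = np.random.default_rng(5)
y = 2.0 * np.exp(-0.9 * x) + 0.02 * rng.standard_normal(x.size)
resid = lambda p: p[0] * np.exp(-p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])
print(damped_gauss_newton(resid, jac, p0=[1.0, 0.1]))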
APA, Harvard, Vancouver, ISO, and other styles
