
Journal articles on the topic 'Implicit gradient reconstruction'

Consult the top 19 journal articles for your research on the topic 'Implicit gradient reconstruction.'


1

Singh, Manish K., N. Munikrishna, V. Ramesh, and N. Balakrishnan. "Implicit gradient reconstruction (IGR) method for compressible flow simulation." Journal of Physics: Conference Series 822 (April 11, 2017): 012030. http://dx.doi.org/10.1088/1742-6596/822/1/012030.

2

Dahlke, Taylor, Biondo Biondi, and Robert Clapp. "Applied 3D salt body reconstruction using shape optimization with level sets." GEOPHYSICS 85, no. 5 (September 1, 2020): R437–R446. http://dx.doi.org/10.1190/geo2019-0352.1.

Abstract:
As oil and gas extraction becomes more advanced, deep-water exploration becomes increasingly focused on imaging near or under complex salt geology, which necessitates detailed velocity models with strong contrast interfaces. These interfaces can be elegantly tracked using the level sets of an implicit surface. One can invert for the velocity model that best fits the recorded data in a full-waveform inversion (FWI) style objective function by reparameterizing the model in terms of an implicit surface representation of the salt interface. With this parameterization of the FWI objective function, we find the Hessian and solve a conjugate gradient system for the Newton step at every nonlinear iteration. We sparsify the representation of the implicit surface using radial basis functions, which can hasten convergence of the inner inversion by reducing the number of model parameters. We have developed a guided inversion approach that embeds information about the certainty of different salt boundary regions by the initialization of the implicit surface slope at the salt interface. This can help guide the inversion away from perceived local minima. The results of testing this inversion workflow on a 3D Gulf of Mexico data set show that it can be a useful tool for refining salt models because the initial and final seismic images show clearer and more consistent features below the updated salt area.
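
An illustrative aside on the level-set parameterization described above: the sketch below (a minimal example under assumed RBF centres, weights, widths, and velocities, not the authors' inversion code) builds an implicit function from a few radial basis functions, takes its zero contour as a stand-in salt interface, and maps its sign to a two-valued velocity model.

```python
# Minimal sketch: an implicit salt interface as the zero level set of an RBF expansion.
# All centres, weights, and velocity values below are hypothetical.
import numpy as np

def rbf_level_set(grid_x, grid_z, centers, weights, sigma=0.5):
    """Evaluate phi(x,z) = sum_i w_i * exp(-r_i^2 / (2 sigma^2)) on a 2D grid."""
    X, Z = np.meshgrid(grid_x, grid_z, indexing="ij")
    phi = np.zeros_like(X)
    for (cx, cz), w in zip(centers, weights):
        phi += w * np.exp(-((X - cx) ** 2 + (Z - cz) ** 2) / (2.0 * sigma ** 2))
    return phi

x = np.linspace(0.0, 4.0, 81)          # horizontal coordinate (km)
z = np.linspace(0.0, 2.0, 41)          # depth coordinate (km)
centers = [(2.0, 1.0), (2.6, 1.2), (1.4, 0.9)]
weights = [1.0, 0.6, 0.5]
phi = rbf_level_set(x, z, centers, weights) - 0.4   # shift so the zero contour is the interface

velocity = np.where(phi > 0.0, 4500.0, 2200.0)      # salt vs. sediment velocity (m/s)
print("salt fraction:", (phi > 0.0).mean(), " mean velocity:", velocity.mean())
```

In a level-set inversion of this kind, the RBF weights (and possibly the centres) become the unknowns updated by the optimizer instead of the velocity grid itself.
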
3

Boscheri, Walter, Maurizio Tavelli, and Nicola Paoluzzi. "High order Finite Difference/Discontinuous Galerkin schemes for the incompressible Navier-Stokes equations with implicit viscosity." Communications in Applied and Industrial Mathematics 13, no. 1 (January 1, 2022): 21–38. http://dx.doi.org/10.2478/caim-2022-0003.

Abstract:
In this work we propose a novel numerical method for the solution of the incompressible Navier-Stokes equations on Cartesian meshes in 3D. The semi-discrete scheme is based on an explicit discretization of the nonlinear convective flux tensor and an implicit treatment of the pressure gradient and viscous terms. In this way, the momentum equation is formally substituted into the divergence-free constraint, thus obtaining an elliptic equation on the pressure which eventually maintains at the discrete level the involution on the divergence of the velocity field imposed by the governing equations. This makes our method belong to the class of so-called structure-preserving schemes. High order of accuracy in space is achieved using an efficient CWENO reconstruction operator that is exploited to devise a conservative finite difference scheme for the convective terms. Implicit central finite differences are used to remove the numerical dissipation in the pressure gradient discretization. To avoid the severe time step limitation induced by the viscous eigenvalues related to the parabolic terms in the governing equations, we propose to devise an implicit local discontinuous Galerkin (DG) solver. The resulting viscous sub-system is symmetric and positive definite, therefore it can be efficiently solved with the aid of a matrix-free conjugate gradient method. High order in time is granted by a semi-implicit IMEX time stepping technique. Convergence rates up to third order of accuracy in space and time are proven, and a suite of academic benchmarks is shown in order to demonstrate the robustness and the validity of the novel schemes, especially in the context of high viscosity coefficients.
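
As a side note on the matrix-free conjugate gradient solve mentioned in the abstract, the sketch below is a generic illustration (not the paper's solver): it applies conjugate gradients to a symmetric positive definite 1D Poisson system while touching the operator only through a function that returns its action; the grid size and right-hand side are arbitrary choices.

```python
# Matrix-free conjugate gradient: the SPD operator is supplied as a callable, never assembled.
import numpy as np

def cg(apply_A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A given only the action x -> A x."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Assumed test operator: 1D Laplacian with homogeneous Dirichlet ends (central differences).
n, h = 100, 1.0 / 101
def apply_laplacian(u):
    Au = np.empty_like(u)
    Au[0] = (2 * u[0] - u[1]) / h**2
    Au[-1] = (2 * u[-1] - u[-2]) / h**2
    Au[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return Au

b = np.ones(n)
u = cg(apply_laplacian, b)
print("residual norm:", np.linalg.norm(apply_laplacian(u) - b))
```
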
4

Chen, Chong, and Guoliang Xu. "Gradient-flow-based semi-implicit finite-element method and its convergence analysis for image reconstruction." Inverse Problems 28, no. 3 (February 24, 2012): 035006. http://dx.doi.org/10.1088/0266-5611/28/3/035006.

5

Zhu, Xiangyuan, Kehua Guo, Hui Fang, Rui Ding, Zheng Wu, and Gerald Schaefer. "Gradient-Based Graph Attention for Scene Text Image Super-resolution." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3861–69. http://dx.doi.org/10.1609/aaai.v37i3.25499.

Abstract:
Scene text image super-resolution (STISR) in the wild has been shown to be beneficial to support improved vision-based text recognition from low-resolution imagery. An intuitive way to enhance STISR performance is to explore the well-structured and repetitive layout characteristics of text and exploit these as prior knowledge to guide model convergence. In this paper, we propose a novel gradient-based graph attention method to embed patch-wise text layout contexts into image feature representations for high-resolution text image reconstruction in an implicit and elegant manner. We introduce a non-local group-wise attention module to extract text features which are then enhanced by a cascaded channel attention module and a novel gradient-based graph attention module in order to obtain more effective representations by exploring correlations of regional and local patch-wise text layout properties. Extensive experiments on the benchmark TextZoom dataset convincingly demonstrate that our method supports excellent text recognition and outperforms the current state-of-the-art in STISR. The source code is available at https://github.com/xyzhu1/TSAN.
6

Strauss, Thilo, and Taufiquar Khan. "Implicit Solutions of the Electrical Impedance Tomography Inverse Problem in the Continuous Domain with Deep Neural Networks." Entropy 25, no. 3 (March 13, 2023): 493. http://dx.doi.org/10.3390/e25030493.

Abstract:
Electrical impedance tomography (EIT) is a non-invasive imaging modality used for estimating the conductivity of an object Ω from boundary electrode measurements. In recent years, researchers achieved substantial progress in analytical and numerical methods for the EIT inverse problem. Despite the success, numerical instability is still a major hurdle due to many factors, including the discretization error of the problem. Furthermore, most algorithms with good performance are relatively time-consuming and do not allow real-time applications. In our approach, the goal is to separate the unknown conductivity into two regions, namely the region of homogeneous background conductivity and the region of non-homogeneous conductivity. Therefore, we pose and solve the problem of shape reconstruction using machine learning. We propose a novel and simple yet intriguing neural network architecture capable of solving the EIT inverse problem. It addresses previous difficulties, including instability, and is easily adaptable to other ill-posed coefficient inverse problems. That is, the proposed model estimates, for each point of the continuous space R^d ∩ Ω with d ∈ {2,3}, the probability that the conductivity belongs to the background region or to the non-homogeneous region. The proposed model does not make assumptions about the forward model and allows for solving the inverse problem in real time. The proposed machine learning approach for shape reconstruction is also used to improve gradient-based methods for estimating the unknown conductivity. In this paper, we propose a piece-wise constant reconstruction method that is novel in the inverse problem setting but inspired by recent approaches from the 3D vision community. We also extend this method into a novel constrained reconstruction method. We present extensive numerical experiments to show the performance of the architecture and compare the proposed method with previous analytic algorithms, mainly the monotonicity-based shape reconstruction algorithm and iteratively regularized Gauss–Newton method.
7

Zang, Miao, Huimin Xu, and Yongmei Zhang. "Kernel-Based Multiview Joint Sparse Coding for Image Annotation." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6727105.

Abstract:
Automatic image annotation remains a challenging task due to the semantic gap between visual features and semantic concepts. To reduce the gap, this paper puts forward a kernel-based multiview joint sparse coding (KMVJSC) framework for image annotation. In KMVJSC, different visual features as well as label information are considered as distinct views and are mapped to an implicit kernel space, in which the original nonlinearly separable data become linearly separable. Then, all the views are integrated into a multiview joint sparse coding framework aiming to find a set of optimal sparse representations and discriminative dictionaries adaptively, which can effectively employ the complementary information of different views. An optimization algorithm is presented by extending the K-singular value decomposition (KSVD) and accelerated proximal gradient (APG) algorithms to the kernel multiview framework. In addition, a label propagation scheme using sparse reconstruction and a weighted greedy label transfer algorithm is also proposed. Comparative experiments on three datasets have demonstrated the competitiveness of the proposed approach compared with other related methods.
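
For readers unfamiliar with accelerated proximal gradient methods in sparse coding, the sketch below shows the basic (unaccelerated) proximal gradient iteration, ISTA, on a plain l1-regularized coding problem with a random dictionary; it is a generic illustration of the building block, not the kernelized multiview algorithm of the paper.

```python
# ISTA sketch: solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by gradient steps + soft thresholding.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.05, n_iter=300):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms (assumed setup)
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
y = D @ a_true
a_hat = ista(D, y)
print("coefficients above 1e-3:", int(np.count_nonzero(np.abs(a_hat) > 1e-3)))
```

The accelerated variant (APG/FISTA) adds a momentum extrapolation step between iterations but keeps the same gradient-plus-proximal structure.
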
8

Bates, Oscar, Lluis Guasch, George Strong, Thomas Caradoc Robins, Oscar Calderon-Agudo, Carlos Cueto, Javier Cudeiro, and Mengxing Tang. "A probabilistic approach to tomography and adjoint state methods, with an application to full waveform inversion in medical ultrasound." Inverse Problems 38, no. 4 (March 14, 2022): 045008. http://dx.doi.org/10.1088/1361-6420/ac55ee.

Abstract:
Bayesian methods are a popular research direction for inverse problems. There are a variety of techniques available to solve Bayes’ equation, each with their own strengths and limitations. Here, we discuss stochastic variational inference (SVI), which solves Bayes’ equation using gradient-based methods. This is important for applications which are time-limited (e.g. medical tomography) or where solving the forward problem is expensive (e.g. adjoint methods). To evaluate the use of SVI in both these contexts, we apply it to ultrasound tomography of the brain using full-waveform inversion (FWI). FWI is a computationally expensive adjoint method for solving the ultrasound tomography inverse problem, and we demonstrate that SVI can be used to find a no-cost estimate of the pixel-wise variance of the sound-speed distribution using a mean-field Gaussian approximation. In other words, we show experimentally that it is possible to estimate the pixel-wise uncertainty of the sound-speed reconstruction using SVI and a common approximation which is already implicit in other types of iterative reconstruction. Uncertainty estimates have a variety of uses in adjoint methods and tomography. As an illustrative example, we focus on the use of uncertainty for image quality assessment. This application is not limiting; our variance estimator has effectively no computational cost and we expect that it will have applications in fields such as non-destructive testing or aircraft component design where uncertainties may not be routinely estimated.
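
The mean-field Gaussian idea above can be shown on a toy problem. The sketch below (an assumed one-dimensional Gaussian target, learning rate, and sample count; not the paper's FWI workflow) fits a mean-field Gaussian by stochastic variational inference with the reparameterization trick and then reads off the variance estimate.

```python
# SVI sketch: fit q(z) = N(mu, exp(rho)^2) to an unnormalized Gaussian target by
# reparameterized stochastic gradient ascent on the ELBO.
import numpy as np

m_true, s_true = 2.0, 0.7                      # assumed target: log p(z) = -(z-m)^2/(2 s^2) + const

def dlogp_dz(z):
    return -(z - m_true) / s_true**2

rng = np.random.default_rng(1)
mu, rho = 0.0, 0.0                             # variational parameters (rho = log std)
lr, n_samples = 0.05, 32
for _ in range(2000):
    eps = rng.standard_normal(n_samples)
    z = mu + np.exp(rho) * eps                 # reparameterization trick
    g = dlogp_dz(z)
    grad_mu = g.mean()                                  # d ELBO / d mu
    grad_rho = (g * eps * np.exp(rho)).mean() + 1.0     # plus analytic entropy gradient
    mu += lr * grad_mu
    rho += lr * grad_rho

print("fitted mean:", round(mu, 2), " fitted std:", round(float(np.exp(rho)), 2))
```

In an imaging setting the same diagonal (pixel-wise) Gaussian approximation yields one variance per pixel, which is the kind of uncertainty map the abstract refers to.
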
9

Bi, Sheng, Jianzhong Zhou, Yi Liu, and Lixiang Song. "A Finite Volume Method for Modeling Shallow Flows with Wet-Dry Fronts on Adaptive Cartesian Grids." Mathematical Problems in Engineering 2014 (2014): 1–20. http://dx.doi.org/10.1155/2014/209562.

Abstract:
A second-order accurate, Godunov-type upwind finite volume method on dynamic refinement grids is developed in this paper for solving shallow-water equations. The advantage of this grid system is that no data structure is needed to store the neighbor information, since neighbors are directly specified by simple algebraic relationships. The key ingredient of the scheme is the use of the prebalanced shallow-water equations together with a simple but effective method to track the wet/dry fronts. In addition, second-order accuracy in space and time is achieved using a two-step unsplit MUSCL-Hancock method, and a weighted surface-depth gradient method (WSDM) that considers the local Froude number is proposed for water depth reconstruction. The friction terms are solved by a semi-implicit scheme that can effectively prevent computational instability from small depths and does not invert the direction of velocity components. Several benchmark tests and a dam-break flooding simulation over real topography are used for model testing and validation. Results show that the proposed model is accurate and robust and has advantages when it is applied to simulate flow with local complex topographic features or flow conditions and thus has bright prospects of field-scale application.
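
To illustrate the slope-limited reconstruction step that such MUSCL-type schemes rely on, the sketch below performs a minmod-limited reconstruction of face states from 1D cell averages on a dam-break-like depth profile; it is a generic textbook building block, not the paper's weighted surface-depth gradient method.

```python
# MUSCL-style reconstruction sketch: limited slopes give left/right states at cell faces.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_faces(q):
    """Return left/right states at interior faces i+1/2 from cell averages q."""
    dq_left = np.diff(q, prepend=q[0])          # q_i - q_{i-1} (copied boundary value)
    dq_right = np.diff(q, append=q[-1])         # q_{i+1} - q_i
    slope = minmod(dq_left, dq_right)           # limited slope in each cell
    qL = (q + 0.5 * slope)[:-1]                 # state just left of face i+1/2
    qR = (q - 0.5 * slope)[1:]                  # state just right of face i+1/2
    return qL, qR

h = np.where(np.arange(20) < 10, 2.0, 0.5)      # hypothetical dam-break water depth (m)
hL, hR = muscl_faces(h)
print("states at the discontinuity:", hL[9], hR[9])
```

Note that the limiter returns a zero slope in the cells adjacent to the jump, so the reconstruction stays monotone there.
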
10

PADOVANI, E., E. PRIOLO, and G. SERIANI. "LOW AND HIGH ORDER FINITE ELEMENT METHOD: EXPERIENCE IN SEISMIC MODELING." Journal of Computational Acoustics 02, no. 04 (December 1994): 371–422. http://dx.doi.org/10.1142/s0218396x94000233.

Abstract:
The finite element method (FEM) is a numerical technique well suited to solving problems of elastic wave propagation in complex geometries and heterogeneous media. The main advantages are that very irregular grids can be used, free surface boundary conditions can be easily taken into account, irregular surface topography can be reconstructed well, and complex geometries, such as curved, dipping and rough interfaces, intrusions, cusps, and holes can be defined. The main drawbacks of the classical approach are the need for a large amount of memory, low computational efficiency, and the possible appearance of spurious effects. In this paper we describe some experience in improving the computational efficiency of a finite element code based on a global approach, and used for seismic modeling in geophysical oil exploration. Results from the use of different methods and models run on a mini-superworkstation APOLLO DN10000 are reported and compared. With Chebyshev spectral elements, great accuracy can be reached with almost no numerical artifacts. Static condensation of the spectral element's internal nodes dramatically reduces memory requirements and CPU time. Time integration performed with the classical implicit Newmark scheme is very accurate but not very efficient. Due to the high sparsity of the matrices, the use of compressed storage is shown to greatly reduce not only memory requirements but also computing time. The operation which most affects the performance is the matrix-by-vector product; an effective programming of this subroutine for the storage technique used is decisive. The conjugate gradient method preconditioned by incomplete Cholesky factorization provides, in general, a good compromise between efficiency and memory requirements. Spectral elements greatly increase its efficiency, since the number of iterations is reduced. The most efficient and accurate method is a hybrid iterative-direct solution of the linear system arising from the static condensation of high order elements. The size of 2D models that can be handled in a reasonable time on this kind of computer is nowadays hardly sufficient, and significant 3D modeling is completely unfeasible. However, the introduction of new FEM algorithms coupled with the use of new computer architectures is encouraging for the future.
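
As a small illustration of preconditioned conjugate gradients of the kind discussed above, the sketch below solves a sparse symmetric positive definite system with SciPy, using an incomplete LU factorization as a readily available stand-in for the incomplete Cholesky preconditioner mentioned in the abstract; the 1D Laplacian test matrix is an assumption made purely for the example.

```python
# Preconditioned CG sketch: sparse SPD system, incomplete factorization as preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # 1D Laplacian stencil (SPD)

ilu = spla.spilu(A, drop_tol=1e-4)                          # incomplete factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)           # preconditioner action

b = np.ones(n)
x, info = spla.cg(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```
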
11

Du, Tao, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. "DiffPD: Differentiable Projective Dynamics." ACM Transactions on Graphics 41, no. 2 (April 30, 2022): 1–21. http://dx.doi.org/10.1145/3490168.

Abstract:
We present a novel, fast differentiable simulator for soft-body learning and control applications. Existing differentiable soft-body simulators can be classified into two categories based on their time integration methods: Simulators using explicit timestepping schemes require tiny timesteps to avoid numerical instabilities in gradient computation, and simulators using implicit time integration typically compute gradients by employing the adjoint method and solving the expensive linearized dynamics. Inspired by Projective Dynamics (PD), we present Differentiable Projective Dynamics (DiffPD), an efficient differentiable soft-body simulator based on PD with implicit time integration. The key idea in DiffPD is to speed up backpropagation by exploiting the prefactorized Cholesky decomposition in forward PD simulation. In terms of contact handling, DiffPD supports two types of contacts: a penalty-based model describing contact and friction forces and a complementarity-based model enforcing non-penetration conditions and static friction. We evaluate the performance of DiffPD and observe it is 4–19 times faster compared with the standard Newton’s method in various applications including system identification, inverse design problems, trajectory optimization, and closed-loop control. We also apply DiffPD in a reality-to-simulation (real-to-sim) example with contact and collisions and show its capability of reconstructing a digital twin of real-world scenes.
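
The "prefactorize once, reuse many times" idea that DiffPD exploits can be illustrated in isolation: the sketch below (a generic stand-in system, not DiffPD's projective dynamics matrices) computes one Cholesky factorization of an SPD matrix and reuses it for several right-hand sides, as one would for repeated forward and adjoint/backpropagation solves.

```python
# Prefactorized Cholesky sketch: pay the O(n^3) factorization once, then each solve is O(n^2).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # a generic SPD matrix standing in for the system matrix

factor = cho_factor(A)               # factorize once

for _ in range(5):                   # reuse for many right-hand sides
    b = rng.standard_normal(n)
    x = cho_solve(factor, b)
    assert np.allclose(A @ x, b)
print("five solves reused one factorization")
```
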
12

Murthy, J. Y., and S. R. Mathur. "Computation of Anisotropic Conduction Using Unstructured Meshes." Journal of Heat Transfer 120, no. 3 (August 1, 1998): 583–91. http://dx.doi.org/10.1115/1.2824315.

Abstract:
This paper presents a conservative finite volume scheme for computing conduction heat transfer in materials with anisotropic conductivity. Unstructured solution-adaptive meshes composed of arbitrary convex polyhedra are used. Discrete energy balances are written over these polyhedra. Temperature gradients required for the evaluation of secondary diffusion fluxes are found by linear reconstruction. A fully implicit scheme is used for unsteady problems. The resulting discrete equations are solved using an algebraic multigrid scheme. Schemes for hanging-node and conformal adaption are implemented. Computations are performed using a variety of triangular and quadrilateral meshes. The results are compared to published analytical and numerical solutions and are shown to be satisfactory.
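
The "linear reconstruction" of temperature gradients mentioned above is commonly done with a least-squares fit over neighbouring cell centroids. The sketch below shows that generic building block on made-up centroid positions and a linear field; it is an illustration of the idea, not the paper's discretization.

```python
# Least-squares gradient reconstruction at a cell centroid from scattered neighbours.
import numpy as np

def ls_gradient(xc, phic, xn, phin):
    """Estimate grad(phi) at centroid xc from neighbour centroids xn and values phin."""
    d = xn - xc                     # displacement vectors to neighbours, shape (k, 2)
    rhs = phin - phic               # value differences, shape (k,)
    grad, *_ = np.linalg.lstsq(d, rhs, rcond=None)
    return grad

# Hypothetical cell with four irregular neighbours; phi = 3x + 2y is recovered exactly.
xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.2], [-0.8, 0.5], [0.1, -1.1], [0.6, 0.9]])
phi = lambda p: 3.0 * p[..., 0] + 2.0 * p[..., 1]
print(ls_gradient(xc, phi(xc), xn, phi(xn)))    # ~ [3. 2.]
```

Because the fit is exact for linear fields, the reconstruction is consistent on arbitrary polyhedral stencils, which is what the evaluation of secondary diffusion fluxes requires.
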
13

Zhong, Jiaping, Yunsong Li, Weiying Xie, Jie Lei, and Xiuping Jia. "Multi-Prior Twin Least-Square Network for Anomaly Detection of Hyperspectral Imagery." Remote Sensing 14, no. 12 (June 15, 2022): 2859. http://dx.doi.org/10.3390/rs14122859.

Abstract:
Anomaly detection of hyperspectral imagery (HSI) identifies the very few samples that do not conform to an intricate background without priors. Despite the extensive success of hyperspectral interpretation techniques based on generative adversarial networks (GANs), applying trained GAN models to hyperspectral anomaly detection remains promising but challenging. Previous generative models can accurately learn the complex background distribution of HSI and typically convert the high-dimensional data back to the latent space to extract features to detect anomalies. However, both background modeling and feature-extraction methods can be improved to become ideal in terms of the modeling power and reconstruction consistency capability. In this work, we present a multi-prior-based network (MPN) to incorporate the well-trained GANs as effective priors to a general anomaly-detection task. In particular, we introduce multi-scale covariance maps (MCMs) of precise second-order statistics to construct multi-scale priors. The MCM strategy implicitly bridges the spectral- and spatial-specific information and fully represents multi-scale, enhanced information. Thus, we reliably and adaptively estimate the HSI label to alleviate the problem of insufficient priors. Moreover, the twin least-square loss is imposed to improve the generative ability and training stability in feature and image domains, as well as to overcome the gradient vanishing problem. Last but not least, the network, enforced with a new anomaly rejection loss, establishes a pure and discriminative background estimation.
14

Koos, Krisztian, Begüm Peksel, and Lóránd Kelemen. "Phase Measurement Using DIC Microscopy." Acta Cybernetica 23, no. 2 (2017): 629–43. http://dx.doi.org/10.14232/actacyb.23.2.2017.12.

Abstract:
The development of fluorescent probes and proteins has helped make light microscopy more popular by allowing the visualization of specific subcellular components, location and dynamics of biomolecules. However, it is not always feasible to label the cells as it may be phototoxic or perturb their functionalities. Label-free microscopy techniques allow us to work with live cells without perturbation and to evaluate morphological differences, which in turn can provide useful information for high-throughput assays. In this study, we use one of the most popular label-free techniques called differential interference contrast (DIC) microscopy to estimate the phase of cells and other nearly transparent objects and instantly estimate their height. DIC images provide detailed information about the optical path length (OPL) differences in the sample and they are visually similar to a gradient image. Our previous DIC construction algorithm outputs an image where the values are proportional to the OPL (or implicitly the phase) of the sample. Although the reconstructed images are capable of describing cellular morphology and to a certain extent turn DIC into a quantitative technique, the actual OPL has to be computed from the input DIC image and the microscope calibration settings. Here we propose a computational method to measure the phase and approximate height of cells after microscope calibration, assuming a linear formation model. After a calibration step the phase of further samples can be determined when the refractive indices of the sample and the surrounding medium are known. The precision of the method is demonstrated on reconstructing the thickness of known objects and real cellular samples.
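
The final conversion step described above, turning an optical path length map into phase and physical height, follows from the standard relations OPL = (n_sample - n_medium) * height and phase = 2 * pi * OPL / wavelength. The sketch below applies them to a toy OPL map; the wavelength and refractive indices are assumed values for illustration only.

```python
# Convert an optical path length (OPL) map into phase (radians) and height (nm).
import numpy as np

def opl_to_phase_and_height(opl_nm, wavelength_nm=550.0, n_sample=1.38, n_medium=1.33):
    phase = 2.0 * np.pi * opl_nm / wavelength_nm       # phase delay in radians
    height_nm = opl_nm / (n_sample - n_medium)         # OPL = (n_sample - n_medium) * height
    return phase, height_nm

opl = np.array([[0.0, 25.0], [50.0, 100.0]])           # toy OPL map in nanometres
phase, height = opl_to_phase_and_height(opl)
print(np.round(phase, 2))
print(np.round(height, 1))
```
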
15

Studholme, Joshua, and Sergey Gulev. "Concurrent Changes to Hadley Circulation and the Meridional Distribution of Tropical Cyclones." Journal of Climate 31, no. 11 (May 7, 2018): 4367–89. http://dx.doi.org/10.1175/jcli-d-17-0852.1.

Abstract:
Poleward trends in seasonal-mean latitudes of tropical cyclones (TCs) have been identified in direct observations from 1980 to the present. Paleoclimate reconstructions also indicate poleward–equatorward migrations over centennial–millennial time scales. Hadley circulation (HC) is often both implicitly and explicitly invoked to provide dynamical linkages to these shifts, although no direct analysis of concurrent changes in the recent period has been presented. Here, the observational TC record (1981–2016) and ERA-Interim, JRA-55, and MERRA-2 are studied to examine potential relationships between the two. A zonally asymmetric HC is defined by employing Helmholtz theory for vector decomposition, and this permits the derivation of novel HC diagnostics local to TC basins. Coherent variations in both long-term linear trends and detrended interannual variability are found. TC genesis and lifetime maximum intensity latitudes share trend sign and magnitude with shifts in local HC extent, with rates being approximately 0.25° ± 0.1° lat decade⁻¹. Both these life cycle stages in hemispheric means and all Pacific TC basins, as well as poleward-extreme North Atlantic lysis latitudes, shared approximately 35% of their interannual variability with HC extent. Local HC intensity is linked only to eastern North Pacific TC latitudes, where strong local overturning corresponds to equatorward TC shifts. Examination of potential dynamical linkages implicates La Niña–like sea surface temperature gradients to poleward HC termini. This corresponds to increased tropical and reduced subtropical vertical wind shear everywhere except in the North Atlantic and western North Pacific, where the opposite is true. These results quantify a long-hypothesized link between TCs and the large-scale oceanic–atmospheric state.
16

Jin, Hu, and George A. McMechan. "Fast acoustic velocity tomography of focusing operators." GEOPHYSICS 79, no. 4 (July 1, 2014): R121–R131. http://dx.doi.org/10.1190/geo2013-0389.1.

Abstract:
A 2D velocity model was estimated by tomographic imaging of overlapping focusing operators that contain one-way traveltimes, from common-focus points to receivers in an aperture along the earth’s surface. The stability and efficiency of convergence and the quality of the resulting models were improved by a sequence of ideas. We used a hybrid parameterization that has an underlying grid, upon which is superimposed a flexible, pseudolayer model. We first solved for the low-wavenumber parts of the model (approximating it as constant-velocity pseudo layers), then we allowed intermediate wavenumbers (allowing the layers to have linear velocity gradients), and finally did unconstrained iterations to add the highest wavenumber details. Layer boundaries were implicitly defined by focus points that align along virtual marker (reflector) horizons. Each focus point sampled an area bounded by the first and last rays in the data aperture at the surface; this reduced the amount of computation and the size of the effective null space of the solution. Model updates were performed simultaneously for the velocities and the local focus point positions in two steps; local estimates were performed independently by amplitude semblance for each focusing operator within its area of dependence, followed by a tomographic weighting of the local estimates into a global solution for each grid point, subject to the constraints of the parameterization used at that iteration. The system of tomographic equations was solved by simultaneous iterative reconstruction, which is equivalent to a least-squares solution, but it does not involve a matrix inversion. The algorithm was successfully applied to synthetic data for a salt dome model using a constant-velocity starting model; after a total of 25 iterations, the velocity error was [Formula: see text] and the final mean focal point position error was [Formula: see text] wavelength.
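
The simultaneous iterative reconstruction step referred to above can be written in a few lines for a generic linear tomography system. The sketch below is an illustrative SIRT-style iteration on a small random system; the matrix, data, and iteration count are assumptions, not the authors' focusing-operator tomography.

```python
# SIRT-style iteration: x <- x + C * A^T * R * (b - A x), with row/column-sum scalings,
# so no matrix inversion is ever formed.
import numpy as np

def sirt(A, b, n_iter=2000):
    R = 1.0 / A.sum(axis=1)                 # inverse row sums
    C = 1.0 / A.sum(axis=0)                 # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(2)
A = rng.random((60, 20)) + 0.01             # hypothetical nonnegative sensitivity matrix
m_true = rng.random(20)
t = A @ m_true                              # synthetic "traveltimes"
m = sirt(A, t)
print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```
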
17

Karkalousos, Dimitrios, Samantha Noteboom, Hanneke E. Hulst, Frans M. Vos, and Matthan W. A. Caan. "Assessment of data consistency through cascades of independently recurrent inference machines for fast and robust accelerated MRI reconstruction." Physics in Medicine & Biology, May 4, 2022. http://dx.doi.org/10.1088/1361-6560/ac6cc2.

Abstract:
Objective: Machine Learning methods can learn how to reconstruct Magnetic Resonance Images (MRI) and thereby accelerate acquisition, which is of paramount importance to the clinical workflow. Physics-informed networks incorporate the forward model of accelerated MRI reconstruction in the learning process. With increasing network complexity, robustness is not ensured when reconstructing data unseen during training. We aim to embed data consistency (DC) in deep networks while balancing the degree of network complexity. While doing so, we will assess whether either explicit or implicit enforcement of DC in varying network architectures is preferred to optimize performance. Approach: We propose a scheme called Cascades of Independently Recurrent Inference Machines (CIRIM) to assess DC through unrolled optimization. Herein we assess DC both implicitly by gradient descent and explicitly by a designed term. Extensive comparison of the CIRIM to Compressed Sensing as well as other Machine Learning methods is performed: the End-to-End Variational Network (E2EVN), CascadeNet, KIKINet, LPDNet, RIM, IRIM, and UNet. Models were trained and evaluated on T1-weighted and FLAIR contrast brain data, and T2-weighted knee data. Both 1D and 2D undersampling patterns were evaluated. Robustness was tested by reconstructing 7.5x prospectively undersampled 3D FLAIR MRI data of Multiple Sclerosis (MS) patients with white matter lesions. Main results: The CIRIM performed best when implicitly enforcing DC, while the E2EVN required an explicit DC formulation. Through its cascades, the CIRIM was able to score higher on Structural Similarity and PSNR compared to other methods, in particular under heterogeneous imaging conditions. In reconstructing MS patient data, prospectively acquired with a sampling pattern unseen during model training, the CIRIM maintained lesion contrast while efficiently denoising the images. Significance: The CIRIM showed highly promising generalization capabilities maintaining a very fair trade-off between reconstructed image quality and fast reconstruction times, which is crucial in the clinical workflow.
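
The implicit data-consistency mechanism described above amounts to gradient steps on the k-space misfit. As a hedged, single-coil toy illustration (a random image, a random 30% sampling mask, and an orthonormal FFT as the forward operator; not the CIRIM itself, which wraps this step inside learned recurrent units), the sketch below repeatedly applies that gradient update.

```python
# Data-consistency gradient step for undersampled MRI: one step on 0.5*||M F x - y||^2.
import numpy as np

def dc_gradient_step(x, y, mask, eta=1.0):
    kspace = mask * np.fft.fft2(x, norm="ortho")
    grad = np.fft.ifft2(mask * (kspace - y), norm="ortho")   # F^H M^H (M F x - y)
    return x - eta * grad

rng = np.random.default_rng(3)
img = rng.random((64, 64))                          # toy "ground truth" image
mask = (rng.random((64, 64)) < 0.3).astype(float)   # hypothetical 30% random sampling
y = mask * np.fft.fft2(img, norm="ortho")           # undersampled measurements

x = np.zeros((64, 64), dtype=complex)
for _ in range(50):
    x = dc_gradient_step(x, y, mask)
print("k-space data residual:", np.linalg.norm(mask * np.fft.fft2(x, norm="ortho") - y))
```

In a learned reconstruction this gradient step is interleaved with network updates that fill in the unsampled frequencies; on its own it only enforces agreement with the measured k-space samples.
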
18

Torres, Kristian, and Mauricio Sacchi. "Least-squares reverse time migration via deep learning-based updating operators." GEOPHYSICS, August 4, 2022, 1–80. http://dx.doi.org/10.1190/geo2021-0491.1.

Abstract:
Two common issues of least-squares reverse time migration (LSRTM) consist of the many iterations required to produce substantial subsurface imaging improvements and the difficulty of choosing adequate regularization strategies with optimal hyperparameters. We investigate how supervised learning can mitigate these shortcomings by solving the LSRTM problem through an iterative deep learning framework inspired by the projected gradient descent algorithm. In particular, we develop an image-to-image approach interlacing the gradient steps at each iteration with blocks of residual convolutional neural networks (CNN) that capture the prior information in the training phase. By including the least-squares data-misfit gradient into the learning process, we force the solution to comply with the observed seismic data, while the CNN projections implicitly account for the regularization effects that lead to high-resolution reflectivity updates. After training with nine hundred randomly generated instances, our network ensemble can estimate accurate reflectivity distributions in only a few iterations. To demonstrate the effectiveness and generalization properties of the method, we consider three synthetic cases: a folded and faulted model, the Marmousi model, and the Sigsbee2a model. We empirically show that it is possible to obtain an improved reflectivity model for out-of-distribution instances by using the learned reconstructions as warm-starts for the conjugate gradient algorithm, bridging the gap between learned and conventional LSRTM schemes. Finally, we apply the proposed network with transfer learning on a 2D towed-streamer Gulf of Mexico field data set, producing high-resolution images comparable to traditional LSRTM but drastically reducing the required number of iterations.
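
The projected-gradient structure described above, a least-squares data-misfit gradient step followed by a projection, can be sketched generically. In the toy example below the learned CNN projector is replaced by a simple nonnegativity clip, and the modelling operator is a random matrix; both are assumptions for illustration, not the authors' LSRTM operators or network.

```python
# Projected gradient descent sketch: gradient step on ||A m - d||^2, then a projection.
import numpy as np

def pgd(A, d_obs, project, n_iter=2000, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the Lipschitz constant
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ m - d_obs)             # least-squares data-misfit gradient
        m = project(m - step * grad)             # projection stands in for regularization
    return m

rng = np.random.default_rng(4)
A = rng.standard_normal((120, 40))               # hypothetical linearized modelling operator
m_true = np.maximum(rng.standard_normal(40), 0)  # nonnegative "reflectivity-like" model
d = A @ m_true
m_est = pgd(A, d, project=lambda x: np.maximum(x, 0.0))
print("model error:", np.linalg.norm(m_est - m_true))
```
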
19

Dobramysl, Ulrich, and David Holcman. "Computational methods and diffusion theory in triangulation sensing to model neuronal navigation." Reports on Progress in Physics, September 8, 2022. http://dx.doi.org/10.1088/1361-6633/ac906b.

Abstract:
Computational methods are now recognized as powerful and complementary approaches in various applied sciences such as biology. These computing methods are used to explore the gap between scales such as the one between molecular and cellular. Here we present recent progress in the development of computational approaches involving diffusion modeling, asymptotic analysis of the model partial differential equations, hybrid methods and simulations in the generic context of cell sensing and guidance via external gradients. Specifically, we highlight the reconstruction of the location of a point source in two and three dimensions from the steady-state diffusion fluxes arriving to narrow windows located on the cell. We discuss cases in which these windows are located on the boundary of a two-dimensional plane or three-dimensional half-space, on a disk in free space or inside a two-dimensional corridor, or a ball in three dimensions. The basis of this computational approach is explicit solutions of the Neumann-Green's function for the mentioned geometry. This analysis can be used to design hybrid simulations where Brownian paths are generated only in small regions in which the local spatial organization is relevant. Particle trajectories outside of this region are only implicitly treated by generating exit points at the boundary of this domain of interest. This greatly accelerates the simulation time by avoiding the explicit computation of Brownian paths in an infinite domain and serves to generate statistics, without following all trajectories at the same time, a process that can become numerically expensive quickly. Moreover, these computational approaches are used to reconstruct a point source and to estimate the uncertainty in the source reconstruction due to an additive noise perturbation present in the fluxes. We also discuss the influence of various window configurations (clustered vs. uniform distributions) on recovering the source position. Finally, the applications in developmental biology are formulated into computational principles that could underlie neuronal navigation in the brain.