Journal articles on the topic 'Interface regularization'

Consult the top 50 journal articles for your research on the topic 'Interface regularization.'

1

Liu, Ji-Chuan. "Shape Reconstruction of Conductivity Interface Problems." International Journal of Computational Methods 16, no. 01 (November 21, 2018): 1850092. http://dx.doi.org/10.1142/s0219876218500925.

Abstract:
In this paper, we consider a conductivity interface problem to recover the salient features of the inclusion within a body from noisy observation data on the boundary. Based on integral equations, we propose iterative algorithms to detect the location, the size and the shape of the conductivity inclusion. This problem is severely ill-posed and nonlinear, thus we should consider regularization techniques in order to improve the corresponding approximation. We give several examples to show the viability of our proposed reconstruction algorithms.
2

Pop, Iuliu Sorin, and Ben Schweizer. "Regularization Schemes for Degenerate Richards Equations and Outflow Conditions." Mathematical Models and Methods in Applied Sciences 21, no. 08 (August 2011): 1685–712. http://dx.doi.org/10.1142/s0218202511005532.

Abstract:
We analyze regularization schemes for the Richards equation and a time discrete numerical approximation. The original equations can be doubly degenerate, therefore they may exhibit fast and slow diffusion. In addition, we treat outflow conditions that model an interface separating the porous medium from a free flow domain. In both situations we provide a regularization with a non-degenerate equation and standard boundary conditions, and discuss the convergence rates of the approximations.
3

Conde Mones, José Julio, Emmanuel Roberto Estrada Aguayo, José Jacobo Oliveros Oliveros, Carlos Arturo Hernández Gracidas, and María Monserrat Morín Castillo. "Stable Identification of Sources Located on Interface of Nonhomogeneous Media." Mathematics 9, no. 16 (August 13, 2021): 1932. http://dx.doi.org/10.3390/math9161932.

Abstract:
This paper presents a stable method for identifying sources located on the separation interface of two homogeneous media (one contained within the other) from the measurements those sources produce on the exterior boundary of the media. The problem is ill-posed because it is numerically unstable, i.e., minimal errors in the measurements can produce significant changes in the solution. To obtain the proposed stable method, the identification problem is split into three subproblems, two of which are numerically unstable and require regularization to be solved stably. To manage the numerical instability due to the ill-posedness of these subproblems, Tikhonov regularization and sequential smoothing methods are used. We illustrate the methodology on a circular and an irregular region to demonstrate the feasibility of the proposed method, which yields convergent and stable solutions for input data with and without noise.
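Since the stabilization in this method rests on Tikhonov regularization, the following minimal sketch shows the generic zeroth-order Tikhonov step for a discretized linear subproblem; the matrix A, data b, and parameter alpha are illustrative placeholders rather than quantities from the paper.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min_x ||A x - b||^2 + alpha^2 ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Toy usage: a mildly ill-conditioned system stabilized by a small alpha.
A = np.vander(np.linspace(0.0, 1.0, 40), 8)
b = A @ np.ones(8) + 1e-3 * np.random.randn(40)
x_reg = tikhonov_solve(A, b, alpha=1e-2)
```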
4

Du, Jian, Robert D. Guy, Aaron L. Fogelson, Grady B. Wright, and James P. Keener. "An Interface-Capturing Regularization Method for Solving the Equations for Two-Fluid Mixtures." Communications in Computational Physics 14, no. 5 (November 2013): 1322–46. http://dx.doi.org/10.4208/cicp.180512.210313a.

Abstract:
Many problems in biology involve gels, which are mixtures composed of a polymer network permeated by a fluid solvent (water). The two-fluid model is a widely used approach to describe gel mechanics, in which both network and solvent coexist at each point of space and their relative abundance is described by their volume fractions. Each phase is modeled as a continuum with its own velocity and constitutive law. In some biological applications, free boundaries separate regions of gel and regions of pure solvent, resulting in a degenerate network momentum equation where the network volume fraction vanishes. To overcome this difficulty, we develop a regularization method to solve the two-phase gel equations when the volume fraction of one phase goes to zero in part of the computational domain. A small and constant network volume fraction is temporarily added throughout the domain in setting up the discrete linear equations, and the same set of equations is solved everywhere. These equations are very poorly conditioned for small values of the regularization parameter, but the multigrid-preconditioned GMRES method we use to solve them is efficient and produces an accurate solution of these equations for the full range of relevant regularization parameter values.
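As a toy illustration of this regularization idea, the sketch below adds a small constant to a coefficient that vanishes on half of a one-dimensional domain, so the same discrete operator can be assembled and solved everywhere; an ILU-preconditioned GMRES solve stands in for the paper's multigrid-preconditioned GMRES, and all sizes and values are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Toy degenerate coefficient theta(x): zero on half the domain, lifted by a small eps.
n, eps = 200, 1e-4
x = np.linspace(0.0, 1.0, n)
theta = np.where(x < 0.5, 0.0, 1.0) + eps

# Standard 3-point discretization of -(theta u')' with homogeneous Dirichlet ends.
A = diags([-theta[1:-1], theta[:-1] + theta[1:], -theta[1:-1]], [-1, 0, 1]).tocsc()
b = np.ones(n - 1)

# ILU-preconditioned GMRES (a stand-in for the multigrid preconditioner of the paper).
ilu = spilu(A)
M = LinearOperator(A.shape, ilu.solve)
u, info = gmres(A, b, M=M)   # info == 0 indicates convergence
```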
5

Torabi, Solmaz, John Lowengrub, Axel Voigt, and Steven Wise. "A new phase-field model for strongly anisotropic systems." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 465, no. 2105 (January 13, 2009): 1337–59. http://dx.doi.org/10.1098/rspa.2008.0385.

Abstract:
We present a new phase-field model for strongly anisotropic crystal and epitaxial growth using regularized, anisotropic Cahn–Hilliard-type equations. Such problems arise during the growth and coarsening of thin films. When the anisotropic surface energy is sufficiently strong, sharp corners form and unregularized anisotropic Cahn–Hilliard equations become ill-posed. Our models contain a high-order Willmore regularization, where the square of the mean curvature is added to the energy, to remove the ill-posedness. The regularized equations are sixth order in space. A key feature of our approach is the development of a new formulation in which the interface thickness is independent of crystallographic orientation. Using the method of matched asymptotic expansions, we show the convergence of our phase-field model to the general sharp-interface model. We present two- and three-dimensional numerical results using an adaptive, nonlinear multigrid finite-difference method. We find excellent agreement between the dynamics of the new phase-field model and the sharp-interface model. The computed equilibrium shapes using the new model also match a recently developed analytical sharp-interface theory that describes the rounding of the sharp corners by the Willmore regularization.
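Schematically, the Willmore regularization described here augments the anisotropic surface energy with the squared mean curvature of the interface; in sharp-interface form (generic symbols, not the paper's notation):

$$
\mathcal{E}_{\mathrm{reg}} \;=\; \int_{\Gamma}\gamma(\mathbf{n})\,\mathrm{d}s \;+\; \frac{\beta}{2}\int_{\Gamma}\kappa^{2}\,\mathrm{d}s ,
$$

where γ(n) is the orientation-dependent surface energy, κ the mean curvature, and β a small regularization constant. The curvature term rounds off the corners where the unregularized problem becomes ill-posed, and its diffuse-interface counterpart is what makes the regularized Cahn–Hilliard equations sixth order in space.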
6

Liu, Ruohan, and Chunping Ren. "A Novel Combined Regularization for Identifying Random Load Sources on Coal-Rock Structure." Mathematical Problems in Engineering 2023 (April 14, 2023): 1–11. http://dx.doi.org/10.1155/2023/5883003.

Abstract:
This study discusses the practical engineering problem of determining random load sources on coal-rock structures. A novel combined regularization technique, which couples the mollification method (MM) with discrete regularization (DR) and is referred to as the MM-DR technique, is proposed to reconstruct random load sources on coal-rock structures. The MM-DR technique is compared with DR, Tikhonov regularization (TR), and maximum entropy regularization (MER) for load reconstruction. The results show that the reconstructed random load sources are more consistent with the real load sources when the MM-DR technique is combined with particle swarm optimization (PSO) and the L-curve method, referred to as the PSO-L method, and that selecting the optimal value of the kernel-function parameter ω helps overcome the ill-posedness of random load source reconstruction and yields stable, approximate solutions. The method proposed in this study provides a theoretical basis for the recognition of the coal-rock interface.
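The L-curve criterion used in the PSO-L method picks the regularization parameter at the corner of the curve of solution norm versus residual norm. A minimal sketch for a generic Tikhonov-type problem follows; the matrix A, data b, candidate parameters, and the crude maximum-curvature corner pick are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def l_curve(A, b, alphas):
    """Residual and solution norms of Tikhonov solutions over candidate parameters."""
    n = A.shape[1]
    rho, eta = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a**2 * np.eye(n), A.T @ b)
        rho.append(np.linalg.norm(A @ x - b))
        eta.append(np.linalg.norm(x))
    return np.array(rho), np.array(eta)

def l_curve_corner(alphas, rho, eta):
    """Crude corner pick: largest curvature magnitude of (log rho, log eta)."""
    lr, le = np.log(rho), np.log(eta)
    dr, de = np.gradient(lr), np.gradient(le)
    ddr, dde = np.gradient(dr), np.gradient(de)
    kappa = np.abs(dr * dde - ddr * de) / (dr**2 + de**2) ** 1.5
    return alphas[int(np.argmax(kappa))]

# Usage: alphas = np.logspace(-6, 1, 50); rho, eta = l_curve(A, b, alphas)
#        best_alpha = l_curve_corner(alphas, rho, eta)
```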
7

Loghin, Daniel. "Preconditioned Dirichlet-Dirichlet Methods for Optimal Control of Elliptic PDE." Analele Universitatii "Ovidius" Constanta - Seria Matematica 26, no. 2 (July 1, 2018): 175–92. http://dx.doi.org/10.2478/auom-2018-0024.

Abstract:
The discretization of optimal control problems for elliptic partial differential equations yields optimality conditions in the form of large sparse linear systems with block structure. Correspondingly, when the solution method is a Dirichlet-Dirichlet non-overlapping domain decomposition method, we need to solve interface problems which inherit the block structure. It is therefore natural to consider block preconditioners acting on the interface variables for the acceleration of Krylov methods with substructuring preconditioners. In this paper we describe a generic technique which employs a preconditioner block structure based on the fractional Sobolev norms corresponding to the domains of the boundary operators arising in the matrix interface problem, some of which may include a dependence on the control regularization parameter. We illustrate our approach on standard linear elliptic control problems. We present analysis which shows that the resulting iterative method converges independently of the size of the problem. We include numerical results which indicate that performance is also independent of the control regularization parameter and exhibits only a mild dependence on the number of subdomains.
8

Bosch, Miguel, Penny Barton, Satish C. Singh, and Immo Trinks. "Inversion of traveltime data under a statistical model for seismic velocities and layer interfaces." GEOPHYSICS 70, no. 4 (July 2005): R33—R43. http://dx.doi.org/10.1190/1.1993712.

Abstract:
We invert large-aperture seismic reflection and refraction data from a geologically complex area on the northeast Atlantic margin to jointly estimate seismic velocities and depths of major interfaces. Our approach combines this geophysical data information with prior information on seismic compressional velocities and the structural interpretation of seismic sections. We constrain expected seismic velocities in the prior model using information from well logs from a nearby area. The layered structure and prior positions of the interfaces follow information from the seismic section obtained by processing the short offsets. Instead of using a conventional regularization technique to smooth the interface-velocity model, we describe the spatial correlation of interfaces and velocities with a geostatistical model, using a multivariate Gaussian probability density function. We impose a covariance function on the velocity field in each layer and on each interface in the model to control the smoothness of the solution. The inversion is performed by minimizing an objective function with two terms, one term measuring traveltime residuals and the other measuring the fit to the statistical model. We calculate the posterior uncertainties and evaluate the relative influence of data and the prior model on estimated interface depths and seismic velocities. The method results in the estimation of velocity and interface geometry beneath a basaltic sill system down to 7 km depth. This method aims to enhance the interpretation process by combining multidisciplinary information in a quantitative model-based approach.
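The two-term objective function described above has the generic geostatistical form below, with a traveltime misfit weighted by the data covariance and a prior misfit weighted by the model covariance assembled from the per-layer velocity and interface covariance functions (the symbols are generic, not the authors' notation):

$$
J(\mathbf{m}) \;=\; \tfrac{1}{2}\big(\mathbf{t}(\mathbf{m})-\mathbf{t}_{\mathrm{obs}}\big)^{\!\top}\mathbf{C}_{d}^{-1}\big(\mathbf{t}(\mathbf{m})-\mathbf{t}_{\mathrm{obs}}\big)
\;+\;\tfrac{1}{2}\big(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\big)^{\!\top}\mathbf{C}_{m}^{-1}\big(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\big),
$$

where m collects the layer velocities and interface depths and t(m) denotes the predicted traveltimes.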
9

Chaboche, J. L., F. Feyel, and Y. Monerie. "Interface debonding models: a viscous regularization with a limited rate dependency." International Journal of Solids and Structures 38, no. 18 (May 2001): 3127–60. http://dx.doi.org/10.1016/s0020-7683(00)00053-6.

10

Rani, Pooja, Rajneesh Kumar, and Anurag Jain. "An Intelligent System for Heart Disease Diagnosis using Regularized Deep Neural Network." Journal of Applied Research and Technology 21, no. 1 (February 27, 2023): 87–97. http://dx.doi.org/10.22201/icat.24486736e.2023.21.1.1544.

Abstract:
Heart disease is one of the deadliest diseases, and timely detection can prevent mortality. In this paper, an intelligent system is proposed for diagnosing heart disease at an early stage using clinical parameters. The system is developed using a Regularized Deep Neural Network model (Reg-DNN) trained on the Cleveland heart disease dataset. Regularization is achieved by combining dropout with L2 regularization. The efficiency of Reg-DNN was evaluated using the hold-out validation method: 70% of the data was used for training the model and 30% for testing it. Results indicate that Reg-DNN performs better than a conventional DNN, with regularization helping to overcome overfitting; Reg-DNN achieved an accuracy of 94.79%, which is quite promising compared with existing systems in the literature. The authors also developed a graphical user interface, so the system can easily be used to diagnose heart disease from clinical parameters.
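As a hedged illustration of the two regularizers named above, the sketch below combines dropout layers with an L2 weight penalty (via weight_decay) in a small feed-forward classifier; the use of PyTorch, the layer sizes, and the rates are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

# 13 inputs roughly matches the Cleveland clinical features; all sizes are illustrative.
model = nn.Sequential(
    nn.Linear(13, 64), nn.ReLU(), nn.Dropout(p=0.5),   # dropout regularization
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(32, 1), nn.Sigmoid(),
)

# weight_decay applies an L2 penalty to the weights during optimization.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.BCELoss()
```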
11

Wheeler, A. A. "Phase-field theory of edges in an anisotropic crystal." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 462, no. 2075 (May 30, 2006): 3363–84. http://dx.doi.org/10.1098/rspa.2006.1721.

Abstract:
In the presence of sufficiently strong surface energy anisotropy, the equilibrium shape of an isothermal crystal may include corners or edges. Models of edges have, to date, involved the regularization of the corresponding free-boundary problem resulting in equilibrium shapes with smoothed out edges. In this paper, we take a new approach and consider how a phase-field model, which provides a diffuse description of an interface, can be extended to the consideration of edges by an appropriate regularization of the underlying mathematical model. Using the method of matched asymptotic expansions, we develop an approximate solution which corresponds to a smoothed out edge from which we are able to determine the associated edge energy.
12

Wautelet, Gaetan, Luc Papeleux, and Jean Philippe Ponthot. "The Influence of Equivalent Contact Area Computation in 3D Extended Node to Surface Contact Elements." Key Engineering Materials 681 (February 2016): 19–46. http://dx.doi.org/10.4028/www.scientific.net/kem.681.19.

Abstract:
This paper extends the frictionless penalty-based node to surface contact formulation with area regularization to a 3D framework. Based on our previous work [1], which focused on axisymmetric modeling, two computational methods are considered for determining the slave node area. The first, referred to as the geometrical approach, is based on a force equivalence system, while the second, referred to as the consistent approach, is derived from a more sophisticated scheme built upon the virtual work principle. The extended contact elements are then derived for the contact formulations with geometrical and consistent area regularization, and a consistent linearization is provided accordingly, which guarantees a quadratic rate of convergence of the global Newton-Raphson iterative procedure. Finally, two numerical examples assess the performance of both contact formulations with area regularization and demonstrate the robustness and efficiency of the node to surface contact formulation with consistent area regularization in reproducing a constant contact pressure distribution across the interface between a deformable body and an analytically defined rigid body, irrespective of the mesh. Our findings will encourage further developments towards the design of a penalty-based node to surface contact algorithm that passes the contact patch test, as has already been done successfully for 2D contact problems [2].
13

Wang, Yihao, and Jie Zhang. "Joint refraction traveltime tomography and migration for multilayer near-surface imaging." GEOPHYSICS 84, no. 6 (November 1, 2019): U31—U43. http://dx.doi.org/10.1190/geo2018-0737.1.

Abstract:
In near-surface velocity structure estimation, first-arrival traveltime tomography tends to produce a smooth velocity model. If the shallow structures include a weathering layer over high-velocity bedrock, first-arrival traveltime tomography may fail to recover the sharp interface. However, with the same traveltime data, refraction traveltime migration proves to be an effective tool for accurately mapping the refractor. The approach downward continues the refraction traveltime curves and produces an image (position) of the refractor for a given overburden velocity model. We first assess the validity of the refraction traveltime migration method and analyze its uncertainties with a simple model. We then develop a multilayer refraction traveltime migration method and apply the migration image to constrain traveltime tomographic inversion by imposing discontinuities at the refraction interfaces in model regularization. In each subsequent iteration, the shape of the migrated refractors and the velocity model are simultaneously updated. The synthetic tests indicate that the joint inversion method performs better than the conventional first-arrival traveltime tomography method with Tikhonov regularization and the delay-time method in reconstructing near-surface models with high-velocity contrasts. In application to field data, this method produces a more accurately resolved velocity model, which improves the quality of common midpoint stacking by making long-wavelength static corrections.
14

Dai, Qianwei, Hao Zhang, and Bin Zhang. "An Improved Particle Swarm Optimization Based on Total Variation Regularization and Projection Constraint with Applications in Ground-Penetrating Radar Inversion: A Model Simulation Study." Remote Sensing 13, no. 13 (June 27, 2021): 2514. http://dx.doi.org/10.3390/rs13132514.

Abstract:
The chaos oscillation particle swarm optimization (COPSO) algorithm is prone to being trapped in local optima when dealing with certain complex models in ground-penetrating radar (GPR) data inversion, because it inherently suffers from premature convergence, high computational costs, and extremely slow convergence, especially in the middle and later periods of iterative inversion. Considering that the bilateral connections between different particle positions can improve both the algorithmic searching efficiency and the convergence performance, we first develop a fast single-trace-based approach to construct an initial model for 2-D PSO inversion and then propose a TV-regularization-based improved PSO (TVIPSO) algorithm that employs total variation (TV) regularization as a constraint technique to adaptively update the positions of particles. By adding new velocity variation and optimal step size matrices, the search range of the random particles in the solution space can be significantly reduced, so that blind searching is avoided. By introducing constraint-oriented regularization to allow the optimization search to move out of inaccurate regions, the premature convergence and blurring problems can be mitigated to further guarantee inversion accuracy and efficiency. We report on three inversion experiments involving multilayered and fluctuating terrain models and a typical complicated inner-interface model to demonstrate the performance of the proposed algorithm. The results for the fluctuating terrain model show that, compared with the COPSO algorithm, the fitness error (MAE) of the TVIPSO algorithm is reduced from 2.3715 to 1.0921, while for the complicated inner-interface model the fitness error (MARE) of the TVIPSO algorithm is reduced from 1.9539 to 1.5674.
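Total variation regularization, used above to constrain the particle updates, penalizes the summed magnitude of model gradients; a standard discrete form for a 2-D model m, with a small constant β keeping the term differentiable (generic notation, not the authors'), is

$$
\mathrm{TV}(\mathbf{m}) \;=\; \sum_{i,j}\sqrt{(m_{i+1,j}-m_{i,j})^{2}+(m_{i,j+1}-m_{i,j})^{2}+\beta^{2}}.
$$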
15

Kuilekov, M., M. Ziolkowski, and H. Brauer. "Regularization technique applied to the reconstruction of the interface between two conducting fluids." International Journal of Applied Electromagnetics and Mechanics 26, no. 3-4 (August 30, 2007): 257–64. http://dx.doi.org/10.3233/jae-2007-916.

16

Meyer, C., and I. Yousept. "Regularization of state-constrained elliptic optimal control problems with nonlocal radiation interface conditions." Computational Optimization and Applications 44, no. 2 (December 4, 2007): 183–212. http://dx.doi.org/10.1007/s10589-007-9151-8.

17

Shao, Yuanzhen, Mark McGowan, Siwen Wang, Emil Alexov, and Shan Zhao. "Convergence of a diffuse interface Poisson-Boltzmann (PB) model to the sharp interface PB model: A unified regularization formulation." Applied Mathematics and Computation 436 (January 2023): 127501. http://dx.doi.org/10.1016/j.amc.2022.127501.

18

Holm, Darryl D., Lennon Ó. Náraigh, and Cesare Tronci. "A geometric diffuse-interface method for droplet spreading." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2233 (January 2020): 20190222. http://dx.doi.org/10.1098/rspa.2019.0222.

Abstract:
This paper exploits the theory of geometric gradient flows to introduce an alternative regularization of the thin-film equation valid in the case of large-scale droplet spreading—the geometric diffuse-interface method. The method possesses some advantages when compared with the existing models of droplet spreading, namely the slip model, the precursor-film method and the diffuse-interface model. These advantages are discussed and a case is made for using the geometric diffuse-interface method for the purpose of numerical simulations. The mathematical solutions of the geometric diffuse interface method are explored via such numerical simulations for the simple and well-studied case of large-scale droplet spreading for a perfectly wetting fluid—we demonstrate that the new method reproduces Tanner’s Law of droplet spreading via a simple and robust computational method, at a low computational cost. We discuss potential avenues for extending the method beyond the simple case of perfectly wetting fluids.
19

Heinemann, Christian, and Christiane Kraus. "Existence results for diffuse interface models describing phase separation and damage." European Journal of Applied Mathematics 24, no. 2 (November 9, 2012): 179–211. http://dx.doi.org/10.1017/s095679251200037x.

Abstract:
In this paper, we analytically investigate multi-component Cahn–Hilliard and Allen–Cahn systems which are coupled with elasticity and uni-directional damage processes. The free energy of the system is of the form ∫_Ω ( ½ Γ∇c : ∇c + ½ |∇z|² + W_ch(c) + W_el(e,c,z) ) dx, with a polynomial or logarithmic chemical energy density W_ch, an inhomogeneous elastic energy density W_el and a quadratic structure of the gradient of the damage variable z. For the corresponding elastic Cahn–Hilliard and Allen–Cahn systems coupled with uni-directional damage processes, we present an appropriate notion of weak solutions and prove existence results based on certain regularization methods and a higher integrability result for the strain e.
20

Agosti, A., P. Colli, H. Garcke, and E. Rocca. "A Cahn–Hilliard phase field model coupled to an Allen–Cahn model of viscoelasticity at large strains." Nonlinearity 36, no. 12 (October 30, 2023): 6589–638. http://dx.doi.org/10.1088/1361-6544/ad0211.

Abstract:
We propose a new Cahn–Hilliard phase field model coupled to incompressible viscoelasticity at large strains, obtained from a diffuse interface mixture model and formulated in the Eulerian configuration. A new kind of diffusive regularization, of Allen–Cahn type, is introduced in the transport equation for the deformation gradient, together with a regularizing interface term depending on the gradient of the deformation gradient in the free energy density of the system. The designed regularization preserves the dissipative structure of the equations. We obtain the global existence of a weak solution in three space dimensions and for generic nonlinear elastic energy densities with polynomial growth, comprising the relevant cases of polyconvex Mooney–Rivlin and Ogden elastic energies. Also, our analysis considers elastic free energy densities which depend on the phase field variable and which can possibly degenerate for some values of the phase field variable. We also propose two kinds of unconditionally energy stable finite element approximations of the model, based on convex splitting ideas and on the use of a scalar auxiliary variable respectively, proving the existence and stability of discrete solutions. We finally report numerical results for different test cases with shape memory alloy type free energy, showing the interplay between phase separation and finite elasticity in determining the topology of stationary states with pure phases characterized by different elastic properties.
21

Liu, Chunsheng, and Chunping Ren. "A Novel Improved Maximum Entropy Regularization Technique and Application to Identification of Dynamic Loads on the Coal Rock." Journal of Electrical and Computer Engineering 2019 (January 20, 2019): 1–12. http://dx.doi.org/10.1155/2019/9602954.

Abstract:
A new signal processing algorithm is proposed to identify the dynamic load acting on a coal-rock structure. First, the identification model for the dynamic load is established through the relationship between the uncertain load vector and the assembly matrix of the responses measured on the machinery dynamic system. Then, the entropy term of maximum entropy regularization (MER) is redesigned using a robust estimation method together with an elongated penalty function adapted to the ill-posedness characteristics of load identification; the result is a novel improved maximum entropy regularization (IMER) technique for processing the dynamic load signals. Finally, the load identification problem is transformed into an unconstrained optimization problem, and an improved Newton iteration algorithm is proposed to minimize the objective function. Comparing the IMER and MER techniques shows that IMER is well suited to analyzing the dynamic load signals, with a higher signal-to-noise ratio, shorter restoration time, and fewer iteration steps. Experiments were performed to investigate how different regularization parameters and calculation parameters p_i affect the identification of the dynamic load signals. The results show that the dynamic load signals identified using the IMER technique combined with the proposed PSO-L regularization parameter selection method are close to the actual load signals, and that selecting optimal calculation parameters p_i helps overcome the ill-conditioning of dynamic load identification and yields stable, approximate solutions of inverse problems in practical engineering. The proposed IMER technique can also play a guiding role in coal-rock interface identification.
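Maximum entropy regularization, on which the IMER technique builds, typically replaces the quadratic Tikhonov penalty by a cross-entropy term relative to a positive prior estimate x⁰; a generic form of the resulting problem (not the paper's exact functional) is

$$
\min_{\mathbf{x}>0}\;\big\lVert \mathbf{A}\mathbf{x}-\mathbf{b}\big\rVert_{2}^{2}\;+\;\lambda\sum_{i} x_{i}\,\ln\frac{x_{i}}{x_{i}^{0}} ,
$$

which is the kind of objective an improved Newton iteration would then minimize.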
22

Zhao, Zhi Long, Rong Zhang, Lin Liu, and Ampere A. Tseng. "Unidirectionally Solidified Al-Cu Eutectic Alloy under High Intensity Pulsed Magnetic Field." Materials Science Forum 475-479 (January 2005): 2619–22. http://dx.doi.org/10.4028/www.scientific.net/msf.475-479.2619.

Abstract:
A pulsed electric current produced by discharging capacitors is passed through double solenoids with ferrosilicon cores, generating an intense, oscillating and decaying pulsed magnetic field between the two cores. The effect of this high-intensity pulsed magnetic field on the microstructure of unidirectionally solidified Al-Cu eutectic, grown at a withdrawal velocity of 10 µm/s, was investigated. Under the high-intensity pulsed magnetic field, the solidified Al-Cu eutectic morphology evolves through three stages as the charge energy increases from 0 to 15.5 J: from a regular columnar structure to broken-off fine grains, then to coarsening dendrites, and finally to a newly regularized columnar structure. A copper-rich phase is found to appear clearly between the eutectic cells, and the eutectic spacing decreases in the newly regularized specimens. The electric field induced in the metallic melt by the pulsed magnetic field drives oscillating solute electromigration in front of the solidification interface, promoting solute diffusion and reducing the constitutional supercooling region.
23

Cobelli, Pablo J., Philippe Petitjeans, Agnès Maurel, and Vincent Pagneux. "Determination of the bottom deformation from space- and time-resolved water wave measurements." Journal of Fluid Mechanics 835 (November 27, 2017): 301–26. http://dx.doi.org/10.1017/jfm.2017.741.

Abstract:
In this paper we study both theoretically and experimentally the inverse problem of indirectly measuring the shape of a localized bottom deformation with a non-instantaneous time evolution, from either an instantaneous global state (space-based inversion) or a local time-history record (time-based inversion) of the free-surface evolution. Firstly, the mathematical inversion problem is explicitly defined and uniqueness of its solution is established. We then show that this problem is ill-posed in the sense of Hadamard, rendering its solution unstable. In order to overcome this difficulty, we introduce a regularization scheme as well as a strategy for choosing the optimal value of the associated regularization parameter. We then conduct a series of laboratory experiments in which an axisymmetric three-dimensional bottom deformation of controlled shape and time evolution is imposed on a layer of water of constant depth, initially at rest. The detailed evolution of the air–liquid interface is measured by means of a free-surface profilometry technique providing space- and time-resolved data. Based on these experimental data and employing our regularization scheme, we are able to show that it is indeed possible to reconstruct the seabed profile responsible for the linear free-surface dynamics either by space- or time-based inversions. Furthermore, we discuss the different relative advantages of each type of reconstruction, their associated errors and the limitations of the inverse determination.
24

Li, Zhilin, and Ming-Chih Lai. "New Finite Difference Methods Based on IIM for Inextensible Interfaces in Incompressible Flows." East Asian Journal on Applied Mathematics 1, no. 2 (May 2011): 155–71. http://dx.doi.org/10.4208/eajam.030510.250910a.

Abstract:
In this paper, new finite difference methods based on the augmented immersed interface method (IIM) are proposed for simulating an inextensible moving interface in an incompressible two-dimensional flow. The mathematical models arise from studying the deformation of red blood cells in mathematical biology. The governing equations are incompressible Stokes or Navier-Stokes equations with an unknown surface tension, which should be determined in such a way that the surface divergence of the velocity is zero along the interface. Thus, the area enclosed by the interface and the total length of the interface should be conserved during the evolution process. Because of the nonlinear and coupling nature of the problem, direct discretization by applying the immersed boundary or immersed interface method yields complex nonlinear systems to be solved. In our new methods, we treat the unknown surface tension as an augmented variable so that the augmented IIM can be applied. Since finding the unknown surface tension is essentially an inverse problem that is sensitive to perturbations, our regularization strategy is to introduce a controlled tangential force along the interface, which leads to a least squares problem. For Stokes equations, the forward solver at one time level involves solving three Poisson equations with an interface. For Navier-Stokes equations, we propose a modified projection method that can enforce the pressure jump condition corresponding directly to the unknown surface tension. Several numerical experiments show good agreement with other results in the literature and reveal some interesting phenomena.
25

Nikol’skii, D. N. "Regularization of a discrete scheme governing the three-dimensional evolution of a fluid-fluid interface." Computational Mathematics and Mathematical Physics 50, no. 3 (March 2010): 531–36. http://dx.doi.org/10.1134/s0965542510030140.

26

Geng, Weihua, and Shan Zhao. "A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation." Journal of Computational Physics 351 (December 2017): 25–39. http://dx.doi.org/10.1016/j.jcp.2017.09.026.

27

Xie, Xuming, and Sahar Almashaan. "Exact solutions to interfacial flows with kinetic undercooling in a Hele-Shaw cell of time-dependent gap." Malaya Journal of Matematik 11, S (October 1, 2023): 27–42. http://dx.doi.org/10.26637/mjm11s/002.

Abstract:
Hele-Shaw cells where the top plate is moving uniformly at a prescribed speed and the bottom plate is fixed have been used to study interface related problems. This paper focuses on interfacial flows with linear and nonlinear kinetic undercooling regularization in a radial Hele-Shaw cell with a time dependent gap. We obtain some exact solutions of the moving boundary problems when the initial shape is a circle, an ellipse or an annular domain. For the nonlinear case, a linear stability analysis is also presented for the circular solutions. The methodology is to use complex analysis and PDE theory.
28

Wu, Qiang, Yu Zhang, Ju Liu, Jiande Sun, Andrzej Cichocki, and Feng Gao. "Regularized Group Sparse Discriminant Analysis for P300-Based Brain–Computer Interface." International Journal of Neural Systems 29, no. 06 (July 29, 2019): 1950002. http://dx.doi.org/10.1142/s0129065719500023.

Abstract:
Event-related potentials (ERPs) especially P300 are popular effective features for brain–computer interface (BCI) systems based on electroencephalography (EEG). Traditional ERP-based BCI systems may perform poorly for small training samples, i.e. the undersampling problem. In this study, the ERP classification problem was investigated, in particular, the ERP classification in the high-dimensional setting with the number of features larger than the number of samples was studied. A flexible group sparse discriminative analysis algorithm based on Moreau–Yosida regularization was proposed for alleviating the undersampling problem. An optimization problem with the group sparse criterion was presented, and the optimal solution was proposed by using the regularized optimal scoring method. During the alternating iteration procedure, the feature selection and classification were performed simultaneously. Two P300-based BCI datasets were used to evaluate our proposed new method and compare it with existing standard methods. The experimental results indicated that the features extracted via our proposed method are efficient and provide an overall better P300 classification accuracy compared with several state-of-the-art methods.
29

Hamitouche, L., M. Tarfaoui, and A. Vautrin. "An interface debonding law subject to viscous regularization for avoiding instability: Application to the delamination problems." Engineering Fracture Mechanics 75, no. 10 (July 2008): 3084–100. http://dx.doi.org/10.1016/j.engfracmech.2007.12.014.

30

Petters, Markus D. "Revisiting matrix-based inversion of scanning mobility particle sizer (SMPS) and humidified tandem differential mobility analyzer (HTDMA) data." Atmospheric Measurement Techniques 14, no. 12 (December 21, 2021): 7909–28. http://dx.doi.org/10.5194/amt-14-7909-2021.

Abstract:
Tikhonov regularization is a tool for reducing noise amplification during data inversion. This work introduces RegularizationTools.jl, a general-purpose software package for applying Tikhonov regularization to data. The package implements well-established numerical algorithms and is suitable for systems of up to ∼ 1000 equations. Included is an abstraction to systematically categorize specific inversion configurations and their associated hyperparameters. A generic interface translates arbitrary linear forward models defined by a computer function into the corresponding design matrix. This obviates the need to explicitly write out and discretize the Fredholm integral equation, thus facilitating fast prototyping of new regularization schemes associated with measurement techniques. Example applications include the inversion involving data from scanning mobility particle sizers (SMPSs) and humidified tandem differential mobility analyzers (HTDMAs). Inversion of SMPS size distributions reported in this work builds upon the freely available software DifferentialMobilityAnalyzers.jl. The speed of inversion is improved by a factor of ∼ 200, now requiring between 2 and 5 ms per SMPS scan when using 120 size bins. Previously reported occasional failure to converge to a valid solution is reduced by switching from the L-curve method to generalized cross-validation as the metric to search for the optimal regularization parameter. Higher-order inversions resulting in smooth, denoised reconstructions of size distributions are now included in DifferentialMobilityAnalyzers.jl. This work also demonstrates that an SMPS-style matrix-based inversion can be applied to find the growth factor frequency distribution from raw HTDMA data while also accounting for multiply charged particles. The outcome of the aerosol-related inversion methods is showcased by inverting multi-week SMPS and HTDMA datasets from ground-based observations, including SMPS data obtained at Bodega Marine Laboratory during the CalWater 2/ACAPEX campaign and co-located SMPS and HTDMA data collected at the US Department of Energy observatory located at the Southern Great Plains site in Oklahoma, USA. Results show that the proposed approaches are suitable for unsupervised, nonparametric inversion of large-scale datasets as well as inversion in real time during data acquisition on low-cost reduced-instruction-set architectures used in single-board computers. The included software implementation of Tikhonov regularization is freely available, general, and domain-independent and thus can be applied to many other inverse problems arising in atmospheric measurement techniques and beyond.
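The generalized cross-validation criterion mentioned above selects the regularization parameter λ by minimizing, in the standard Tikhonov setting (generic symbols A, b, L, not the package's notation),

$$
\mathrm{GCV}(\lambda)\;=\;\frac{\big\lVert \mathbf{A}\mathbf{x}_{\lambda}-\mathbf{b}\big\rVert_{2}^{2}}{\Big[\operatorname{trace}\big(\mathbf{I}-\mathbf{A}\,(\mathbf{A}^{\top}\mathbf{A}+\lambda^{2}\mathbf{L}^{\top}\mathbf{L})^{-1}\mathbf{A}^{\top}\big)\Big]^{2}} ,
$$

where x_λ is the Tikhonov solution for parameter λ and L the regularization operator (the identity or a derivative operator).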
31

Tago, J., V. M. Cruz-Atienza, C. Villafuerte, T. Nishimura, V. Kostoglodov, J. Real, and Y. Ito. "Adjoint slip inversion under a constrained optimization framework: revisiting the 2006 Guerrero slow slip event." Geophysical Journal International 226, no. 2 (April 22, 2021): 1187–205. http://dx.doi.org/10.1093/gji/ggab165.

Abstract:
To shed light on the prevalently slow, aseismic slip interaction between tectonic plates, we developed a new static slip inversion strategy, the ELADIN (ELastostatic ADjoint INversion) method, that uses the adjoint elastostatic equations to compute the gradient of the cost function. ELADIN is a 2-step inversion algorithm to efficiently handle plausible slip constraints. First it finds the slip that best explains the data without any constraint, and then refines the solution by imposing the constraints through a Gradient Projection Method. To obtain a self-similar, physically consistent slip distribution that accounts for sparsity and uncertainty in the data, ELADIN reduces the model space by using a von Karman regularization function that controls the wavenumber content of the solution, and weights the observations according to their covariance using the data precision matrix. Since crustal deformation is the result of different concomitant interactions at the plate interface, ELADIN simultaneously determines the regions of the interface subject to both stressing (i.e. coupling) and relaxing slip regimes. For estimating the resolution, we introduce a mobile checkerboard analysis that allows us to determine lower-bound fault resolution zones for an expected slip-patch size and a given station array. We systematically test ELADIN with synthetic inversions along the whole Mexican subduction zone and use it to invert the 2006 Guerrero Slow Slip Event (SSE), which is one of the most studied SSEs in Mexico. Since only 12 GPS stations recorded the event, careful regularization is required to achieve reliable solutions. We compared our preferred slip solution with two previously published models and found that our solution retains their most reliable features. In addition, although all three SSE models predict an upward slip penetration invading the seismogenic zone of the Guerrero seismic gap, our resolution analysis indicates that this penetration might not be a reliable feature of the 2006 SSE.
32

Shan, Xiang, Daeyoung Kim, Etsuko Kobayashi, and Bing Li. "Regularized level set models using fuzzy clustering for medical image segmentation." Filomat 32, no. 5 (2018): 1507–12. http://dx.doi.org/10.2298/fil1805507s.

Abstract:
Level set methods are general numerical analysis tools specialized for describing and controlling implicit interfaces dynamically. They receive widespread attention in medical image computing and analysis, and many level set models have been designed and regularized for medical image segmentation. For the sake of simplicity and clarity, this paper concentrates on our recent work on regularizing level set methods with fuzzy clustering. It covers the two most famous level set models, namely the Hamilton-Jacobi functional and the Mumford-Shah functional, for variational segmentation and region competition, respectively. The strategies of fuzzy regularization are elaborated in detail, and their applications in medical image segmentation are demonstrated with examples.
33

Zhu, Hao, and Qing Guo Wei. "Regularized Common Spatial Pattern with Incomplete Generic Learning for EEG Classification in Small Sample Setting." Applied Mechanics and Materials 303-306 (February 2013): 1344–49. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1344.

Abstract:
Common spatial pattern (CSP) is based on sample covariance matrix estimation, which severely overfits with small training sets. To address this drawback, the regularized CSP (R-CSP) was proposed, which adds regularization information to the CSP learning process. In that algorithm, all samples of each generic subject are used for training the sample covariance matrices. When only part of the samples of each generic subject is available as the generic training set, this R-CSP algorithm does not work. To solve this problem, an improved method is proposed in this paper. The new algorithm was applied to a brain-computer interface (BCI) data set containing five subjects, and a mean improvement of 2.5% in classification rate was achieved.
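One common way R-CSP-style methods regularize a poorly estimated covariance is to shrink it toward a scaled identity; the sketch below shows that generic shrinkage step (the function name, the shrinkage target, and the parameter lam are illustrative and not necessarily this paper's exact scheme).

```python
import numpy as np

def shrink_covariance(X, lam):
    """Shrink a sample covariance toward a scaled identity.

    X   : array of shape (channels, samples) for one class
    lam : shrinkage weight in [0, 1]; lam = 0 returns the raw sample covariance
    """
    S = np.cov(X)                                  # channels x channels
    d = S.shape[0]
    return (1.0 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)
```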
34

Weng, Zhifeng, Langyang Huang, and Rong Wu. "Numerical Approximation of the Space Fractional Cahn-Hilliard Equation." Mathematical Problems in Engineering 2019 (April 1, 2019): 1–10. http://dx.doi.org/10.1155/2019/3163702.

Abstract:
In this paper, a second-order accurate (in time) energy stable Fourier spectral scheme for the fractional-in-space Cahn-Hilliard (CH) equation is considered. The time is discretized by the implicit backward differentiation formula (BDF), along with a linear stabilization term which represents a second-order Douglas-Dupont-type regularization. The semidiscrete schemes are shown to be energy stable and mass conservative. We then use Fourier spectral methods to discretize the space. Some numerical examples are included to verify the effectiveness of our proposed method. In addition, they show that the fractional order controls the thickness and the lifetime of the interface, which is typically diffusive in the integer-order case.
35

Kumar, Ramesh, H. S. Mewara, Shashank Tripathi, and Aditya Kumar Singh Pundir. "An Image Reconstruction Algorithm with Experimental Validation for Electrical Impedance Tomography Imaging Applications." Sensor Letters 18, no. 5 (May 1, 2020): 410–18. http://dx.doi.org/10.1166/sl.2020.4230.

Abstract:
In the Non-Invasive Bio Impedance Technique (NIBIT), a low-amplitude, high-frequency current pulse is injected between two electrodes on the object while voltages are measured at the remaining electrodes with respect to a reference electrode. The electrodes are arranged cylindrically on the surface of the phantom; with this arrangement, the current pulse is applied and the voltages are measured according to the selected bioimpedance data acquisition method. The presented algorithm analyzes each data sample obtained from the phantoms and performs image reconstruction (IR) through a graphical user interface (GUI) developed in MATLAB. The IR approach is based on Tikhonov regularization and the finite element method (FEM); the FEM and the Tikhonov regularization deal with the forward problem (FP) and the inverse problem (IP) of the images, respectively. In our approach, the FP is solved first in order to reconstruct the conductivity distribution through the EIT inverse solution: the FP is solved using the known current pulse for a given conductivity medium, and the IP is then solved from the boundary potentials of the object. The obtained results are consistent with the internal structure of the phantom used. The proposed technique remains reliable despite some standardization issues in the procedure.
36

Pan, Xinpeng, and Guangzhi Zhang. "Fracture detection and fluid identification based on anisotropic Gassmann equation and linear-slip model." GEOPHYSICS 84, no. 1 (January 1, 2019): R85—R98. http://dx.doi.org/10.1190/geo2018-0255.1.

Abstract:
Detection of fracture and fluid properties from subsurface azimuthal seismic data improves our abilities to characterize the saturated porous reservoirs with aligned fractures. Motivated by the fracture detection and fluid identification in a fractured porous medium, we have developed a feasible approach to perform a rock physics model-based amplitude variation with offset and azimuth (AVOAz) inversion for the fracture and fluid parameters in a horizontal transversely isotropic (HTI) medium using the PP-wave angle gathers along different azimuths. Based on the linear-slip model, we first use anisotropic Gassmann’s equation to derive the expressions of saturated stiffness components and their perturbations of first-order approximation in terms of elastic properties of an isotropic porous background and fracture compliances induced by a single set of rotationally invariant fractures. We then derive a linearized PP-wave reflection coefficient in terms of fluid modulus, dry-rock matrix term, porosity, density, and fracture compliances or quasi-compliances for an interface separating two weakly HTI media based on the Born scattering theory. Finally, we solve the AVOAz inverse problems iteratively constrained by the Cauchy-sparse regularization and the low-frequency regularization in a Bayesian framework. The results demonstrate that the fluid modulus and fracture quasi-compliances are reasonably estimated in the case of synthetic and real seismic data containing moderate noise in a gas-filled fractured porous reservoir.
37

Toure, S., O. Diop, K. Kpalma, and A. S. Maiga. "Coastline Detection Using Fusion of Over Segmentation and Distance Regularization Level Set Evolution." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W4 (March 6, 2018): 513–18. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w4-513-2018.

Abstract:
Coastline detection is a very challenging task in optical remote sensing. However, the majority of commonly used methods have been developed for low to medium resolution without specifying the key indicator that is used. In this paper, we propose a new approach for very high resolution images using a specific indicator. First, a pre-processing step is carried out to convert the images into the optimal colour space (HSV). Then, wavelet decomposition is used to extract different colour and texture features. These colour and texture features are then used for Fusion of Over Segmentation (FOOS) based clustering to obtain the distinctive natural classes of the littoral, among them waves, dry sand, wet sand, sea and land. We choose the mean level of high tide water, the interface between dry sand and wet sand, as the coastline indicator. To find this limit, we use a Distance Regularization Level Set Evolution (DRLSE), which automatically evolves towards the desired sea-land border. The result obtained is then compared with a ground truth. Experimental results show that the proposed method is an efficient coastline detection process in terms of both quantitative and visual performance.
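For reference, distance-regularized level set evolution adds to the usual segmentation energy a term that keeps the level set function close to a signed distance function; a common schematic form, with μ a weight and p a potential whose minimum lies at |∇φ| = 1 (for example p(s) = ½(s − 1)²), is

$$
\mathcal{R}_{p}(\phi)\;=\;\mu\int_{\Omega} p\big(\lvert\nabla\phi\rvert\big)\,\mathrm{d}\mathbf{x}.
$$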
38

Fried, Eliot, and German Grach. "An Order-Parameter-Based Theory as a Regularization of a Sharp-Interface Theory for Solid-Solid Phase Transitions." Archive for Rational Mechanics and Analysis 138, no. 4 (August 18, 1997): 355–404. http://dx.doi.org/10.1007/s002050050045.

39

Li, Xiaoyu, Shaobo Li, Peng Zhou, and Guanglin Chen. "Forecasting Network Interface Flow Using a Broad Learning System Based on the Sparrow Search Algorithm." Entropy 24, no. 4 (March 29, 2022): 478. http://dx.doi.org/10.3390/e24040478.

Abstract:
In this paper, we propose a broad learning system based on the sparrow search algorithm. First, in order to avoid a complicated manual parameter tuning process and obtain the best combination of hyperparameters, the sparrow search algorithm is used to optimize the shrinkage coefficient (r) and the regularization coefficient (λ) of the broad learning system, improving the prediction accuracy of the model. Second, the broad learning system is used to build a network interface flow forecasting model: the flow values in the time period [T−11, T] are used as features for the traffic at time T+1, and the hyperparameters output in the previous step are fed into the network to train the broad learning system traffic prediction model. Finally, to verify the model performance, the prediction model is trained on two public network flow datasets and on real traffic data from an enterprise cloud platform switch interface, and the proposed model is compared with the broad learning system, long short-term memory, and other methods. The experiments show that the prediction accuracy of this method is higher than that of the other methods, with the moving average reaching 97%, 98%, and 99% on the respective datasets.
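The windowing described above, in which the flows in [T−11, T] predict the flow at T+1, can be set up as in the sketch below; the function name and the toy data are illustrative.

```python
import numpy as np

def make_windows(flow, width=12):
    """Build (features, target) pairs: the 'width' flows ending at time t predict flow[t+1]."""
    X = np.lib.stride_tricks.sliding_window_view(flow[:-1], width)
    y = flow[width:]
    return X, y

# Toy series of interface flow samples.
flow = np.random.rand(1000)
X, y = make_windows(flow)   # X.shape == (988, 12), y.shape == (988,)
```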
40

Vidyasagar, A., S. Krödel, and D. M. Kochmann. "Microstructural patterns with tunable mechanical anisotropy obtained by simulating anisotropic spinodal decomposition." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2218 (October 2018): 20180535. http://dx.doi.org/10.1098/rspa.2018.0535.

Abstract:
The generation of mechanical metamaterials with tailored effective properties through carefully engineered microstructures requires avenues to predict optimal microstructural architectures. Phase separation in heterogeneous systems naturally produces complex microstructural patterns whose effective response depends on the underlying process of spinodal decomposition. During this process, anisotropy may arise due to advection, diffusive chemical gradients or crystallographic interface energy, leading to anisotropic patterns with strongly directional effective properties. We explore the link between anisotropic surface energies during spinodal decomposition, the resulting microstructures and, ultimately, the anisotropic elastic moduli of the resulting medium. We simulate the formation of anisotropic patterns within representative volume elements, using recently developed stabilized spectral techniques that circumvent further regularization, and present a powerful alternative to current numerical techniques. The interface morphology of representative phase-separated microstructures is shown to strongly depend on surface anisotropy. The effective elastic moduli of the thus-obtained porous media are identified by periodic homogenization, and directionality is demonstrated through elastic surfaces. Our approach not only improves upon numerical tools to simulate phase separation; it also offers an avenue to generate tailored microstructures with tunable resulting elastic anisotropy.
41

Zhdanov, Andrey, Talma Hendler, Leslie Ungerleider, and Nathan Intrator. "Inferring Functional Brain States Using Temporal Evolution of Regularized Classifiers." Computational Intelligence and Neuroscience 2007 (2007): 1–8. http://dx.doi.org/10.1155/2007/52609.

Abstract:
We present a framework for inferring functional brain state from electrophysiological (MEG or EEG) brain signals. Our approach is adapted to the needs of functional brain imaging rather than EEG-based brain-computer interface (BCI). This choice leads to a different set of requirements, in particular to the demand for more robust inference methods and more sophisticated model validation techniques. We approach the problem from a machine learning perspective, by constructing a classifier from a set of labeled signal examples. We propose a framework that focuses on temporal evolution of regularized classifiers, with cross-validation for optimal regularization parameter at each time frame. We demonstrate the inference obtained by this method on MEG data recorded from 10 subjects in a simple visual classification experiment, and provide comparison to the classical nonregularized approach.
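The per-time-frame choice of the regularization parameter by cross-validation can be sketched generically as below; logistic regression and scikit-learn stand in for whatever regularized classifier the authors used, and all names and values are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def best_C_for_frame(X_t, y, Cs=(0.01, 0.1, 1.0, 10.0)):
    """Return the inverse regularization strength C with the best 5-fold CV accuracy
    for the sensor features X_t (trials x channels) of one time frame."""
    scores = [cross_val_score(LogisticRegression(C=C, max_iter=1000), X_t, y, cv=5).mean()
              for C in Cs]
    return Cs[int(np.argmax(scores))]
```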
42

Giraud, Jérémie, Mark Lindsay, and Mark Jessell. "Generalization of level-set inversion to an arbitrary number of geologic units in a regularized least-squares framework." GEOPHYSICS 86, no. 4 (July 1, 2021): R623—R637. http://dx.doi.org/10.1190/geo2020-0263.1.

Abstract:
We have developed an inversion method for recovery of the geometry of an arbitrary number of geologic units using a regularized least-squares framework. The method addresses cases in which each geologic unit can be modeled using a constant physical property. Each geologic unit or group assigned the same physical property value is modeled using the signed distance to its interface with other units. We invert for this quantity and recover the location of interfaces between units using the level-set method. We formulate and solve the inverse problem in a least-squares sense by inverting for such signed distances. The sensitivity matrix to perturbations of the interfaces is obtained using the chain rule, and model mapping from the signed distance is used to recover the physical properties. Exploiting the flexibility of the framework that we develop allows any number of rock units to be considered. In addition, it allows the design and use of regularization incorporating prior information to encourage specific features in the inverted model. We apply this general inversion approach to gravity data favoring minimum adjustments of the interfaces between rock units to fit the data. The method is first tested using noisy synthetic data generated for a model composed of six distinct units, and several scenarios are investigated. It is then applied to field data from the Yerrida Basin (Australia) where we investigate the geometry of a prospective greenstone belt. The synthetic example demonstrates the proof of concept of the proposed methodology, whereas the field application provides insights into, and potential reinterpretation of, the tectonic setting of the area.
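The mapping from signed distances to physical properties that underlies this kind of level-set inversion can be written, in the simplest case of two units, as (schematic notation, not the authors')

$$
m(\mathbf{x})\;=\;m_{1}\,H\big(\psi(\mathbf{x})\big)\;+\;m_{2}\,\big(1-H(\psi(\mathbf{x}))\big),
$$

where ψ is the signed distance to the interface, H a (smoothed) Heaviside function, and m₁, m₂ the constant properties of the two units; with more units, one signed-distance function per unit or per group of units selects its region.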
APA, Harvard, Vancouver, ISO, and other styles
43

Masood, Khalid. "Recovery and regularization of initial temperature distribution in a two-layer cylinder with perfect thermal contact at the interface." Proceedings of the Japan Academy, Series B 82, no. 7 (2006): 224–31. http://dx.doi.org/10.2183/pjab.82.224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Weijian, and Nicholas J. Higham. "Matrix Depot: an extensible test matrix collection for Julia." PeerJ Computer Science 2 (April 6, 2016): e58. http://dx.doi.org/10.7717/peerj-cs.58.

Full text
Abstract:
Matrix Depot is a Julia software package that provides easy access to a large and diverse collection of test matrices. Its novelty is threefold. First, it is extensible by the user, and so can be adapted to include the user’s own test problems. In doing so, it facilitates experimentation and makes it easier to carry out reproducible research. Second, it amalgamates in a single framework two different types of existing matrix collections, comprising parametrized test matrices (including Hansen’s set of regularization test problems and Higham’s Test Matrix Toolbox) and real-life sparse matrix data (giving access to the University of Florida sparse matrix collection). Third, it fully exploits the Julia language. It uses multiple dispatch to help provide a simple interface and, in particular, to allow matrices to be generated in any of the numeric data types supported by the language.
APA, Harvard, Vancouver, ISO, and other styles
45

DONG, YANCHAO, ZHENCHENG HU, and MING XIE. "A MODEL-BASED 3D FACE POSE AND ANIMATION TRACKER WITH AUTO-REGISTRATION USING A BINOCULAR EKF KERNEL." International Journal of Humanoid Robotics 09, no. 02 (June 2012): 1250014. http://dx.doi.org/10.1142/s0219843612500144.

Full text
Abstract:
The recovery of 3D information of a face from two (or more) images has been extensively studied and many interesting applications have been implemented. This paper proposes a model-based 3D face pose and animation tracker with auto-registration using a binocular EKF kernel for real-time applications such as human-computer interfaces. Compared with previous trackers, this tracker is able to automatically register the face model, which eliminates the traditional manual registration process. Another major contribution is the use of augmented measurements as a regularization strategy for suppressing variable coupling and the over-fitting problem. This work also gives a compact factorized mathematical representation of the tracking kernel, which is easier to implement. Both synthetic and real experimental results show significant advantages of the proposed tracker in processing speed and reliability compared with the conventional monocular tracker and the Levenberg–Marquardt tracker.
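The augmented-measurement regularization can be pictured, in information form, as stacking a zero-innovation pseudo-observation of the prior state onto the real measurement. The sketch below is a generic regularized Gauss–Newton/EKF-style update, not the paper's binocular formulation; all matrices and the weight `lam` are placeholders.

```python
import numpy as np

def regularized_ekf_update(x_prior, P_prior, z, h, H, R, lam=1e-2):
    """Information-form measurement update with an extra lam * I block, equivalent to
    observing x = x_prior with covariance (1/lam) * I; it damps weakly observed,
    strongly coupled directions of the state."""
    n = x_prior.size
    Rinv = np.linalg.inv(R)
    A = np.linalg.inv(P_prior) + H.T @ Rinv @ H + lam * np.eye(n)
    b = H.T @ Rinv @ (z - h(x_prior))
    dx = np.linalg.solve(A, b)
    return x_prior + dx, np.linalg.inv(A)

# Tiny illustrative call with a linear observation of the first two state components.
x0, P0 = np.zeros(4), np.eye(4)
H = np.eye(2, 4)
x1, P1 = regularized_ekf_update(x0, P0, z=np.array([0.3, -0.1]),
                                h=lambda x: H @ x, H=H, R=0.01 * np.eye(2), lam=1e-2)
```

Setting `lam = 0` recovers the standard information-form EKF measurement update.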
APA, Harvard, Vancouver, ISO, and other styles
46

Pan, Guangdong, Lin Liang, and Tarek M. Habashy. "A numerical study of 3D frequency-domain elastic full-waveform inversion." GEOPHYSICS 84, no. 1 (January 1, 2019): R99—R108. http://dx.doi.org/10.1190/geo2017-0727.1.

Full text
Abstract:
We have developed a 3D elastic full-waveform inversion (FWI) algorithm with forward modeling and inversion performed in the frequency domain. The Helmholtz equation is solved with a second-order finite-difference method using an iterative solver equipped with an efficient complex-shifted incomplete LU-based preconditioner. The inversion is based on the minimization of the data misfit functional and a total variation regularization for the unknown model parameters. We implement the Gauss-Newton method as the optimization engine for the inversions. The codes are parallelized with a message passing interface based on the number of shots and receivers. We examine the performance of this elastic FWI algorithm and workflow on synthetic examples including surface seismic and vertical seismic profile configurations. With various initial models, we manage to obtain high-quality velocity images for 3D earth models.
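The complex-shifted ILU preconditioning strategy can be illustrated on a small 2D acoustic Helmholtz problem (the paper treats the 3D elastic case): GMRES is applied to the undamped operator while the incomplete LU factors are computed for a damped, complex-shifted copy. Grid size, frequency, velocity, and shift below are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, omega, c = 80, 10.0, 2.0 * np.pi * 5.0, 2000.0   # grid, spacing (m), 5 Hz, velocity
k2 = (omega / c) ** 2

T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
L = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))       # 5-point Laplacian stencil
A = (L / h**2 + k2 * sp.eye(n * n)).astype(complex).tocsc()        # Helmholtz operator
A_shift = (L / h**2 + k2 * (1 + 0.5j) * sp.eye(n * n)).tocsc()     # complex shift damps it

ilu = spla.spilu(A_shift, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)  # M ~ A_shift^{-1}

b = np.zeros(n * n, dtype=complex)
b[(n // 2) * n + n // 2] = 1.0                          # point source at the center
u, info = spla.gmres(A, b, M=M, restart=50, maxiter=500)
```

The same pattern extends to the elastic case by replacing the scalar Helmholtz operator with the discretized elastic wave operator; the preconditioner is still built from a damped copy of the system matrix.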
APA, Harvard, Vancouver, ISO, and other styles
47

Noh, S. J., Y. Tachikawa, M. Shiiba, and S. Kim. "Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization." Hydrology and Earth System Sciences Discussions 8, no. 2 (April 4, 2011): 3383–420. http://dx.doi.org/10.5194/hessd-8-3383-2011.

Full text
Abstract:
Data assimilation techniques have been widely applied to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to account for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate the model response until the uncertainty of each hydrologic process has propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is used for sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the open message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improved model efficiency and preserved particle diversity are found for the lagged regularized particle filter.
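The regularization step of a regularized particle filter, jittering resampled particles with a kernel of data-driven bandwidth so that duplicates do not collapse diversity, can be sketched in a few lines. The state-transition and observation models below are toy placeholders rather than the WEP model, and the lagged aggregation and MCMC move step are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(x):                      # placeholder for the hydrologic state update
    return 0.9 * x + rng.normal(0.0, 0.1, size=x.shape)

def likelihood(y_obs, x, sigma=0.2):   # Gaussian observation error on the state
    return np.exp(-0.5 * ((y_obs - x) / sigma) ** 2)

n_particles = 500
x = rng.normal(1.0, 0.5, n_particles)  # initial ensemble
y_obs = 0.8                            # one synthetic observation

x = propagate(x)
w = likelihood(y_obs, x)
w /= w.sum()
idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
x = x[idx]
h = 1.06 * x.std() * n_particles ** (-1 / 5)           # Gaussian-kernel bandwidth
x += rng.normal(0.0, h, n_particles)                   # regularization (jitter) step
```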
APA, Harvard, Vancouver, ISO, and other styles
48

Obukhov, Sergey A., Valery P. Stepanov, and Igor V. Rudakov. "MATHEMATICAL MODEL OF BRAIN-COMPUTER INTERFACE BASED ON THE ANALYSIS OF P300 EVENT RELATED POTENTIALS." RSUH/RGGU Bulletin. Series Information Science. Information Security. Mathematics, no. 2 (2021): 48–67. http://dx.doi.org/10.28995/2686-679x-2021-2-48-67.

Full text
Abstract:
The evoked potentials (EP) method consists in recording the bioelectric reactions of the brain in response to external stimulation or during cognitive tasks. The goal of this work is to develop a mathematical model of a system for the detection and classification of evoked potentials in the electroencephalogram (EEG). The main obstacles to automatic EP detection are artifacts in the EEG recordings and the high variability of the potentials. The EP detection and classification algorithm includes three stages. At the preliminary stage, frequency-time and spatial signal transformations (a set of Butterworth frequency filters, linear combination, and averaging of the signals recorded by different sensors) are used to remove noise and uninformative EEG components. The next stage is the detection and averaging of the evoked potentials. At the final stage, a vector of informative features is formed to reduce the dimension of the problem, and this parameterized representation is used as the input to a binary classifier. The classifier is constructed with the support vector machine method. During the study, the regularization parameter C of the classifier was optimized using cross-validation estimates. The proposed solution is useful for human-machine interaction and for medical procedures with biofeedback.
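The described pipeline, Butterworth filtering, feature formation, and an SVM whose regularization parameter C is tuned by cross-validation, maps naturally onto standard scientific-Python tools. The sketch below runs on synthetic epochs; the band limits, epoch shapes, and C grid are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

fs = 250.0                                             # sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 8, 200))            # 200 epochs, 8 channels, 0.8 s
labels = rng.integers(0, 2, size=200)                  # target vs. non-target stimuli

b, a = butter(4, [1.0, 12.0], btype="bandpass", fs=fs) # 4th-order Butterworth filter
filtered = filtfilt(b, a, epochs, axis=-1)

features = filtered.reshape(len(epochs), -1)           # flatten to a feature vector
grid = GridSearchCV(SVC(kernel="linear"),              # C tuned by cross-validation
                    param_grid={"C": np.logspace(-3, 2, 6)}, cv=5)
grid.fit(features, labels)
print(grid.best_params_)
```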
APA, Harvard, Vancouver, ISO, and other styles
49

Calixte, Robine, Ludovic Jason, and Luc Davenne. "Refined and Simplified Simulations for Steel–Concrete–Steel Structures." Applied Mechanics 4, no. 4 (October 18, 2023): 1078–99. http://dx.doi.org/10.3390/applmech4040055.

Full text
Abstract:
Steel–concrete–steel (SCS) sandwich structures have gained increasing interest in new constructions. The external steel plates increase the stiffness, the sustainability, and the strength of the structures under certain extreme loadings. Moreover, the use of these plates as lost prefabricated formwork makes SCS structures modular, enabling higher construction rates. However, for a better understanding of the complex behavior of these structures up to failure, refined numerical simulations are needed to consider various local phenomena, such as concrete crushing in compression and interface interactions. Indeed, the highly non-linear steel–concrete interaction around the dowels is the key point of the composite action. In this contribution, a refined methodology is first proposed and applied to a push-out test. It is especially demonstrated that a regularization technique in compression is needed for the concrete model. Interface elements are also developed and associated with a nonlinear constitutive law between steel connectors and external plates. From this refined methodology, simplified numerical modeling is then deduced and validated. Directly applied to an SCS wall-to-wall junction, this simplified strategy enables the reproduction of the overall behavior, including the elastic phase, the degradation of the system, and the failure mode. The response of each component is particularly analyzed, and the key points of the behavior are highlighted.
APA, Harvard, Vancouver, ISO, and other styles
50

Guerrier, B., H. G. Liu, and C. Be´nard. "Estimation of the Time-Dependent Profile of a Melting Front by Inverse Resolution." Journal of Dynamic Systems, Measurement, and Control 119, no. 3 (September 1, 1997): 574–78. http://dx.doi.org/10.1115/1.2801297.

Full text
Abstract:
The profile and time evolution of a solid/liquid interface in a phase change process are estimated by solving an inverse heat transfer problem, using measurements taken in the solid phase only. One then faces the inverse solution of a heat equation in a variable and a priori unknown 2D domain. This ill-posed problem is solved by a regularization approach: the unknown function (the position of the melting front) is obtained by minimization of a two-component criterion, consisting of a distance between the output of a simulation model and the measured data, to which a penalizing function is added in order to restore the continuity of the inverse operator. A numerical study is developed to analyze the validity domain of the identification method. From simulation tests, it is shown that the minimum signal-to-noise ratio that can be handled depends strongly on the position of the measurement sensors.
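A toy version of the two-component criterion, data misfit plus a penalizing term that restores stability, can be written down for a linear stand-in forward model. The smoothing kernel `G`, the "true" front profile, and the regularization weight below are all hypothetical and merely illustrate the structure of the minimization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 40
s_true = 1.0 + 0.3 * np.sin(np.linspace(0.0, np.pi, n))      # "true" front profile
G = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # stand-in model
d = G @ s_true + rng.normal(0.0, 0.01, n)                    # noisy sensor data

D = np.diff(np.eye(n), axis=0)                               # first-difference operator
alpha = 1e-2                                                 # regularization weight

def criterion(s):
    # Distance between simulated and measured data, plus a smoothness penalty.
    return np.sum((G @ s - d) ** 2) + alpha * np.sum((D @ s) ** 2)

s_est = minimize(criterion, np.ones(n), method="L-BFGS-B").x
```

Increasing `alpha` trades fidelity to the noisy data for smoothness of the recovered front, which is the essential tuning decision behind the signal-to-noise limits discussed in the abstract.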
APA, Harvard, Vancouver, ISO, and other styles