Academic literature on the topic 'Interface regularization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Interface regularization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Interface regularization"

1

Liu, Ji-Chuan. "Shape Reconstruction of Conductivity Interface Problems." International Journal of Computational Methods 16, no. 01 (November 21, 2018): 1850092. http://dx.doi.org/10.1142/s0219876218500925.

Full text
Abstract:
In this paper, we consider a conductivity interface problem to recover the salient features of the inclusion within a body from noisy observation data on the boundary. Based on integral equations, we propose iterative algorithms to detect the location, the size and the shape of the conductivity inclusion. This problem is severely ill-posed and nonlinear, thus we should consider regularization techniques in order to improve the corresponding approximation. We give several examples to show the viability of our proposed reconstruction algorithms.
APA, Harvard, Vancouver, ISO, and other styles
2

Pop, Iuliu Sorin, and Ben Schweizer. "Regularization Schemes for Degenerate Richards Equations and Outflow Conditions." Mathematical Models and Methods in Applied Sciences 21, no. 08 (August 2011): 1685–712. http://dx.doi.org/10.1142/s0218202511005532.

Full text
Abstract:
We analyze regularization schemes for the Richards equation and a time discrete numerical approximation. The original equations can be doubly degenerate, therefore they may exhibit fast and slow diffusion. In addition, we treat outflow conditions that model an interface separating the porous medium from a free flow domain. In both situations we provide a regularization with a non-degenerate equation and standard boundary conditions, and discuss the convergence rates of the approximations.
APA, Harvard, Vancouver, ISO, and other styles
3

Conde Mones, José Julio, Emmanuel Roberto Estrada Aguayo, José Jacobo Oliveros Oliveros, Carlos Arturo Hernández Gracidas, and María Monserrat Morín Castillo. "Stable Identification of Sources Located on Interface of Nonhomogeneous Media." Mathematics 9, no. 16 (August 13, 2021): 1932. http://dx.doi.org/10.3390/math9161932.

Full text
Abstract:
This paper presents a stable method for the identification of sources located on the separation interface of two homogeneous media (where one of them is contained by the other), from measurements yielded by those sources on the exterior boundary of the media. This is an ill-posed problem because numerical instability is present, i.e., minimal errors in the measurement can result in significant changes in the solution. To obtain the proposed stable method, the identification problem is divided into three subproblems, two of which present numerical instability, so regularization methods must be applied to obtain their solutions in a stable form. To manage the numerical instability due to the ill-posedness of these subproblems, the Tikhonov regularization and sequential smoothing methods are used. We illustrate this methodology in a circular and an irregular region to demonstrate the feasibility of the proposed method, which yields convergent and stable solutions for input data with and without noise.
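Tikhonov regularization recurs throughout the abstracts in this list; for readers new to it, here is a minimal numpy sketch on a synthetic ill-posed problem (the operator, data, and parameter values below are illustrative, not taken from any of the cited papers):

```python
import numpy as np

# Illustrative ill-posed problem: rapidly decaying singular values mimic the
# severe ill-posedness described in the abstract (A, b, x_true are synthetic).
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ V.T
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy boundary data

def tikhonov(A, b, lam):
    """Minimize ||A x - b||^2 + lam^2 ||x||^2 via an augmented least-squares system."""
    m = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(m)])
    b_aug = np.concatenate([b, np.zeros(m)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

x_naive = np.linalg.solve(A, b)     # unregularized solve: wrecked by the noise
x_reg = tikhonov(A, b, 1e-4)        # regularized solve: stays near x_true
```

The augmented-system form avoids building the normal equations A^T A + λ²I explicitly, which would square the condition number.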
APA, Harvard, Vancouver, ISO, and other styles
4

Du, Jian, Robert D. Guy, Aaron L. Fogelson, Grady B. Wright, and James P. Keener. "An Interface-Capturing Regularization Method for Solving the Equations for Two-Fluid Mixtures." Communications in Computational Physics 14, no. 5 (November 2013): 1322–46. http://dx.doi.org/10.4208/cicp.180512.210313a.

Full text
Abstract:
Many problems in biology involve gels, which are mixtures composed of a polymer network permeated by a fluid solvent (water). The two-fluid model is a widely used approach to describe gel mechanics, in which both network and solvent coexist at each point of space and their relative abundance is described by their volume fractions. Each phase is modeled as a continuum with its own velocity and constitutive law. In some biological applications, free boundaries separate regions of gel and regions of pure solvent, resulting in a degenerate network momentum equation where the network volume fraction vanishes. To overcome this difficulty, we develop a regularization method to solve the two-phase gel equations when the volume fraction of one phase goes to zero in part of the computational domain. A small and constant network volume fraction is temporarily added throughout the domain in setting up the discrete linear equations, and the same set of equations is solved everywhere. These equations are very poorly conditioned for small values of the regularization parameter, but the multigrid-preconditioned GMRES method we use to solve them is efficient and produces an accurate solution for the full range of relevant regularization parameter values.
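The abstract's device of temporarily adding a small constant volume fraction everywhere can be illustrated on a toy linear system (the matrix below is a stand-in, not the actual two-fluid discretization):

```python
import numpy as np

# Toy analogue: the equations degenerate where the volume fraction theta
# vanishes, so a small constant theta_eps is added everywhere and a single
# nonsingular system is solved over the whole domain.
n = 20
theta = np.concatenate([np.linspace(1.0, 0.0, n // 2), np.zeros(n // 2)])
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil

def assemble(theta_eps):
    th = theta + theta_eps
    return np.diag(th) @ (L + np.eye(n))               # rows vanish where th == 0

A0 = assemble(0.0)       # singular: zero rows wherever theta == 0
A_eps = assemble(1e-8)   # poorly conditioned but uniquely solvable
u = np.linalg.solve(A_eps, np.ones(n))                 # one solve, whole domain
```

As in the paper, the price of the trick is conditioning: the regularized matrix is invertible but its condition number grows as theta_eps shrinks, which is why a good preconditioner matters.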
APA, Harvard, Vancouver, ISO, and other styles
5

Torabi, Solmaz, John Lowengrub, Axel Voigt, and Steven Wise. "A new phase-field model for strongly anisotropic systems." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 465, no. 2105 (January 13, 2009): 1337–59. http://dx.doi.org/10.1098/rspa.2008.0385.

Full text
Abstract:
We present a new phase-field model for strongly anisotropic crystal and epitaxial growth using regularized, anisotropic Cahn–Hilliard-type equations. Such problems arise during the growth and coarsening of thin films. When the anisotropic surface energy is sufficiently strong, sharp corners form and unregularized anisotropic Cahn–Hilliard equations become ill-posed. Our models contain a high-order Willmore regularization, where the square of the mean curvature is added to the energy, to remove the ill-posedness. The regularized equations are sixth order in space. A key feature of our approach is the development of a new formulation in which the interface thickness is independent of crystallographic orientation. Using the method of matched asymptotic expansions, we show the convergence of our phase-field model to the general sharp-interface model. We present two- and three-dimensional numerical results using an adaptive, nonlinear multigrid finite-difference method. We find excellent agreement between the dynamics of the new phase-field model and the sharp-interface model. The computed equilibrium shapes using the new model also match a recently developed analytical sharp-interface theory that describes the rounding of the sharp corners by the Willmore regularization.
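In sharp-interface terms, the Willmore regularization described here augments the anisotropic surface energy with a curvature-squared penalty; a standard form (notation assumed here: γ the orientation-dependent surface energy, n the interface normal, κ the mean curvature, β a small regularization parameter) is

```latex
E = \int_{\Gamma} \left( \gamma(\mathbf{n}) + \frac{\beta}{2}\,\kappa^{2} \right) \mathrm{d}s .
```

When γ is strongly anisotropic the first term alone gives an ill-posed evolution; the β-term penalizes sharp corners and restores well-posedness, rounding corners on a length scale set by β.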
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Ruohan, and Chunping Ren. "A Novel Combined Regularization for Identifying Random Load Sources on Coal-Rock Structure." Mathematical Problems in Engineering 2023 (April 14, 2023): 1–11. http://dx.doi.org/10.1155/2023/5883003.

Full text
Abstract:
This study discusses the practical engineering problem of determining random load sources on coal-rock structures. A novel combined regularization technique, which couples the mollification method (MM) with discrete regularization (DR) and is called the MM-DR technique, was proposed to reconstruct random load sources on coal-rock structures. The MM-DR technique is compared with DR, Tikhonov regularization (TR), and maximum entropy regularization (MER) in load reconstruction. The results show that the reconstructed random load sources are more consistent with the real load sources when the MM-DR technique is combined with particle swarm optimization (PSO) and the L-curve method (named the PSO-L method), and that selecting the optimal ω value of the kernel function helps overcome the ill-posedness of random load source reconstruction and obtain stable approximate solutions. The method proposed in this study provides a theoretical basis for the recognition of the coal-rock interface.
APA, Harvard, Vancouver, ISO, and other styles
7

Loghin, Daniel. "Preconditioned Dirichlet-Dirichlet Methods for Optimal Control of Elliptic PDE." Analele Universitatii "Ovidius" Constanta - Seria Matematica 26, no. 2 (July 1, 2018): 175–92. http://dx.doi.org/10.2478/auom-2018-0024.

Full text
Abstract:
The discretization of optimal control problems for elliptic partial differential equations yields optimality conditions in the form of large sparse linear systems with block structure. Correspondingly, when the solution method is a Dirichlet-Dirichlet non-overlapping domain decomposition method, we need to solve interface problems which inherit the block structure. It is therefore natural to consider block preconditioners acting on the interface variables for the acceleration of Krylov methods with substructuring preconditioners. In this paper we describe a generic technique which employs a preconditioner block structure based on the fractional Sobolev norms corresponding to the domains of the boundary operators arising in the matrix interface problem, some of which may include a dependence on the control regularization parameter. We illustrate our approach on standard linear elliptic control problems. We present analysis which shows that the resulting iterative method converges independently of the size of the problem. We include numerical results which indicate that performance is also independent of the control regularization parameter and exhibits only a mild dependence on the number of subdomains.
APA, Harvard, Vancouver, ISO, and other styles
8

Bosch, Miguel, Penny Barton, Satish C. Singh, and Immo Trinks. "Inversion of traveltime data under a statistical model for seismic velocities and layer interfaces." GEOPHYSICS 70, no. 4 (July 2005): R33–R43. http://dx.doi.org/10.1190/1.1993712.

Full text
Abstract:
We invert large-aperture seismic reflection and refraction data from a geologically complex area on the northeast Atlantic margin to jointly estimate seismic velocities and depths of major interfaces. Our approach combines this geophysical data information with prior information on seismic compressional velocities and the structural interpretation of seismic sections. We constrain expected seismic velocities in the prior model using information from well logs from a nearby area. The layered structure and prior positions of the interfaces follow information from the seismic section obtained by processing the short offsets. Instead of using a conventional regularization technique to smooth the interface-velocity model, we describe the spatial correlation of interfaces and velocities with a geostatistical model, using a multivariate Gaussian probability density function. We impose a covariance function on the velocity field in each layer and on each interface in the model to control the smoothness of the solution. The inversion is performed by minimizing an objective function with two terms, one term measuring traveltime residuals and the other measuring the fit to the statistical model. We calculate the posterior uncertainties and evaluate the relative influence of data and the prior model on estimated interface depths and seismic velocities. The method results in the estimation of velocity and interface geometry beneath a basaltic sill system down to 7 km depth. This method aims to enhance the interpretation process by combining multidisciplinary information in a quantitative model-based approach.
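The two-term objective described in this abstract has the standard geostatistical least-squares form (symbols assumed here, not taken from the paper: d the observed traveltimes, g(m) the forward-modelled traveltimes, m_prior the prior interface-velocity model, C_d and C_m the data and model covariance matrices):

```latex
J(m) = \tfrac{1}{2}\,\big(d - g(m)\big)^{\top} C_d^{-1}\,\big(d - g(m)\big)
     + \tfrac{1}{2}\,\big(m - m_{\mathrm{prior}}\big)^{\top} C_m^{-1}\,\big(m - m_{\mathrm{prior}}\big) .
```

The covariance functions imposed on each layer's velocity field and on each interface enter through C_m, which is what replaces a conventional smoothing regularizer in this approach.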
APA, Harvard, Vancouver, ISO, and other styles
9

Chaboche, J. L., F. Feyel, and Y. Monerie. "Interface debonding models: a viscous regularization with a limited rate dependency." International Journal of Solids and Structures 38, no. 18 (May 2001): 3127–60. http://dx.doi.org/10.1016/s0020-7683(00)00053-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rani, Pooja, Rajneesh Kumar, and Anurag Jain. "An Intelligent System for Heart Disease Diagnosis using Regularized Deep Neural Network." Journal of Applied Research and Technology 21, no. 1 (February 27, 2023): 87–97. http://dx.doi.org/10.22201/icat.24486736e.2023.21.1.1544.

Full text
Abstract:
Heart disease is one of the deadly diseases. Timely detection of the disease can prevent mortality. In this paper, an intelligent system is proposed for the diagnosis of heart disease using clinical parameters at early stages. The system is developed using the Regularized Deep Neural Network model (Reg-DNN). Cleveland heart disease dataset has been used for training the model. Regularization has been achieved by using dropout and L2 regularization. Efficiency of Reg-DNN was evaluated by using hold-out validation method.70% data was used for training the model and 30% data was used for testing the model. Results indicate that Reg-DNN provided better performance than conventional DNN. Regularization has helped to overcome overfitting. Reg-DNN has achieved an accuracy of 94.79%. Results achieved are quite promising as compared to existing systems in the literature. Authors developed a system containing a graphical user interface. So, the system can be easily used by anyone to diagnose heart disease using the clinical parameters.
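The two regularizers named in this abstract, dropout and L2, can be illustrated together on a single-layer logistic model; the data, sizes, and hyperparameters below are invented for the sketch, not the paper's Reg-DNN or the Cleveland dataset:

```python
import numpy as np

# Synthetic stand-in data: 200 patients, 13 clinical-style features.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 13))
w_true = rng.standard_normal(13)
y = (X @ w_true + 0.5 * rng.standard_normal(200) > 0).astype(float)

w = np.zeros(13)
lam, lr, keep = 1e-2, 0.1, 0.8                # L2 strength, step size, keep-probability
for _ in range(500):
    mask = rng.random(X.shape) < keep         # dropout: randomly silence inputs
    Xd = X * mask / keep                      # "inverted dropout" rescaling
    p = 1.0 / (1.0 + np.exp(-Xd @ w))
    w -= lr * (Xd.T @ (p - y) / len(y) + lam * w)   # cross-entropy grad + L2 term

acc = np.mean(((X @ w) > 0) == (y == 1))      # no dropout at evaluation time
```

Both mechanisms fight overfitting from different angles: the L2 term shrinks the weights directly, while dropout prevents the model from relying on any single input feature.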
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Interface regularization"

1

Anile, Lorena Helena dos Santos S. "To Formalize the Land? Analysis of the Impacts of the Land Regularization Programs in the Rio de Janeiro Favelas and Their Interface with Urban Informality." Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34882@1.

Full text
Abstract:
The slum-dwelling population creates strategies to remain in these environments. Far from claiming that informality is a solution to all slum problems, this dissertation understands it as a functional practice. In contrast, land regularization is understood as the legalization of property in informal areas. However, what is seen in most slums in Rio de Janeiro that received these projects is the delivery of a fragile title that does not guarantee the population's permanence, when a title is actually delivered at all. The theme proposed by this dissertation is land regularization in contrast with the urban informality found in the slums of Rio de Janeiro. Urban informality is analyzed as a differentiated form of urban ordering; therefore, it should not be seen as a problem that can be solved by land regularization. In order to understand the introduction of land regularization as a government program in the slums of Rio de Janeiro, three slums were studied as fields of research: Rocinha, Cantagalo, and Acari (Vila Rica and Vila Esperança), all with governmental intervention and different outcomes. Thus, in order to reach the main objective, we sought to deepen the main themes through bibliographical research, a documentary survey of land regularization programs, and interviews with community leaders, managers of local land regularization programs, and government agents. We aim to examine land regularization in its various aspects, observing the challenges faced in guaranteeing the population living in the slums the right to the city.
APA, Harvard, Vancouver, ISO, and other styles
2

Loison, Arthur. "Unified two-scale Eulerian multi-fluid modeling of separated and dispersed two-phase flows." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX009.

Full text
Abstract:
Liquid-gas two-phase flows are present in numerous industrial applications such as aerospace propulsion, nuclear hydraulics, or bubble column reactors in the chemical industry. The simulation of such flows is of primary interest for their understanding and optimization. However, the dynamics of the interface separating the gas from the liquid can be multiscale, which makes simulations of industrial processes computationally too expensive. Modelling efforts have been devoted to cheaper multi-fluid models adapted to particular interface-dynamics regimes, e.g. the separated regime, where the fluids lie on either side of a single smooth surface, or the disperse regime, where inclusions of one fluid (drops or bubbles) are carried by the other. Attempts to couple these models have shown some progress in simulating multiscale flows like atomization, but usually suffer from physical or mathematical drawbacks. This thesis therefore pursues the goal of proposing a unified two-scale modelling framework, with appropriate numerical methods, for this multiscale interface dynamics spanning the separated and disperse regimes. The main contributions related to this modelling effort are: 1) the combination of compressible multi-fluid models from the literature, adapted to either the separated or the disperse regime, into a unified two-scale multi-fluid model relying on Hamilton's stationary action principle; 2) the local coupling of the models through an inter-scale mass transfer that both regularizes the large-scale interface and models mixed-regime phenomena such as primary break-up; 3) the enhancement of the small-scale models for the disperse regimes by adding the dynamics of geometrical quantities for oscillating droplets and pulsating bubbles, built as moments of a kinetic description. From the numerical perspective, finite-volume schemes and relaxation methods are used to solve the systems of conservation laws of the models. Eventually, simulations with the open-source finite-volume solver Josiepy demonstrate the regularization properties of the model on a set of well-chosen numerical setups leading to multiscale interface dynamics.
APA, Harvard, Vancouver, ISO, and other styles
3

Proulx, Louis-Xavier. "Étude numérique et asymptotique d'une approche couplée pour la simulation de la propagation de feux de forêt avec l'effet du vent en terrain complexe." Thèse, 2016. http://hdl.handle.net/1866/20586.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Interface regularization"

1

Sanchez, Justin C., and José C. Principe. "Regularization Techniques for BMI Models." In Brain-Machine Interface Engineering, 99–140. Cham: Springer International Publishing, 2007. http://dx.doi.org/10.1007/978-3-031-01621-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Boutin, B., F. Coquel, and E. Godlewski. "Dafermos Regularization for Interface Coupling of Conservation Laws." In Hyperbolic Problems: Theory, Numerics, Applications, 567–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-75712-2_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thang, Le Quoc, and Chivalai Temiyasathit. "Investigation of Regularization Theory for Four-Class Classification in Brain-Computer Interface." In Future Data and Security Engineering, 275–85. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12778-1_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Araújo, Rogério Palhares Zschaber de, Ana Clara Mourão Moura, and Thaisa Daniele Apóstolo Nogueira. "Creating Collaborative Environments for the Development of Slum Upgrading and Illegal Settlement Regularization Plans in Belo Horizonte, Brazil." In Advances in Electronic Government, Digital Divide, and Regional Development, 86–112. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9090-4.ch005.

Full text
Abstract:
Slum upgrading comprehensive plans and urban regularization plans are two planning tools used to promote integrated interventions in Brazilian slums and illegal settlements. Aiming at urban improvements as well as land regularization and community development, these plans have nevertheless been criticized for being too technical, time-consuming, expensive, and top-down oriented, lacking sufficient participation to achieve community consensus on priorities, under severe budget restrictions in complex and fast-changing realities. This chapter discusses the results of recent experiences in Belo Horizonte, Brazil using the Geodesign framework and geovisualization strategies for collaborative planning in two illegal settlements: Maria Tereza (2016) and Dandara (2018). A methodology for regularization plans was developed, and improvements to the Geodesign interface were tested with the use of open-source web maps. Both experiences provided evidence of how the use of the proposed framework may enhance citizens' participation and improve planning methods based on social and community values.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Interface regularization"

1

Zoumpourlis, Georgios, and Ioannis Patras. "CovMix: Covariance Mixing Regularization for Motor Imagery Decoding." In 2022 10th International Winter Conference on Brain-Computer Interface (BCI). IEEE, 2022. http://dx.doi.org/10.1109/bci53720.2022.9734883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Viriyavisuthisakul, Supatta, Parinya Sanguansat, and Toshihiko Yamasaki. "A Web Demo Interface for Super-Resolution Reconstruction with Parametric Regularization Loss." In ICMR '24: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3652583.3657591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Okamoto, Kei, and Ben Q. Li. "Inverse Design of Solidification Processes." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59449.

Full text
Abstract:
An inverse algorithm is developed for the design of solidification processing systems. The algorithm entails the use of the Tikhonov regularization method, along with an appropriately selected regularization parameter. Both the direct solution of moving boundary problems and the inverse design formulation are presented, along with the L-curve method to select an optimal regularization parameter for inverse design calculations. The design algorithm is applied to determine the appropriate boundary heat flux distribution in order to obtain a unidirectional solidification front in a 2-D cavity by eliminating the effect of natural convection. An inverse calculation is also performed for the case in which the solid-liquid interface is prescribed to vary linearly. The L-curve based regularization method is found to be reasonably accurate for the purpose of designing solidification processing systems.
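The L-curve method named here (and in several other entries on this page) can be sketched numerically: for each candidate parameter λ one plots the log solution norm against the log residual norm and picks the "corner" of the resulting curve, often via maximum curvature. The operator and noise below are invented for the illustration:

```python
import numpy as np

# Toy ill-posed operator with decaying singular values and noisy data.
rng = np.random.default_rng(2)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T
b = A @ np.ones(n) + 1e-5 * rng.standard_normal(n)

# Sweep the Tikhonov parameter and record the L-curve points.
lams = np.logspace(-8, 0, 60)
pts = []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
    pts.append((np.log(np.linalg.norm(A @ x - b)), np.log(np.linalg.norm(x))))

# Corner = point of maximum discrete curvature (endpoints trimmed).
r, s = np.array(pts).T
dr, ds = np.gradient(r), np.gradient(s)
ddr, dds = np.gradient(dr), np.gradient(ds)
curv = np.abs(dr * dds - ds * ddr) / ((dr**2 + ds**2) ** 1.5 + 1e-300)
lam_corner = lams[2:-2][np.argmax(curv[2:-2])]
```

Smaller λ traces the steep branch (small residual, exploding solution norm); larger λ traces the flat branch (over-smoothed solution, growing residual); the corner balances the two.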
APA, Harvard, Vancouver, ISO, and other styles
4

Brake, M. R., M. J. Starr, and D. J. Segalman. "Modeling Constrained Layer Frictional Interfaces With Discontinuous Basis Functions." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-62441.

Full text
Abstract:
Constrained layer frictional interfaces, such as joints, are prevalent in engineering applications. Because these interfaces are often used in built-up structures, reduced order modeling techniques are utilized for developing simulations of them. One limitation of the existing reduced order modeling techniques, though, is the loss of the local kinematics due to regularization of the frictional interfaces. This paper aims to avoid the use of regularization in the modeling of constrained layer frictional interfaces by utilizing a new technique, the discontinuous basis function method. This method supplements the linear mode shapes of the system with a series of discontinuous basis functions that are used to account for nonlinear forces acting on the system. A symmetric, constrained layer frictional interface is modeled as a continuous system connected to two rigid planes by a series of Iwan elements. This symmetric model is used to test the hypothesis that symmetric problems are not subjected to the range of variability seen in physical structures, which have non-uniform pressure and friction distributions. Insights from solving the symmetric problem are used to consider the case where a non-uniform distribution of friction and pressure exists.
APA, Harvard, Vancouver, ISO, and other styles
5

Okamoto, Kei, and Ben Q. Li. "Inverse Design of Time Dependent Solidification Processes." In ASME 2005 Summer Heat Transfer Conference collocated with the ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems. ASMEDC, 2005. http://dx.doi.org/10.1115/ht2005-72556.

Full text
Abstract:
An inverse algorithm is developed for the design of solidification processing systems. The algorithm entails the use of the Tikhonov regularization method, along with an appropriately selected regularization parameter. Both the direct solution of moving boundary problems and the inverse design formulation are presented, along with the L-curve method to select an optimal regularization parameter for inverse design calculations. The design algorithm is applied to determine the optimal boundary heat flux distribution in order to obtain a unidirectional solidification front moving at a constant velocity in a 2-D cavity by eliminating the effect of natural convection. The inverse calculation is also performed for the case in which the solid-liquid interface is prescribed to vary with sine functions. The L-curve based regularization method is found to be reasonably accurate for the purpose of designing time-dependent solidification processing systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Masat, Alessandro, Camilla Colombo, and Arnaud Boutonnet. "GPU-based Augmented Trajectory Propagation: orbital regularization interface and NVIDIA CUDA Tensor Core performance." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-199.

Full text
Abstract:
Several analysis tasks feature the propagation of large sets of trajectories, ranging from monitoring the space debris environment to assessing the compliance of space missions with planetary protection policies. The increasingly stringent accuracy requirements inevitably make high-fidelity analyses more computationally intensive, easily reaching the need to propagate hundreds of thousands of trajectories for one single task. For this reason, high-performance computing (HPC) and GPU (Graphics Processing Unit) computing techniques become enabling technologies for this kind of analysis. The latter, given its accessible cost and hardware implementation, has increasingly been adopted in the past decade: more modern and powerful graphics cards are launched in the market every year, and new GPU-dedicated algorithms are continuously built and adopted; for reference, GPUs are the technology on which the best-known artificial intelligence models are trained. Reference tools for the astrodynamics community are SNAPPshot [1] and CUDAjectory [2]: both aim at efficient propagations, the former being CPU-based software suited for planetary protection analyses, the latter a high-fidelity, high-efficiency GPU ballistic propagator. Both tools work on a traditional step-based logic that takes initial states and studies their step-by-step evolution in time. The proposed work builds on previously obtained results [3], proposing an alternative algorithm logic specifically designed for HPC and GPU computing, in order to extract all the possible performance from these computational architectures. In contrast to traditional, step-based numerical schemes, the Picard-Chebyshev (PC) method starts the integration process from samples of a trajectory guess, which are iteratively updated until the supplied dynamical model is matched.
The core of this numerical scheme is, other than the evaluation of the dynamics function, a sequence of matrix multiplications: this feature makes the method, in principle, highly suitable for parallel and GPU computing. However, the limited number of trajectory nodes required to reach high accuracy levels (100-200) hinders the parallel efficiency of the algorithm: for systems this small, the parallel overhead outweighs the possible acceleration. In [3], an augmented version of the basic Picard-Chebyshev simulation scheme for the propagation of large sets of trajectories is proposed. Instead of integrating each trajectory individually, either sequentially or in parallel, an augmented dynamical system collecting all the samples is built and fed to the PC scheme. This approach outperforms the individual simulations in every parallelisation case, and its GPU implementation is observed to run faster already on low-end graphics cards, compared to a 40-core CPU cluster. This work introduces and implements the latest version of the PC scheme, which features iteration error feedback and second-order dynamics adaptability for improved iteration efficiency [4]. These adaptations reduce the computational time by a factor of four, owing to the reduced number of iterations required to converge. In addition, the algorithm is adapted to the newest-generation NVIDIA graphics cards, also exploiting the novel Tensor Core architecture for double-precision computation, yielding an updated GPU software that is overall 50-100 times faster than its original version. Finally, an interface between the proposed scheme and regularised formulations (e.g., [5]) is proposed, aiming at improving the software's robustness in tackling near-singular and divergent sets of trajectories. Performance and accuracy comparisons against the standard Cartesian propagation case, in terms of the number of trajectory samples required by the PC scheme, are presented.
Regularized formulations require fewer trajectory samples to reach a given relative error threshold than the Cartesian case, resulting in a notable decrease in computational runtime. These improved software capabilities are tested in several critical scenarios, proposing a complete analysis of close encounters, encompassing deep, shallow, and impacting flybys, in the Circular Restricted Three-Body Problem. Here, a further advantage of regularized formulations comes into play: the impact singularity of the gravitational model is removed by construction, making it feasible to treat impacting trajectories and shallow encounters in a single common augmented propagation.
[1] Colombo C., Letizia F., Van Der Eynde J., "SNAPPshot ESA planetary protection compliance verification software Final report V1.0, Technical Report ESA-IPL-POM-MB-LE-2015-315," University of Southampton, Tech. Rep., 2016
[2] Geda M., Noomen R., Renk F., "Massive Parallelization of Trajectory Propagations using GPUs," 2019, Master's thesis, Delft University of Technology, http://resolver.tudelft.nl/uuid:1db3f2d1-c2bb-4188-bd1e-dac67bfd9dab
[3] Masat A., Colombo C., Boutonnet A., "GPU-based high-precision orbital propagation of large sets of initial conditions through Picard-Chebyshev augmentation," 2023, Acta Astronautica, https://doi.org/10.1016/j.actaastro.2022.12.037
[4] Woollands R., Junkins J. L., "Nonlinear differential equation solvers via adaptive Picard-Chebyshev iteration: application in astrodynamics," 2019, Journal of Guidance, Control, and Dynamics, https://doi.org/10.2514/1.G003318
[5] Masat A., Colombo C., "Kustaanheimo-Stiefel variables for planetary protection compliance analysis," 2022, Journal of Guidance, Control, and Dynamics, https://doi.org/10.2514/1.G006255
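The fixed-point structure described in the abstract, updating trajectory samples until the supplied dynamics are matched, can be sketched in a few lines. The sketch below is not the paper's augmented GPU implementation: it is a minimal single-trajectory Picard iteration with Chebyshev quadrature, built on NumPy's Chebyshev helpers, with the function name and parameters chosen purely for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, x0, t0, tf, n=32, iters=50, tol=1e-12):
    """Picard iteration with Chebyshev quadrature for dx/dt = f(t, x):
    x_{k+1}(t) = x0 + integral from t0 to t of f(s, x_k(s)) ds,
    sampled at Chebyshev nodes and updated until the dynamics are matched."""
    k = np.arange(n + 1)
    s = np.cos(np.pi * k / n)                 # Chebyshev points on [-1, 1], descending
    t = 0.5 * (t0 + tf) + 0.5 * (tf - t0) * s # mapped to the integration interval
    x = np.full_like(t, x0, dtype=float)      # constant initial trajectory guess
    for _ in range(iters):
        c = C.chebfit(s, f(t, x), n)          # Chebyshev fit of the dynamics samples
        ci = C.chebint(c, lbnd=-1.0)          # antiderivative vanishing at s = -1 (i.e. t0)
        x_new = x0 + 0.5 * (tf - t0) * C.chebval(s, ci)
        if np.max(np.abs(x_new - x)) < tol:   # iteration error feedback: stop when converged
            return t, x_new
        x = x_new
    return t, x
```

For dx/dt = x with x(0) = 1, the iterates converge to e^t at every node; the augmented scheme of the paper stacks many such sample sets into one large system so the matrix products saturate the GPU.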
APA, Harvard, Vancouver, ISO, and other styles
7

Lamarche-Gagnon, Marc-Étienne, Farshad Navah, Florin Ilinca, Marjan Molavi-Zarandi, and Vincent Raymond. "A Comparative Study Between a Sharp and a Diffuse Topology Optimization Method for Thermal Problems." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-72861.

Full text
Abstract:
The objective of this work is to compare two topology optimization strategies, i.e. density-based (diffuse) and level-set-based (sharp), in thermal problems involving a heat conductor and an insulation material. The fundamental difference between the two methods lies in the representation of the material interface: the density method allows for transitional regions, whereas the level-set method does not. Several regularization techniques, such as perimeter restriction, parameter ramping, level-set gradient restriction, and parametrization, are explored in order to enhance each method's robustness and decrease its sensitivity to initial conditions. It is shown that, in the two test problems investigated, the diffuse method was in general more robust than the sharp one. However, when combined with appropriate regularization techniques, the level-set method led to material distributions that were more nearly optimal.
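The perimeter-type restriction mentioned above is commonly realized in density-based topology optimization as a density filter that averages the design field over a fixed radius, enforcing a minimum length scale and suppressing checkerboard patterns. A minimal sketch, with the function name, grid, and radius chosen for illustration rather than taken from the paper:

```python
import numpy as np

def density_filter(rho, radius):
    """Linear (cone-weighted) density filter: each cell's density becomes a
    weighted average of its neighbours within `radius`, a standard
    regularization in density-based topology optimization."""
    ny, nx = rho.shape
    r = int(np.ceil(radius))
    filtered = np.zeros_like(rho)
    for i in range(ny):
        for j in range(nx):
            wsum, acc = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, radius - np.hypot(di, dj))  # cone weight
                        wsum += w
                        acc += w * rho[ii, jj]
            filtered[i, j] = acc / wsum
    return filtered
```

A uniform field passes through unchanged, while an isolated density spike is smeared over the filter radius, which is exactly how the filter removes features below the prescribed length scale.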
APA, Harvard, Vancouver, ISO, and other styles
8

Brown, Lucy, Suhas Jain, and Parviz Moin. "A Phase Field Model for Simulating the Freezing of Supercooled Liquid Droplets." In International Conference on Icing of Aircraft, Engines, and Structures. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-1454.

Full text
Abstract:
In this work, ice accretion is investigated on a fundamental level using a novel Eulerian phase field approach that captures the phase interface. This method, unlike the Allen-Cahn method, does not lead to spurious phase change (artificial mass loss). This method is also straightforward to implement and avoids normal vector reconstructions along the interface or ghost cells. Additionally, it has well-defined and novel stiffness constraints for accuracy and stability that define parameters in the model such as the kinetic coefficient μ and the interface regularization coefficient γ. An incompressible solver is constructed and used to verify the new method using an analytical Stefan problem solution in both 1D and 2D domains.
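The role of an interface-regularization coefficient like γ can be illustrated in one dimension. The sketch below is a generic conservative phase-field regularization step written as a finite-volume flux balance, balancing diffusion against a sharpening term; the discretization, boundary handling, and parameter values are assumptions for illustration and not the authors' solver.

```python
import numpy as np

def regularize_interface(phi, dx, gamma, eps, dt, steps):
    """Generic 1D conservative interface-regularization sketch:
    d(phi)/dt = gamma * d/dx( eps * dphi/dx - phi*(1 - phi) * n ),
    where n is the interface normal (the sign of the gradient in 1D).
    Written as a flux balance, so the total of phi is conserved."""
    for _ in range(steps):
        dphi = (phi[1:] - phi[:-1]) / dx          # gradient at cell faces
        n = np.sign(dphi)                          # 1D interface normal
        phif = 0.5 * (phi[1:] + phi[:-1])          # phi interpolated to faces
        flux = eps * dphi - phif * (1.0 - phif) * n
        div = np.zeros_like(phi)                   # zero-flux outer boundaries
        div[1:-1] = (flux[1:] - flux[:-1]) / dx
        phi = phi + dt * gamma * div
    return phi
```

For this model equation, the equilibrium profile is the hyperbolic-tangent transition of thickness set by eps, so a tanh initial condition is essentially preserved while its integral stays constant, which is the "no artificial mass loss" property the abstract emphasizes.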
APA, Harvard, Vancouver, ISO, and other styles
9

Gill, Jennifer, Eduardo Divo, and Alain J. Kassab. "Estimating Thermal Contact Resistance Using Sensitivity Analysis." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-62230.

Full text
Abstract:
Characterization of the thermal contact resistance is important in the modeling of multi-component thermal systems that feature mechanically mated surfaces. Thermal resistance is phenomenologically quite complex and depends on many parameters, including the surface characteristics of the interfacial region and the contact pressure. In general, the contact resistance varies as a function of pressure and is non-uniform along the interface. A two-dimensional model problem is solved analytically for a known contact resistance between two mated surfaces. The results from the analytical solution are compared with a boundary element solution to the same problem, thus verifying the implementation of the boundary element method code. An inverse problem is formulated to estimate the variation of the contact resistance by using a boundary element method to determine sensitivity coefficients for specific temperature measurement points in the geometry. Temperatures measured at these discrete locations can be processed to yield the contact resistance between the two mating surfaces using a simple matrix inversion technique. The inversion process is sensitive to noise and requires a regularization technique to obtain physically plausible results. The regularization technique is then extended to a genetic algorithm for performing the inverse analysis. Numerical simulations are carried out to demonstrate the approach. Random noise is used to simulate the effect of input uncertainties in the measured temperatures at the sensors.
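The noise-sensitive matrix inversion the abstract describes is typically stabilized with Tikhonov (ridge) regularization, which trades a small bias for a large reduction in noise amplification. A minimal sketch, where the sensitivity matrix S and measurement vector y are placeholders rather than the paper's boundary-element data:

```python
import numpy as np

def tikhonov_solve(S, y, alpha):
    """Tikhonov-regularized inversion of a linear sensitivity model S q = y:
    minimizes ||S q - y||^2 + alpha * ||q||^2, solved through the
    normal equations (S^T S + alpha I) q = S^T y. Larger alpha damps
    noise amplification at the cost of shrinking the estimate."""
    nq = S.shape[1]
    return np.linalg.solve(S.T @ S + alpha * np.eye(nq), S.T @ y)
```

With alpha = 0 this reduces to ordinary least squares; as alpha grows, the estimated contact-resistance values are progressively shrunk toward zero, which is what keeps noisy data from producing physically implausible results.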
APA, Harvard, Vancouver, ISO, and other styles
10

Zabaras, Nicholas J. "On the Design of Continuum Thermal Transport Systems With Applications to Solidification Processes." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/htd-24335.

Full text
Abstract:
A general class of design problems is examined in thermal transport systems. We mainly address inverse problems with overspecified coupled boundary conditions on one part of the boundary and unknown thermal conditions on another. The methods of choice for the solution of these inverse problems are functional optimization methods using appropriately defined continuum sensitivity and adjoint problems. Conjugate gradient techniques, preconditioning, and regularization are considered within an innovative object-oriented finite element framework. Our particular interest in such inverse transport systems arises from the desire to address the design of directional solidification processes that lead to desired microstructures. As the main application of this paper, we address the calculation of the mold/furnace heat flux conditions such that a desired solidification state (growth rate and temperature gradient) is achieved at the freezing interface. Changes in growth rate and thermal gradient at the freezing interface are known to alter the relative importance of thermal/mass transport and interfacial energy effects, and the magnitude of this partitioning of available driving forces dictates the formation of specific microstructures.
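At each outer design iteration, the conjugate-gradient machinery mentioned above ultimately solves a symmetric positive-definite linear system assembled from sensitivity and adjoint information. A textbook CG sketch, standalone and not the paper's object-oriented finite element framework:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate-gradient solver for a symmetric positive-definite
    system A x = b. Iterates along A-conjugate search directions, driving
    the residual norm below tol."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # next A-conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps for an n-by-n system; in the inverse-design setting, stopping it early acts as an additional (iterative) regularization of the estimated heat flux.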
APA, Harvard, Vancouver, ISO, and other styles
