To view the other types of publications on this topic, follow this link: Smoothed Finite Element.

Dissertations on the topic "Smoothed Finite Element"

Cite a source in APA, MLA, Chicago, Harvard and other citation styles

Select a type of source:

Consult the top 34 dissertations for research on the topic "Smoothed Finite Element".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication in PDF format and read its online abstract, if the relevant parameters are available in the metadata.

Browse dissertations in a wide range of subject areas and compile your bibliography correctly.

1

Bhowmick, Sauradeep. „Advanced Smoothed Finite Element Modeling for Fracture Mechanics Analyses“. University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623240613376967.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Wang, Sili. „An ABAQUS Implementation of the Cell-based Smoothed Finite Element Method Using Quadrilateral Elements“. University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416233762.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
3

Zeng, Wei. „Advanced Development of Smoothed Finite Element Method (S-FEM) and Its Applications“. University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439309306.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
4

Palmerini, Claudia. „On the smoothed finite element method in dynamics: the role of critical time step for linear triangular elements“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find the full text of the source
Abstract:
The finite element method (FEM) is widely used to solve structural problems in many fields of engineering. Over the years, a family of new methods has been developed by combining the standard FEM with the "strain smoothing" technique, leading to the so-called "smoothed finite element methods" (SFEM). In this thesis, attention is focused on the node-based SFEM (NS-FEM) and the edge-based SFEM (ES-FEM), which belong to this new family of methods. After a literature review aimed at highlighting their properties and fundamental aspects, the two methods are compared with the standard FEM. Both methods were implemented in MATLAB. The study is carried out in dynamics, using two time-integration schemes: the central difference method and the Runge-Kutta method. The free-vibration problem of a structural element in plane stress is used as the test problem. The comparison covers two aspects: the computational cost of the methods and the calculation of the critical time step. The results show that the NS-FEM and the ES-FEM have a higher cost than the standard FEM, while, in terms of critical time step, they are comparable to the standard FEM.
APA, Harvard, Vancouver, ISO and other citation styles
5

Duong, Minh Tuan [author], Dieter [academic supervisor] Weichert and Mikhail [academic supervisor] Itskov. „Hyperelastic Modeling and Soft-Tissue Growth Integrated with the Smoothed Finite Element Method-SFEM / Minh Tuan Duong ; Dieter Weichert, Mikhail Itskov“. Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/1129364747/34.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Pringgana, Gede. „Improving resilience of coastal structures subject to tsunami-like waves“. Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/improving-resilience-of-coastal-structures-subject-to-tsunamilike-waves(7fd556e2-0202-48ea-a8bf-39582f9c4c7b).html.

The full text of the source
Abstract:
This thesis investigates tsunami impact on shore-based, low-rise structures in coastal areas. The aims are to investigate tsunami wave inundation in built-up coastal areas with reference to structural response to wave inundation, to assess the performance of current design codes in comparison with validated state-of-the-art numerical models and to improve structural design of residential buildings in tsunami risk areas. Tsunami events over the past few decades have shown that a significant proportion of fatalities can be attributed to the collapse of building infrastructure due to various actions of the incident waves. Although major tsunami events have demonstrated the potential catastrophic effects on built infrastructure, current building codes have no detailed or consistent guidance on designing structures in tsunami-prone regions. Furthermore, considerable differences in existing empirical formulae highlight that new research is necessary to appropriately address the particularities of the tsunami-induced forces and structure response in the design standards. In this thesis, numerical modelling methods are used to simulate hydrodynamic impact on shore-based coastal structures. The hydrodynamic simulations were conducted using a novel meshless numerical method, smoothed particle hydrodynamics (SPH), which is coupled with the finite element (FE) method to model structural behaviour. The SPH method was validated with experimental data for bore impact on an obstacle using a convergence study to identify the optimum particle size to capture the hydrodynamics. The FE model was validated against experimental data for plates under transient blast loads, which have load characteristics similar to impulsive tsunami-induced bore impacts. One of the contributions of the thesis is the use of a new coupling method for the SPH-based software DualSPHysics and the FE-based software ABAQUS. Using an SPH particle spacing of the same size as the FE mesh enables the SPH output pressure to be directly applied as an input to the structural response model. Using this approach, the effects of arrangement and orientation of single and multiple low-rise structures are explored. Test cases were performed in 2-D and 3-D involving a discrete structure and multiple structures. The 3-D SPH simulations with single and multiple structures used an idealised coastal structure in the form of a cube with different on-plan orientations (0°, 30°, 45° and 60°) relative to the oncoming bore direction. The single structure cases were intended to study the improvement of the resilience of coastal structures by reducing the acting pressures on the vertical surfaces by changing the structure’s orientation. It was found that the pressure exerted on the vertical surface of the structure can be reduced by up to 50% in the 60° orientation case. The multiple structure models were conducted to examine shielding and flow focusing phenomena in tsunami events. The results reveal that the distance between two adjacent front structures can greatly influence the pressure exerted on the rear structure. This thesis also demonstrates the capability of the SPH numerical method in simulating standard coastal engineering problems such as storm-wave impact on a recurve wall in 2-D. The idealised structures were represented as standard timber construction and finite element modelling was used to determine the corresponding stress distributions under tsunami impact.
Following the comparison of the method used in this thesis with commonly used design equations based on the quasi-static approach, large differences in stress prediction were observed. In some cases the loads according to the design equations predicted maximum stresses almost one order of magnitude lower. This large discrepancy clearly shows the potential for non-conservative design by quasi-static approaches. The new model for the simulation of tsunami impact on discrete and multiple structures shows that the resilience of a coastal structure can be improved by changing the orientation and arrangement. The characteristics of tsunami waves during propagation and bore impact pressures on structures can be assessed in great detail with the combined SPH and FE modelling strategy. The techniques outlined in this thesis will enable engineers to gain a better insight into tsunami wave-structure interaction with a view towards resilience optimisation of structures vulnerable to tsunami impact events.
APA, Harvard, Vancouver, ISO and other citation styles
7

Yan, Yinzhou. „High-quality laser machining of alumina ceramics“. Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/highquality-laser-machining-of-alumina-ceramics(3dd60fb6-5bda-4cc9-8f00-f49b170ca6aa).html.

The full text of the source
Abstract:
Alumina is one of the most commonly used engineering ceramics for a variety of applications ranging from microelectronics to prosthetics due to its desirable properties. Unfortunately, conventional machining techniques generally lead to fracture, tool failure, low surface integrity, high energy consumption, low material removal rate, and high tool wear during machining due to high hardness and brittleness of the ceramic material. Laser machining offers an alternative for rapid processing of brittle and hard engineering ceramics. However, the material properties, especially the high thermal expansion coefficient and low thermal conductivity, may cause ceramic fracture due to thermal damage. Striation formation is another defect in laser cutting. These drawbacks limit advanced ceramics in engineering applications. In this work, various lasers and machining techniques are investigated to explore the feasibility of high-quality laser machining different thicknesses of alumina. The main contributions include: (i) Fibre laser crack-free cutting of thick-section alumina (up to 6-mm-thickness). A three-dimensional numerical model considering the material removal was developed to study the effects of process parameters on temperature, thermal-stress distribution, fracture initiation and propagation in laser cutting. A rapid parameters optimisation procedure for crack-free cutting of thick-section ceramics was proposed. (ii) Low power CW CO2 laser underwater machining of closed cavities (up to 2-mm depth) in alumina was demonstrated with high-quality in terms of surface finish and integrity. A three-dimensional thermal-stress model and a two-dimensional fluid smooth particle hydrodynamic model (SPH) were developed to investigate the physical processes during CO2 laser underwater machining. SPH modelling has been applied for the first time to studying laser processing of ceramics. (iii) Striation-free cutting of alumina sheets (1-mm thickness) is realised using a nano-second pulsed DPSS Nd: YAG laser, which demonstrates the capability of high average power short pulsed lasers in high-quality macro-machining. A mechanism of pulsed laser striation-free cutting was also proposed. The present work opens up new opportunities for applying lasers for high-quality machining of engineering ceramics.
APA, Harvard, Vancouver, ISO and other citation styles
8

Yang, Qing. „SPH Simulation of Fluid-Structure Interaction Problems with Application to Hovercraft“. Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26785.

The full text of the source
Abstract:
A Computational Fluid Dynamics (CFD) tool is developed in this thesis to solve complex fluid-structure interaction (FSI) problems. The fluid domain is based on Smoothed Particle Hydrodynamics (SPH) and the structural domain employs a large-deformation Finite Element Method (FEM). Validation tests of SPH and FEM are first performed individually. A loosely-coupled SPH-FEM model is then proposed for solving FSI problems. Validation results of two benchmark FSI problems are illustrated (Antoci et al., 2007; Souto-Iglesias et al., 2008). The first test case is flow in a sloshing tank interacting with an elastic body and the second one is dam-break flow through an elastic gate. The results obtained with the SPH-FEM model show good agreement with published results and suggest that the SPH-FEM model is a viable and effective numerical tool for FSI problems. This research is then applied to simulate a two-dimensional free-stream flow interacting with a deformable, pressurized surface, such as an ACV/SES bow seal. The dynamics of deformable surfaces such as the skirt/seal systems of the ACV/SES are modelled with the large-deformation FEM model. The fluid part, including the air inside the chamber and the water, is simulated by SPH. A validation case is performed to investigate the application of the SPH-FEM model to ACV/SES via comparison with experimental data (Zalek and Doctors, 2010). The thesis provides the theory of the SPH and FEM models incorporated and the derivation of the loosely-coupled SPH-FEM model. The validation results suggest that this SPH-FEM model can be readily applied to the skirt/seal dynamics of ACV/SES interacting with free-surface flow.
Ph. D.
APA, Harvard, Vancouver, ISO and other citation styles
9

Apel, Th. „Interpolation of non-smooth functions on anisotropic finite element meshes“. Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801341.

The full text of the source
Abstract:
In this paper, several modifications of the quasi-interpolation operator of Scott and Zhang (Math. Comp. 54(1990)190, 483--493) are discussed. The modified operators are defined for non-smooth functions and are suited for the application on anisotropic meshes. The anisotropy of the elements is reflected in the local stability and approximation error estimates. As an application, an example is considered where anisotropic finite element meshes are appropriate, namely the Poisson problem in domains with edges.
APA, Harvard, Vancouver, ISO and other citation styles
10

Junqueira, Luiz Antonio Custódio Manganelli. „Estudo de suavizadores para o método Multigrid algébrico baseado em wavelet“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3143/tde-18082008-141740/.

The full text of the source
Abstract:
This work analyzes the behavior of the WAMG (Wavelet-Based Algebraic Multigrid) method, a numerical method for solving systems of linear equations developed at LMAG (Applied Electromagnetism Laboratory), with respect to a variety of smoothers. The fact that the vectors composing the Prolongation and Restriction matrix operators of the WAMG method are orthonormal enables a range of theoretical and experimental analyses, revealing characteristics that are not accessible with the other Multigrid (MG) methods, namely Geometric Multigrid (GMG) and Algebraic Multigrid (AMG). The WAMG V-Cycle method with the Haar filter is tested on a variety of linear equation systems, varying the smoother, the relaxation coefficient of the Damped Jacobi and Successive Over-Relaxation (SOR) smoothers, and the pre- and post-smoothing configuration. The tested smoothers include the stationary iterative methods Damped Jacobi, SOR and diagonal-type Sparse Approximate Inverse (SPAI-0), as well as proposed methods with optimized smoothing characteristics. For comparison purposes, the non-stationary iterative methods Conjugate Gradients, Bi-Conjugate Gradients and ICCG are also tested as smoothers. The test results are presented and discussed.
APA, Harvard, Vancouver, ISO and other citation styles
11

Armstrong, Michelle Hine, Tepole Adrián Buganza, Ellen Kuhl, Bruce R. Simon and Geest Jonathan P. Vande. „A Finite Element Model for Mixed Porohyperelasticity with Transport, Swelling, and Growth“. Public Library of Science, 2016. http://hdl.handle.net/10150/614631.

The full text of the source
Abstract:
The purpose of this manuscript is to establish a unified theory of porohyperelasticity with transport and growth and to demonstrate the capability of this theory using a finite element model developed in MATLAB. We combine the theories of volumetric growth and mixed porohyperelasticity with transport and swelling (MPHETS) to derive a new method that models growth of biological soft tissues. The conservation equations and constitutive equations are developed for both solid-only growth and solid/fluid growth. An axisymmetric finite element framework is introduced for the new theory of growing MPHETS (GMPHETS). To illustrate the capabilities of this model, several example finite element test problems are considered using model geometry and material parameters based on experimental data from a porcine coronary artery. Multiple growth laws are considered, including time-driven, concentration-driven, and stress-driven growth. Time-driven growth is compared against an exact analytical solution to validate the model. For concentration-dependent growth, changing the diffusivity (representing a change in drug) fundamentally changes growth behavior. We further demonstrate that for stress-dependent, solid-only growth of an artery, growth of an MPHETS model results in a more uniform hoop stress than growth in a hyperelastic model for the same amount of growth time using the same growth law. This may have implications in the context of developing residual stresses in soft tissues under intraluminal pressure. To our knowledge, this manuscript provides the first full description of an MPHETS model with growth. The developed computational framework can be used in concert with novel in-vitro and in-vivo experimental approaches to identify the governing growth laws for various soft tissues.
APA, Harvard, Vancouver, ISO and other citation styles
12

Lewin, Susanne. „Biomechanics of Arterial Smooth Muscle. : - Analyzing vascular adaptation of large elastic arteries using in vitro experiments and 3D finite element modeling“. Thesis, KTH, Hållfasthetslära (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176406.

The full text of the source
Abstract:
A number of cardiovascular diseases (e.g. aortic aneurysm, aortic dissection and atherosclerosis) are associated with altered biomechanical properties of the vascular wall. This thesis studies vascular adaptation and the resulting alteration in biomechanical properties. Vascular adaptation refers to alterations in the vascular wall in response to changes in its environment, and it can be examined through in vitro experiments on isolated blood vessels. Previous studies have shown that adaptation in arteries is induced by alterations in blood pressure and mechanical stretch (Bakker et al. 2004; Tuna et al. 2013), but most of them address the passive material of the vessel wall, so little is known about the adaptation of smooth muscle cells and its resulting effect on the active tone. Murtada et al. (ongoing work) have indicated that adaptation at low stretch reduces the active tone, whereas adaptation at high stretch increases it. To verify these previously obtained results on smooth muscle cell adaptation, in vitro experiments were conducted in this project in a myograph on the descending thoracic aorta of mice. The experiments included a three-hour adaptation during which the samples were contracted with an agonist at the optimal stretch, or at a stretch lower than the optimal one. Consistent with the previous study, a decrease of around 20% in active tone was observed after adaptation at low stretch. Based on the experimental data, a constitutive framework was developed that allows numerical studies of vascular adaptation of the active tone. The constitutive framework was then implemented in the FEM software ABAQUS, and a thick-walled 3D artery was analyzed. Implementing the model in FEM software provides a platform for solving more complex boundary value problems, so that more challenging boundary conditions can be studied.
APA, Harvard, Vancouver, ISO and other citation styles
13

Nhu, Viet-Hung. „Dialogues numériques entre échelles tribologiques“. Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0043/document.

The full text of the source
Abstract:
In tribology, numerical modeling has become an indispensable tool for studying a contact and overcoming experimental limitations. To gain a better understanding of the phenomena involved, models are no longer confined to a single scale but bring several scales into play, making the concept of the tribological triplet more unavoidable than ever. Working in this spirit and building on the Non Smooth Contact Dynamics approach, whose main features are recalled, we propose to take two steps: to propose models that provide quantitative results, and to put in place the first building blocks of a homogenization at the contact level (representative elementary volume, REV). In the first case, coupling finite elements and discrete elements within the same simulation aims to propose more "realistic" models. Even though the interface used is already present at the heart of the contact and does not evolve, it highlights the use of measurement tools that link the motion of the particles to dynamic instabilities, and it yields not only qualitative but also quantitative results, since the comparison with experimental strain rates shows very good agreement. In the second case, the REV under tribological loading is studied in order to extend homogenization techniques to contact problems and thus avoid describing the interfaces at large scales, by finding a way to homogenize the heterogeneous behavior of the interface and couple it with the continuous behavior of the bodies in contact: averaged quantities are passed, in one direction, from the microscopic scale up to the macroscopic scale of the first bodies, while in the other direction local data at the macroscopic scale are used as boundary conditions at the microscopic scale.
APA, Harvard, Vancouver, ISO and other citation styles
14

Schmidt, Martin-Pierre. „Computational generation and optimization of mechanical structures On structural topology optimization using graded porosity control Structural topology optimization with smoothly varying fiber orientations“. Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR01.

The full text of the source
Abstract:
This thesis studies and develops methods of mathematical modeling, numerical analysis and optimization applied to the generation of 3D objects. The proposed approaches are used to generate lattice structures and continuum structures by topology optimization.
APA, Harvard, Vancouver, ISO and other citation styles
15

Rees, Glyn Owen. „Efficient "black-box" multigrid solvers for convection-dominated problems“. Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/efficient-blackbox-multigrid-solvers-for-convectiondominated-problems(d49ec3ea-1dc2-4238-b0c1-0688e5944ddd).html.

The full text of the source
Abstract:
The main objective of this project is to develop a "black-box" multigrid preconditioner for the iterative solution of finite element discretisations of the convection-diffusion equation with dominant convection. This equation can be considered a stand alone scalar problem or as part of a more complex system of partial differential equations, such as the Navier-Stokes equations. The project will focus on the stand alone scalar problem. Multigrid is considered an optimal preconditioner for scalar elliptic problems. This strategy can also be used for convection-diffusion problems, however an appropriate robust smoother needs to be developed to achieve mesh-independent convergence. The focus of the thesis is on the development of such a smoother. In this context a novel smoother is developed referred to as truncated incomplete factorisation (tILU) smoother. In terms of computational complexity and memory requirements, the smoother is considerably less expensive than the standard ILU(0) smoother. At the same time, it exhibits the same robustness as ILU(0) with respect to the problem and discretisation parameters. The new smoother significantly outperforms the standard damped Jacobi smoother and is a competitor to the Gauss-Seidel smoother (and in a number of important cases tILU outperforms the Gauss-Seidel smoother). The new smoother depends on a single parameter (the truncation ratio). The project obtains a default value for this parameter and demonstrated the robust performance of the smoother on a broad range of problems. Therefore, the new smoothing method can be regarded as "black-box". Furthermore, the new smoother does not require any particular ordering of the nodes, which is a prerequisite for many robust smoothers developed for convection-dominated convection-diffusion problems. To test the effectiveness of the preconditioning methodology, we consider a number of model problems (in both 2D and 3D) including uniform and complex (recirculating) convection fields discretised by uniform, stretched and adaptively refined grids. The new multigrid preconditioner within block preconditioning of the Navier-Stokes equations was also tested. The numerical results gained during the investigation confirm that tILU is a scalable, robust smoother for both geometric and algebraic multigrid. Also, comprehensive tests show that the tILU smoother is a competitive method.
APA, Harvard, Vancouver, ISO and other citation styles
16

Thess, M. „Parallel Multilevel Preconditioners for Problems of Thin Smooth Shells“. Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801416.

The full text of the source
Abstract:
In the last years multilevel preconditioners like BPX became more and more popular for solving second-order elliptic finite element discretizations by iterative methods. P. Oswald has adapted these methods for discretizations of the fourth order biharmonic problem by rectangular conforming Bogner-Fox-Schmidt elements and nonconforming Adini elements and has derived optimal estimates for the condition numbers of the preconditioned linear systems. In this paper we generalize the results from Oswald to the construction of BPX and Multilevel Diagonal Scaling (MDS-BPX) preconditioners for the elasticity problem of thin smooth shells of arbitrary forms where we use Koiter's equations of equilibrium for an homogeneous and isotropic thin shell, clamped on a part of its boundary and loaded by a resultant on its middle surface. We use the two discretizations mentioned above and the preconditioned conjugate gradient method as iterative method. The parallelization concept is based on a non-overlapping domain decomposition data structure. We describe the implementations of the multilevel preconditioners. Finally, we show numerical results for some classes of shells like plates, cylinders, and hyperboloids.
APA, Harvard, Vancouver, ISO and other citation styles
17

Wunderlich, Linus Maximilian [author], Barbara [academic supervisor] [reviewer] Wohlmuth, Alessandro [reviewer] Reali and Olaf [reviewer] Steinbach. „Hybrid Finite Element Methods for Non-linear and Non-smooth Problems in Solid Mechanics / Linus Maximilian Wunderlich ; Gutachter: Alessandro Reali, Barbara Wohlmuth, Olaf Steinbach ; Betreuer: Barbara Wohlmuth“. München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1147565961/34.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
18

Silva, Everton da. „Uma formulação de otimização topológica com restrição de tensão suavizada“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/61396.

The full text of the source
Abstract:
A topology optimization formulation for finding the minimum volume of two-dimensional linear elastic continuous structures in plane stress, subject to a von Mises stress constraint, was implemented in this study. The extended domain was discretized using the Taylor nonconforming finite element. Nodal values of the stress tensor field were computed by global smoothing. The stress singularity phenomenon was bypassed by a stress relaxation method that penalizes the constitutive tensor. A single p-norm global stress measure was used, which reduces the computational cost of the sensitivity analysis. The sensitivities of the objective function and of the stress constraint were derived analytically. The topology optimization problem was solved by a Sequential Linear Programming algorithm. The checkerboard and mesh-dependence phenomena were avoided by a linear density filter. The formulation was tested on three benchmark cases. In the first case, a tip-loaded short cantilever beam was optimized using a sequence of three different objective function penalizations; the converged design had approximately 27% of the initial volume, with a small proportion of intermediate-density elements. In the second case, the same domain was subjected to bending, resulting in a well-defined two-bar design with 16.25% of the initial volume. In the third case, an L-shaped structural component was studied, precisely because it favors the appearance of a stress concentration at its reentrant corner; the optimizer generated a well-defined structure, although a small region of stress concentration remained in the final topology.
APA, Harvard, Vancouver, ISO and other citation styles
19

Fabrèges, Benoit. „Une méthode de prolongement régulier pour la simulation d'écoulements fluide/particules“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00763895.

The full text of the source
Abstract:
In this work we study a finite element type method for simulating the motion of immersed rigid particles. The method developed here is a fictitious domain method. The idea is to seek a smooth extension of the exact solution to the whole fictitious domain, in order to obtain a regular solution over the entire domain and to recover the optimal order of the error with first-order elements. The smooth extension is sought by minimizing a functional whose gradient is given by the solution of a new fluid problem involving a single-layer distribution in the right-hand side. We carry out a numerical analysis, in the scalar case, of the approximation of this distribution by a combination of Dirac masses. One advantage of this method is that fast solvers on Cartesian meshes can be used while preserving the optimal order of the error. Another advantage comes from the fact that the operators are not modified; only the right-hand sides depend on the geometry of the initial domain. We have also written a parallel C++ code in two and three dimensions for simulating fluid/rigid-particle flows with this method, and we present a description of its main components.
APA, Harvard, Vancouver, ISO and other citation styles
20

Robin, Martin. „Contribution à l'étude de l'adhérence des structures du type couche sur substrat par modes de Rayleigh générés et détectés par sources laser“. Thesis, Valenciennes, 2019. http://www.theses.fr/2019VALE0015/document.

The full text of the source
Abstract:
The non-destructive characterization of the adhesion of layer-on-substrate structures is an important industrial and academic issue. This type of sample is used in many applications, and its lifetime depends largely on how well the films adhere to the substrate. Adhesion significantly modifies the dispersive behavior of the surface acoustic waves propagating in this type of structure. To generate and detect these waves, a laser-ultrasonics setup was used. First, we sought to circumvent the interpretation difficulties usually encountered when assessing adhesion with surface acoustic waves: variations in layer thickness can influence the dispersion of the waves in a way comparable to that of adhesion. For this purpose, polymer films of quasi-constant thickness were used and applied to an aluminum substrate. These films are also transparent, which makes it possible to focus the laser pulse that generates the acoustic waves through the film, directly onto the substrate surface, and thus to place the acoustic source at the film-substrate interface. The influence of the source position on the dispersive behavior of the surface acoustic waves, and hence on the assessment of adhesion quality, is then studied experimentally and through finite element simulations. Finally, the adhesion of several samples is characterized using the dispersion curves obtained by applying the Matrix-Pencil method to the experimental results. Using an inversion algorithm, the interfacial stiffnesses characterizing the adhesion of the analyzed samples are estimated.
APA, Harvard, Vancouver, ISO and other citation styles
21

Nhu, Viet Hung. „Dialogues numériques entre échelles tribologiques“. Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00876855.

The full text of the source
Abstract:
In tribology, numerical modeling has become an indispensable tool for studying a contact and overcoming experimental limitations. To gain a better understanding of the phenomena involved, models are no longer confined to a single scale but bring several scales into play, making the concept of the tribological triplet more unavoidable than ever. Working in this spirit and building on the Non Smooth Contact Dynamics approach, whose main features are recalled, we propose to take two steps: to propose models that provide quantitative results, and to put in place the first building blocks of a homogenization at the contact level (representative elementary volume, REV). In the first case, coupling finite elements and discrete elements within the same simulation aims to propose more "realistic" models. Even though the interface used is already present at the heart of the contact and does not evolve, it highlights the use of measurement tools that link the motion of the particles to dynamic instabilities, and it yields not only qualitative but also quantitative results, since the comparison with experimental strain rates shows very good agreement. In the second case, the REV under tribological loading is studied in order to extend homogenization techniques to contact problems and thus avoid describing the interfaces at large scales, by finding a way to homogenize the heterogeneous behavior of the interface and couple it with the continuous behavior of the bodies in contact: averaged quantities are passed, in one direction, from the microscopic scale up to the macroscopic scale of the first bodies, while in the other direction local data at the macroscopic scale are used as boundary conditions at the microscopic scale.
APA, Harvard, Vancouver, ISO and other citation styles
22

Martin, Sylvain. „Contribution à la modélisation du frittage en phase solide“. Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2144/document.

The full text of the source
Abstract:
This thesis deals with the modeling of solid-state sintering at the scale of a Representative Elementary Volume of the material pellet; the targeted field of application is the fabrication of nuclear fuel. The goal is to develop numerical tools that contribute to a better understanding of the physical phenomena involved in sintering, using a multi-scale approach. First, a model at the scale of a particle packing, based on the Discrete Element Method, is introduced. Several studies using this approach have been proposed in the literature in recent years; as far as we know, all of them use explicit discrete methods. Although some results have been validated experimentally, one limitation comes from the use of explicit schemes, whose critical time step is very small. To obtain an acceptable time step, the mass of the particles is artificially increased by several orders of magnitude, a practice that has been shown, in some cases, to reduce the rearrangement of particles within the packing. In this thesis, an implicit Discrete Element Method called Contact Dynamics, a non-smooth method, is adapted to sintering. It allows time steps much larger than those of explicit discrete methods and does not require artificially scaling up the mass of the particles. The comparison between Contact Dynamics and the explicit Discrete Element Method shows that our approach leads to a more realistic representation of rearrangement. An experimental validation by synchrotron X-ray microtomography and a parametric study on the sintering of bidisperse powders are also presented to show the capabilities of the discrete approach applied to sintering. The second part is devoted to a mechanical model at the scale of two particles, using the Finite Element Method, which aims at a more precise representation of the behavior of two grains in contact. Grain-boundary, surface and volume diffusion can be represented; so far, only surface and grain-boundary diffusion have been studied. Although some optimizations are still needed for the code to be fully operational, several aspects already appear decisive, such as the curvature of the surface near the grain boundary. In the future, the Contact Dynamics model of sintering could be completed and improved using the results provided by the grain-scale mechanical model.
APA, Harvard, Vancouver, ISO and other citation styles
23

Rasquin, Michel. „Numerical tools for the large eddy simulation of incompressible turbulent flows and application to flows over re-entry capsules“. Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210118.

The full text of the source
Abstract:
The context of this thesis is the numerical simulation of turbulent flows at moderate Reynolds numbers and the improvement of the capabilities of an in-house 3D unsteady and incompressible flow solver called SFELES to simulate such flows.

In addition to this abstract, this thesis includes five other chapters.

The second chapter of this thesis presents the numerical methods implemented in the two CFD solvers used as part of this work, namely SFELES and PHASTA.

The third chapter concentrates on the implementation of a new library called FlexMG. This library allows the use of various types of iterative solvers preconditioned by algebraic multigrid methods, which require much less memory to solve linear systems than a direct sparse LU solver available in SFELES. Multigrid is an iterative procedure that relies on a series of increasingly coarser approximations of the original 'fine' problem. The underlying concept is the following: low wavenumber errors on fine grids become high wavenumber errors on coarser levels, which can be effectively removed by applying fixed-point methods on coarser levels.

Two families of algebraic multigrid preconditioners have been implemented in FlexMG, namely smooth aggregation-type and non-nested finite element-type. Unlike pure gridless multigrid, both of these families use the information contained in the initial fine mesh. A hierarchy of coarse meshes is also needed for the non-nested finite element-type multigrid so that our approaches can be considered as hybrid. Our aggregation-type multigrid is smoothed with either a constant or a linear least square fitting function, whereas the non-nested finite element-type multigrid is already smooth by construction. All these multigrid preconditioners are tested as stand-alone solvers or coupled with a GMRES (Generalized Minimal RESidual) method. After analyzing the accuracy of the solutions obtained with our solvers on a typical test case in fluid mechanics (unsteady flow past a circular cylinder at low Reynolds number), their performance in terms of convergence rate, computational speed and memory consumption is compared with the performance of a direct sparse LU solver as a reference. Finally, the importance of using smooth interpolation operators is also underlined in this work.

The fourth chapter is devoted to the study of subgrid scale models for the large eddy simulation (LES) of turbulent flows.

It is well known that turbulence features a cascade process by which kinetic energy is transferred from the large turbulent scales to the smaller ones. Below a certain size, the smallest structures are dissipated into heat because of the effect of the viscous term in the Navier-Stokes equations.

In the classical formulation of LES models, all the resolved scales are used to model the contribution of the unresolved scales. However, most of the energy exchanges between scales are local, which means that the energy of the unresolved scales derives mainly from the energy of the small resolved scales.

In this fourth chapter, constant-coefficient-based Smagorinsky and WALE models are considered under different formulations. This includes a classical version of both the Smagorinsky and WALE models and several scale-separation formulations, where the resolved velocity field is filtered in order to separate the small turbulent scales from the large ones. From this separation of turbulent scales, the strain rate tensor and/or the eddy viscosity of the subgrid scale model is computed from the small resolved scales only. One important advantage of these scale-separation models is that the dissipation they introduce through their subgrid scale stress tensor is better controlled compared to their classical version, where all the scales are taken into account without any filtering. More precisely, the filtering operator (based on a top hat filter in this work) allows the decomposition u' = u - ubar, where u is the resolved velocity field (large and small resolved scales), ubar is the filtered velocity field (large resolved scales) and u' is the small resolved scales field.

At last, two variational multiscale (VMS) methods are also considered.

The philosophy of the variational multiscale methods differs significantly from the philosophy of the scale-separation models. Concretely, the discrete Navier-Stokes equations have to be projected into two disjoint spaces so that a set of equations characterizes the evolution of the large resolved scales of the flow, whereas another set governs the small resolved scales.

Once the Navier-Stokes equations have been projected into these two spaces associated with the large and small scales respectively, the variational multiscale method consists in adding an eddy viscosity model to the small scales equations only, leaving the large scales equations unchanged. This projection is obvious in the case of a full spectral discretization of the Navier-Stokes equations, where the evolution of the large and small scales is governed by the equations associated with the low and high wavenumber modes respectively. This projection is more complex to achieve in the context of a finite element discretization.

For that purpose, two variational multiscale concepts are examined in this work.

The first projector is based on the construction of aggregates, whereas the second projector relies on the implementation of hierarchical linear basis functions.

In order to gain some experience in the field of LES modeling, some of the above-mentioned models were implemented first in another code called PHASTA and presented along with SFELES in the second chapter.

Finally, the relevance of our models is assessed with the large eddy simulation of a fully developed turbulent channel flow at a low Reynolds number under statistical equilibrium. In addition to the analysis of the mean eddy viscosity computed for all our LES models, comparisons in terms of shear stress, root mean square velocity fluctuation and mean velocity are performed with a fully resolved direct numerical simulation as a reference.

The fifth chapter of the thesis focuses on the numerical simulation of the 3D turbulent flow over a re-entry Apollo-type capsule at low speed with SFELES. The Reynolds number based on the heat shield is set to Re=10^4 and the angle of attack is set to 180º, that is the heat shield facing the free stream. Only the final stage of the flight is considered in this work, before the splashdown or the landing, so that the incompressibility hypothesis in SFELES is still valid.

Two LES models are considered in this chapter, namely a classical and a scale-separation version of the WALE model. Although the capsule geometry is axisymmetric, the flow field in its wake is not and induces unsteady forces and moments acting on the capsule. The characterization of the phenomena occurring in the wake of the capsule and the determination of their main frequencies are essential to ensure the static and dynamic stability during the final stage of the flight.

Visualizations by means of 3D isosurfaces and 2D slices of the Q-criterion and the vorticity field confirm the presence of a large meandering recirculation zone characterized by a low Strouhal number, that is St≈0.15.

Due to the detachment of the flow at the shoulder of the capsule, a resulting annular shear layer appears. This shear layer is then affected by some Kelvin-Helmholtz instabilities and ends up rolling up, leading to the formation of vortex rings characterized by a high frequency. This vortex shedding depends on the Reynolds number so that a Strouhal number St≈3 is detected at Re=10^4.

Finally, the analysis of the force and moment coefficients reveals the existence of a lateral force perpendicular to the streamwise direction in the case of the scale-separation WALE model, which suggests that the wake of the capsule may have some preferential orientations during the vortex shedding. In the case of the classical version of the WALE model, no lateral force has been observed so far so that the mean flow is thought to be still axisymmetric after 100 units of non-dimensional physical time.

Finally, the last chapter of this work recalls the main conclusions drawn from the previous chapters.
Doctorate in Engineering Sciences

APA, Harvard, Vancouver, ISO and other citation styles
24

Chiang, Jung-Lung, and 江榮龍. „Use Node-Based Smoothed Finite Element Method to study on the Beam“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/tvx5d9.

The full text of the source
Abstract:
Master's degree
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
105
Analysis software is widely used in the engineering field and can solve various problems for different boundary conditions, material properties and environments. However, for complex problems it is difficult to obtain the real solution accurately. In this study, the smoothed finite element method is discussed and compared with the finite element method and with hand-calculated solutions, and the resulting solutions are compared and discussed. Fortran programs were used to simulate a cantilever beam under axial and concentrated forces, and the traditional hand calculations were compared with the computer analysis. The results of this study show that the displacements, strains, stresses and strain energy are approximated well.
APA, Harvard, Vancouver, ISO and other citation styles
25

Li, Zong-Han, und 李宗翰. „Use Smoothed Finite Element Method to study on the α Value of the Axial Force Bar“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/3wxb5k.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
105
Analysis software is widely used in the engineering field and can solve a variety of problems for different environments, material properties and boundary conditions. However, the displacement computed by the standard finite element method deviates from the real solution, so a way to obtain the true solution more accurately is sought. The S-FEM is discussed and compared with the finite element method; the solution between them is balanced by a scaling factor α, and the choice of this α value is investigated. In this study, Fortran programs were used to simulate a linear elastic, axially loaded bar under a concentrated force, a uniform load and an exponential load, and the computed results were compared with traditional hand calculations. The results show that for the relatively simple concentrated-force and uniform-load cases the finite element method reproduces the real solution and the α value does not require adjustment, whereas for the more complex exponential load the finite element model is too rigid. The use of α-FEM depends on the problem, and the value of α depends on which quantities, such as stress or displacement, are of interest; different considerations lead to different choices of α.
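One simple way to formalize the balancing role of α described above is a convex blend of the two stiffness operators (a generic sketch of the idea, not necessarily the exact combination used in the thesis):

\tilde{K}(\alpha) = \alpha\, K^{\mathrm{FEM}} + (1-\alpha)\, K^{\mathrm{NS\text{-}FEM}}, \qquad 0 \le \alpha \le 1,

so that α = 1 recovers the stiffer standard FEM model, α = 0 the softer node-based smoothed model, and an intermediate α can be tuned so that a chosen output quantity, such as a displacement or a stress, matches a reference solution.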
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Haikal, Ghadir. „A stabilized Finite Element formulation of non-smooth contact /“. 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3362916.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3667. Adviser: Keith D. Hjelmstad. Includes bibliographical references (leaves 113-120). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Khatri, Vikash. „A Smooth Finite Element Method Via Triangular B-Splines“. Thesis, 2009. http://etd.iisc.ernet.in/handle/2005/2155.

Der volle Inhalt der Quelle
Annotation:
A triangular B-spline (DMS-spline)-based finite element method (TBS-FEM) is proposed along with possible enrichment through discontinuous Galerkin, continuous-discontinuous Galerkin finite element (CDGFE) and stabilization techniques. The developed schemes are also numerically explored, to a limited extent, for weak discretizations of a few second-order partial differential equations (PDEs) of interest in solid mechanics. The presently employed functional approximation has both affine invariance and convex hull properties. In contrast to the Lagrangian basis functions used with the conventional finite element method, basis functions derived through n-th order triangular B-splines possess C^{n-1} (n ≥ 1) global continuity. This is usually not possible with standard finite element formulations. Thus, though constructed within a mesh-based framework, the basis functions are globally smooth (even across the element boundaries). Since these globally smooth basis functions are used in modeling the response, one can expect a reduction in the number of elements in the discretization, which in turn reduces the number of degrees of freedom and consequently the computational cost. In the present work, which aims at laying out the basic foundation of the method, we consider only linear triangular B-splines. The resulting formulation thus provides only continuous approximation functions for the targeted variables. This leads to a straightforward implementation without a digression into the issue of knot selection, whose resolution is required for implementing the method with higher order triangular B-splines. Since we consider only n = 1, the formulation also makes use of the discontinuous Galerkin method, which weakly enforces the continuity of first derivatives through stabilizing terms on the interior boundaries. Stabilization enhances the numerical stability without sacrificing accuracy by suitably changing the weak formulation. Weighted residual terms, which involve a mesh-dependent stabilization parameter, are added to the variational equation. The advantage of the resulting scheme over a more traditional mixed approach and least-squares finite elements is that the introduction of additional unknowns and related difficulties can be avoided. For assessing the numerical performance of the method, we consider Navier's equations of elasticity, especially the case of nearly incompressible elasticity (i.e. as the incompressibility limit, at which volumetric locking occurs, is approached). Limited comparisons with results via finite element techniques based on constant-strain triangles help bring out the advantages of the proposed scheme to an extent.
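As an illustration of the kind of stabilizing, mesh-dependent terms mentioned above, a generic interior-penalty contribution for weakly enforcing continuity of first derivatives adds, on every interior edge e of the triangulation (a representative form, not the exact expression derived in the thesis),

\sum_{e \in \mathcal{E}_{\mathrm{int}}} \frac{\tau}{h_e} \int_e [\![\nabla u \cdot n]\!]\,[\![\nabla v \cdot n]\!]\, \mathrm{d}s

to the bilinear form, where [\![\cdot]\!] denotes the jump across the edge, h_e the edge length and \tau the stabilization parameter; since a smooth exact solution has no such jumps, the added terms do not spoil consistency.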
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Narayan, Shashi. „Smooth Finite Element Methods with Polynomial Reproducing Shape Functions“. Thesis, 2013. http://etd.iisc.ernet.in/2005/3332.

Der volle Inhalt der Quelle
Annotation:
A couple of discretization schemes, based on an FE-like tessellation of the domain and polynomial reproducing, globally smooth shape functions, are considered and numerically explored to a limited extent. The first among these is an existing scheme, the smooth DMS-FEM, that employs Delaunay triangulation or tetrahedralization (as appropriate) to discretize the domain geometry and employs triangular (tetrahedral) B-splines as kernel functions en route to the construction of polynomial reproducing functional approximations. In order to verify the numerical accuracy of the smooth DMS-FEM vis-à-vis the conventional FEM, a Mindlin-Reissner plate bending problem is numerically solved. Thanks to the higher order continuity in the functional approximant and the consequent removal of the jump terms in the weak form across inter-triangular boundaries, the numerical accuracy via the DMS-FEM approximation is observed to be higher than that corresponding to the conventional FEM. This advantage notwithstanding, evaluations of DMS-FEM based shape functions encounter singularity issues on the triangle vertices as well as over the element edges. This shortcoming is presently overcome through a new proposal that replaces the triangular B-splines by simplex splines, constructed over polygonal domains, as the kernel functions in the polynomial reproduction scheme. Following a detailed presentation of the issues related to its computational implementation, the new method is numerically explored, with the results attesting to a higher attainable numerical accuracy in comparison with the DMS-FEM.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Zaeem, Mohammed Rizwan H. „Nonlinear Finite Element Analysis of the Black River Bridge - A Serviceability Study“. Thesis, 2013. http://hdl.handle.net/1807/43367.

Der volle Inhalt der Quelle
Annotation:
An attempt was made to predict the service life of the Black River Bridge using non-linear finite element analysis (NLFEA). Numerical modeling was performed using NLFEA software developed by Prof. Evan Bentz. A large number of analytical studies were conducted to assess the strength and behaviour of the bridge under normal truck loading and at failure loads. It was determined that the bridge is shear critical. Locations of trucks that would cause the maximum deflection and the largest crack widths were identified. It is believed that these findings will have a significant impact on physical measurements that can be incorporated into future bridges, helping researchers determine the locations in the bridge that are ideal for instrumentation. Axial compression present in the bridge can significantly affect deflection and crack widths. Incorporating thermal and shrinkage effects into the NLFEA is recommended as a topic for further research. An appropriate estimate of the thermal and shrinkage strains will aid in better prediction of the axial stresses.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Bhowmik, Krishnendu. „Experimental And Finite Element Study Of Elastic-Plastic Indentation Of Rough Surfaces“. Thesis, 2007. http://hdl.handle.net/2005/540.

Der volle Inhalt der Quelle
Annotation:
Most surfaces have roughness down to atomic scales. When two surfaces come into contact, the nature of this roughness determines properties such as friction and wear. Analysis of rough-surface contact is always complicated by the interaction between material size effects and the micro-geometry. Contact mechanics can be simplified by decoupling these two effects through magnification of the scale of the roughness profile. Also, tailoring the roughness at different scales could offer a way to control friction and wear through surface micro-structure modifications. In this work, the mechanics of contact between a rigid, hard sphere and a surface with a well-defined roughness profile is studied through experiments and finite element simulation. The well-defined roughness profile is made up of a regular array of pyramidal asperities. The choice of this geometry was mainly dictated by the fabrication processes. The specimens were made out of an aluminium alloy (6351-T6) so that the results could be applied directly to controlling the tribological properties during aluminium forming. Experiments on the pyramidal aluminium surface are carried out in a 250 kN Universal Testing Machine (INSTRON 8502 system) using a depth-sensing indentation setup. A strain-gauge-based load cell is used to measure the indentation force and an LVDT (Linear Variable Differential Transformer) is used to measure the penetration depth. The load and the displacement were continuously recorded using a data acquisition system. A 3-D finite element framework for studying the elastic-plastic contact of rough surfaces has been developed with the commercial package ABAQUS. Systematic studies of indentation were carried out in order to validate the simulations against the experimental observations. The simulation of indentation of a flat surface is carried out using the implicit/standard (backward Euler) procedure, whereas the explicit finite element method (forward Euler) is used for simulating rough-surface indentation. It is found that the load versus displacement curves obtained from the experiments match well with the finite element results (except for the error involved in determining the initial contact point). At indentation depths greater than a value determined mainly by the asperity height, the load-displacement characteristics are similar to those pertaining to indentation of a flat, smooth surface. From the finite element results, it is found that at this point the elastic-plastic boundary is more or less hemispherical, as in the case of smooth-surface indentation. For certain geometries, it is found that there could exist an elastic island in the sub-surface surrounded by plastically deformed material. This could have interesting applications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Rim, Nae Gyune. „Micropatterned cell sheets as structural building blocks for biomimetic vascular patch application“. Thesis, 2018. https://hdl.handle.net/2144/30707.

Der volle Inhalt der Quelle
Annotation:
To successfully develop a functional tissue-engineered vascular patch, recapitulating the hierarchical structure of the vessel is critical to mimicking its mechanical properties. Here, we use a cell sheet engineering strategy with a micropatterning technique to control the structural organization of bovine aortic vascular smooth muscle cell (VSMC) sheets. Actin filament staining and image analysis showed clear cellular alignment of VSMC sheets cultured on patterned substrates. Viability of harvested VSMC sheets was confirmed by a Live/Dead® cell viability assay after 24 and 48 hours of transfer. VSMC sheets stacked to generate bilayer VSMC patches exhibited strong inter-layer bonding, as shown by a lap shear test. Uniaxial tensile testing of monolayer VSMC sheets and bilayer VSMC patches displayed a nonlinear, anisotropic stress-stretch response similar to the biomechanical characteristics of a native arterial wall. Collagen content and structure were characterized to determine the effects of patterning and stacking on the extracellular matrix of VSMC sheets. Using finite-element modeling to simulate uniaxial tensile testing of bilayer VSMC patches, we found that the stress-stretch response of bilayer patterned VSMC patches under uniaxial tension could be predicted using an anisotropic hyperelastic constitutive model. Thus, our cell sheet harvesting system combined with biomechanical modeling is a promising approach to generating building blocks for tissue-engineered vascular patches with structure and mechanical behavior mimicking native tissue.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Harvey, Brian Christopher. „Mechanical determinants of intact airway responsiveness“. Thesis, 2015. https://hdl.handle.net/2144/13679.

Der volle Inhalt der Quelle
Annotation:
Airway hyperresponsiveness (AHR) is a hallmark of asthma in which constriction of airway smooth muscle (ASM) causes excessive airway narrowing. Asthmatics, unlike healthy subjects, cannot prevent or reverse this narrowing by stretching their airways with a deep inspiration (DI). Since stretching of isolated ASM causes dramatic reductions in force generation, and asthmatics tend to have stiffer airways, researchers hypothesize that reduced ASM stretching during breathing and DIs results in hyperreactive airways. Counterintuitively, however, excised measurements on intact airways show that narrowing is minimally reversed by pressure oscillations simulating breathing and DIs. We hypothesized that AHR does not result from a reduced capacity to stretch the airways; furthermore, each constituent of the airway wall experiences a different strain magnitude during breathing and DIs. To test this, we used an intact airway system which controls transmural pressure (Ptm) to simulate breathing while measuring luminal diameter in response to ASM agonists. An ultrasound system and an automated segmentation algorithm were implemented to quantify and compare the ability of Ptm fluctuations to reverse and prevent narrowing in larger (diameter = 5.72±0.52 mm) relative to smaller airways (diameter = 2.92±0.29 mm). We found that the ability of Ptm oscillations to reverse airway narrowing was proportional to the strain imposed on the airway wall. Further, tidal-like breathing Ptm oscillations (5-15 cmH2O) applied after constriction imposed 196% more strain in smaller compared to larger airways (14.6% vs. 5.58%), resulting in 76% greater reversal of narrowing (41.2% vs. 23.4%). However, Ptm oscillations applied before and during constriction resulted in the same steady-state diameter as when Ptm oscillations were applied only after constriction. To better understand these results, we optimized an ultrasound elastography technique utilizing finite element-based image registration to estimate spatial distributions of displacements, strains, and material properties throughout an airway wall during breathing and bronchoconstriction. This required us to formulate and solve an inverse elasticity problem to reconstruct the distribution of nonlinear material properties. Strains and material properties were radially and longitudinally heterogeneous, and their patterns and magnitudes changed significantly after induced narrowing. Taken together, these data show that AHR likely does not emerge from reduced straining of the airways prior to challenge, but that remodeling which stiffens airway walls might serve to sustain constriction during an asthmatic-like attack.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Bartoš, Ondřej. „Discontinuous Galerkin method for the solution of boundary-value problems in non-smooth domains“. Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367894.

Der volle Inhalt der Quelle
Annotation:
This thesis is concerned with the analysis of the finite element method and the discontinuous Galerkin method for the numerical solution of an elliptic boundary value problem with a nonlinear Newton boundary condition in a two-dimensional polygonal domain. The weak solution loses regularity in a neighbourhood of boundary singularities, which may be at corners or at roots of the weak solution on edges. The main attention is paid to the study of error estimates. It turns out that the order of convergence is not dampened by the nonlinearity, if the weak solution is nonzero on a large part of the boundary. If the weak solution is zero on the whole boundary, the nonlinearity only slows down the convergence of the function values but not the convergence of the gradient. The same analysis is carried out for approximate solutions obtained with numerical integration. The theoretical results are verified by numerical experiments.
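For orientation, the nonlinear Newton (Robin-type) boundary condition studied in this setting is typically of the form (a representative form quoted here as an assumption, not necessarily the exact statement of the thesis)

\frac{\partial u}{\partial n} + \kappa\,|u|^{\alpha} u = \varphi \quad \text{on } \partial\Omega, \qquad \kappa > 0,\ \alpha \ge 0,

which reduces to the linear Newton condition for α = 0; the degeneracy of the nonlinear term where u vanishes is what makes the roots of the weak solution on the boundary relevant to the error analysis.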
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Devaraj, G. „Schemes for Smooth Discretization And Inverse Problems - Case Study on Recovery of Tsunami Source Parameters“. Thesis, 2016. http://hdl.handle.net/2005/2719.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with smooth discretization schemes and inverse problems, the former used in efficient yet accurate numerical solutions to the forward models required, in turn, to solve inverse problems. The aims of the thesis include (i) the development of a stabilization technique for a class of forward problems plagued by unphysical oscillations in the response due to the presence of jumps/shocks/high gradients, (ii) the development of a smooth hybrid discretization scheme that combines certain useful features of Finite Element (FE) and Mesh-Free (MF) methods and alleviates certain destabilizing factors encountered in the construction of shape functions using the polynomial reproduction method and (iii) a first-of-its-kind attempt at the joint inversion of both static and dynamic source parameters of the 2004 Sumatra-Andaman earthquake using tsunami sea level anomaly data. Following the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as having two parts, viz. the first part constituting the development and use of smooth discretization schemes in the possible presence of destabilizing factors (Chapters 2 and 3) and the second part involving the solution to the inverse problem of tsunami source recovery (Chapter 4). In the context of stability requirements in numerical solutions of practical forward problems, Chapter 2 develops a new stabilization scheme. It is based on a stochastic representation of the discretized field variables, with a view to reducing or even eliminating unphysical oscillations in the MF numerical simulations of systems developing shocks or exhibiting localized bands of extreme plastic deformation in the response. The origin of the stabilization scheme may be traced to nonlinear stochastic filtering and, consistent with a class of such filters, gain-based additive correction terms are applied to the simulated solution of the system, herein achieved through the Element-Free Galerkin (EFG) method, in order to impose a set of constraints that help arrest the spurious oscillations. The method is numerically illustrated through its application to a gradient plasticity model whose response is often characterized by a developing shear band as the external load is gradually increased. The potential of the method in stabilized yet accurate numerical simulations of such systems involving extreme gradient variations in the response is thus brought forth. Chapter 3 develops the MF-based discretization motif by balancing it with the widespread adoption of the FE method. It sets forth the hybrid discretization scheme utilizing bivariate simplex splines as kernels in a polynomial reproducing approach adopted over a conventional FE-like domain discretization based on Delaunay triangulation. Careful construction of the simplex spline knotset ensures the success of the polynomial reproduction procedure at all points in the domain of interest, a significant advancement over its precursor, the DMS-FEM. The shape functions in the proposed method inherit the global continuity (C^{p-1}) and local supports of the simplex splines of degree p.
In the proposed scheme, the triangles comprising the domain discretization also serve as background cells for numerical integration which here are near-aligned to the supports of the shape functions (and their intersections), thus considerably ameliorating an oft-cited source of inaccuracy in the numerical integration of MF-based weak forms. Numerical experiments establish that the proposed method can work with lower order quadrature rules for accurate evaluation of integrals in the Galerkin weak form, a feature desiderated in solving nonlinear inverse problems that demand cost-effective solvers for the forward models. Numerical demonstrations of optimal convergence rates for a few test cases are given and the hybrid method is also implemented to compute crack-tip fields in a gradient-enhanced elasticity model. Chapter 4 attempts at the joint inversion of earthquake source parameters for the 2004 Sumatra-Andaman event from the tsunami sea level anomaly signals available from satellite altimetry. Usual inversion for earthquake source parameters incorporates subjective elements, e.g. a priori constraints, posing and parameterization, trial-and-error waveform fitting etc. Noisy and possibly insufficient data leads to stability and non-uniqueness issues in common deterministic inversions. A rational accounting of both issues favours a stochastic framework which is employed here, leading naturally to a quantification of the commonly overlooked aspects of uncertainty in the solution. Confluence of some features endows the satellite altimetry for the 2004 Sumatra-Andaman tsunami event with unprecedented value for the inversion of source parameters for the entire rupture duration. A nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints is undertaken. Large and hitherto unreported variances in the parameters despite a persistently good waveform fit suggest large propagation of uncertainties and hence the pressing need for better physical models to account for the defect dynamics and massive sediment piles. Chapter 5 concludes the work with pertinent comments on the results obtained and suggestions for future exploration of some of the schemes developed here.
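The polynomial reproduction property invoked in this and the preceding abstract can be stated compactly: with globally smooth shape functions \varphi_i associated with nodes x_i, reproduction of polynomials up to degree p means

\sum_i \varphi_i(x)\, q(x_i) = q(x) \quad \text{for every polynomial } q \text{ of degree at most } p \text{ and every } x \text{ in the domain},

which in particular implies partition of unity (q ≡ 1) and linear completeness (q(x) = x). This is the standard definition, independent of the particular simplex-spline kernels used here.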
APA, Harvard, Vancouver, ISO und andere Zitierweisen
