Academic literature on the topic 'Multi-Scale numerical methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Scale numerical methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-Scale numerical methods"

1

Liu, Wing Kam, Su Hao, Ted Belytschko, Shaofan Li, and Chin Tang Chang. "Multi-scale methods." International Journal for Numerical Methods in Engineering 47, no. 7 (March 10, 2000): 1343–61. http://dx.doi.org/10.1002/(sici)1097-0207(20000310)47:7<1343::aid-nme828>3.0.co;2-w.

2

Podsiadlo, P., and G. W. Stachowiak. "Multi-scale representation of tribological surfaces." Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology 216, no. 6 (June 1, 2002): 463–79. http://dx.doi.org/10.1243/135065002762355361.

Abstract:
Many numerical surface topography analysis methods exist today. However, even for the moderately complicated topography of a tribological surface these methods can provide only limited information. The reason is that tribological surfaces often exhibit a non-stationary and multi-scale nature while the numerical methods currently used work well with surface data exhibiting a stationary random process and provide surface descriptors closely related to a scale at which surface data were acquired. The suitability of different methods, including Fourier transform, windowed Fourier transform, Cohen's class distributions (especially the Wigner-Ville distribution), wavelet transform, fractal methods and a hybrid fractal-wavelet method, for the analysis of tribological surface topographies is investigated in this paper. The method best suited to this purpose has been selected.
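For readers who want to experiment with the multi-scale ideas surveyed in this abstract, the sketch below decomposes a surface profile scale by scale with a plain Haar wavelet transform and reports a simple per-scale roughness measure (the RMS of the detail coefficients). It is a generic Python/NumPy illustration; the synthetic profile, the number of levels and the roughness measure are assumptions made for the example and do not reproduce the hybrid fractal-wavelet method assessed in the paper.

```python
import numpy as np

def haar_multiscale(profile, levels):
    """Haar wavelet decomposition of a 1-D surface profile.

    Returns the RMS of the detail coefficients at each level, a crude
    scale-by-scale roughness indicator (an assumption for illustration).
    """
    approx = np.asarray(profile, dtype=float)
    rms_per_scale = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2.0)   # fluctuations at this scale
        approx = (even + odd) / np.sqrt(2.0)   # smoothed profile, next scale
        rms_per_scale.append(np.sqrt(np.mean(detail**2)))
    return rms_per_scale, approx

if __name__ == "__main__":
    # Synthetic non-stationary profile: long waviness plus short-scale roughness.
    x = np.linspace(0.0, 1.0, 1024)
    rng = np.random.default_rng(0)
    profile = 0.5 * np.sin(4 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
    rms, coarse = haar_multiscale(profile, levels=5)
    for level, value in enumerate(rms, start=1):
        print(f"level {level}: detail RMS = {value:.4f}")
```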
3

Liu, Miao, Yan Cao, Zhijie Wang, and Chaorui Nie. "Multi-scale Numerical Simulation of Powder Metallurgy Densification Process." Journal of Physics: Conference Series 2501, no. 1 (May 1, 2023): 012022. http://dx.doi.org/10.1088/1742-6596/2501/1/012022.

Abstract:
In order to reduce defects such as pores, gold phases and cracks in powder metallurgy, scholars have studied the densification process of powder metallurgy. Based on the study of the powder metallurgy deformation mechanism, this paper classifies and summarizes numerical simulation theories and methods. At present, numerical simulation of the densification process of powder metallurgy is carried out mainly at the macroscopic, mesoscopic and microscopic scales. The macro scale applies the finite element method based on continuum theory, the meso scale applies the discrete element method based on discontinuous-media theory, and cellular automata simulation is the main numerical simulation method at the micro scale. Different modeling theories and methods have their own ranges of applicability and limitations. By combining simulation theories and methods across scales, the densification of the material can be described more accurately.
4

Chen, Li, Ya-Ling He, Qinjun Kang, and Wen-Quan Tao. "Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes." Journal of Computational Physics 255 (December 2013): 83–105. http://dx.doi.org/10.1016/j.jcp.2013.07.034.

5

Engquist, B., and P. E. Souganidis. "Asymptotic and numerical homogenization." Acta Numerica 17 (April 25, 2008): 147–90. http://dx.doi.org/10.1017/s0962492906360011.

Abstract:
Homogenization is an important mathematical framework for developing effective models of differential equations with oscillations. We include in the presentation techniques for deriving effective equations, a brief discussion on analysis of related limit processes and numerical methods that are based on homogenization principles. We concentrate on first- and second-order partial differential equations and present results concerning both periodic and random media for linear as well as nonlinear problems. In the numerical sections, we comment on computations of multi-scale problems in general and then focus on projection-based numerical homogenization and the heterogeneous multi-scale method.
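As a concrete illustration of the principles reviewed in this article, the sketch below compares the fine-scale finite-difference solution of the one-dimensional model problem −(a(x/ε) u′)′ = 1 on (0, 1), u(0) = u(1) = 0, with the solution of the homogenized problem obtained from the classical effective coefficient, which in one dimension is the harmonic mean of the periodic coefficient. It is a textbook Python/NumPy example; the coefficient, right-hand side and grid sizes are arbitrary choices and the code is not taken from the cited review.

```python
import numpy as np

def solve_dirichlet(a_mid, f_interior, h):
    """Solve -(a u')' = f on (0, 1) with u(0) = u(1) = 0 (standard 3-point scheme).

    a_mid holds the coefficient at the midpoints of the n grid intervals,
    f_interior the right-hand side at the n-1 interior nodes.
    """
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f_interior)

a = lambda y: 1.0 / (2.0 + np.cos(2.0 * np.pi * y))   # 1-periodic oscillatory coefficient
eps = 1.0 / 32.0                                      # period of the oscillations

# Classical 1-D result: the effective coefficient is the harmonic mean over one period.
y = np.linspace(0.0, 1.0, 4096, endpoint=False)
a_eff = 1.0 / np.mean(1.0 / a(y))

n = 1024                                              # fine grid resolving the eps-scale
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)
midpoints = 0.5 * (nodes[:-1] + nodes[1:])
f = np.ones(n - 1)

u_fine = solve_dirichlet(a(midpoints / eps), f, h)    # resolves the oscillations
u_hom = solve_dirichlet(np.full(n, a_eff), f, h)      # homogenized problem

print("effective coefficient (harmonic mean):", a_eff)
print("max |u_fine - u_hom| =", np.abs(u_fine - u_hom).max())
```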
6

Krause, Rolf, and Christina Mohr. "Level set based multi-scale methods for large deformation contact problems." Applied Numerical Mathematics 61, no. 4 (April 2011): 428–42. http://dx.doi.org/10.1016/j.apnum.2010.11.007.

7

Gai, Wen Hai, R. Guo, and Jun Guo. "Molecular Dynamics Approach and its Application in the Analysis of Multi-Scale." Applied Mechanics and Materials 444-445 (October 2013): 1364–69. http://dx.doi.org/10.4028/www.scientific.net/amm.444-445.1364.

Abstract:
Numerical simulation of the behavior of materials can be used as a versatile, efficient and low-cost tool for developing an understanding of material behavior. Numerical simulation methods include quantum mechanics, molecular dynamics, the Voronoi cell finite element method and the finite element method, among others. These methods by themselves are not sufficient for many fundamental problems in computational mechanics, and their deficiencies motivate multi-scale methods. A multi-scale method that models micro-scale systems by coupling continuum mechanics and molecular dynamics is introduced. This paper describes the basic multi-scale methods and reviews the general simulation process of molecular dynamics.
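To make the molecular dynamics building block of such multi-scale couplings concrete, the sketch below integrates a small one-dimensional Lennard-Jones chain with the velocity Verlet algorithm and monitors the total energy. It is a minimal, generic MD loop in Python/NumPy (reduced units, nearest-neighbour interactions only), not the coupled continuum/MD scheme discussed in the cited paper.

```python
import numpy as np

def lj_forces(x, epsilon=1.0, sigma=1.0):
    """Nearest-neighbour Lennard-Jones forces for a 1-D chain (reduced units)."""
    r = np.diff(x)                                       # bond lengths
    sr6 = (sigma / r) ** 6
    f_bond = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r   # force on the right atom of each bond
    forces = np.zeros_like(x)
    forces[1:] += f_bond
    forces[:-1] -= f_bond
    potential = 4.0 * epsilon * np.sum(sr6**2 - sr6)
    return forces, potential

# Chain of 16 atoms near the LJ equilibrium spacing 2**(1/6), slightly perturbed.
rng = np.random.default_rng(1)
x = 2.0 ** (1.0 / 6.0) * np.arange(16) + 0.01 * rng.standard_normal(16)
v = np.zeros_like(x)
m, dt = 1.0, 1.0e-3

f, pot = lj_forces(x)
for step in range(2000):
    v += 0.5 * dt * f / m            # velocity Verlet: first half kick
    x += dt * v                      # drift
    f, pot = lj_forces(x)            # forces at the new positions
    v += 0.5 * dt * f / m            # second half kick
    if step % 500 == 0:
        kinetic = 0.5 * m * np.sum(v**2)
        print(f"step {step:4d}  total energy = {kinetic + pot:.6f}")
```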
8

Tremmel, Stephan, Max Marian, Benedict Rothammer, Tim Weikert, and Sandro Wartzack. "Designing Amorphous Carbon Coatings Using Numerical and Experimental Methods within a Multi-Scale Approach." Defect and Diffusion Forum 404 (October 2020): 77–84. http://dx.doi.org/10.4028/www.scientific.net/ddf.404.77.

Abstract:
Amorphous carbon coatings have the potential to effectively reduce friction and wear in tribotechnical systems. The appropriate application of amorphous carbon layers requires both a very good understanding of the tribological system and knowledge of the relationships between the fabrication of the coatings and their properties. In technical practice, however, the coatings’ development and their selection on the one hand and the design of the tribological system and its environment on the other hand are usually very strongly separated. The present work therefore aims to motivate the integrated development of tribotechnical systems with early consideration of the potential of amorphous carbon coatings. An efficient integrated development process is presented, which makes it possible to determine the boundary conditions and the load collective of the tribological system based upon an overall system and to derive the requirements for a tailored coating. In line with the nature of tribology, this approach must cover several scales. In this respect, the development process follows a V-model. The left branch of the V-model is mainly based upon a simulation chain including multibody and contact simulations. The right branch defines an experimental test chain comprising coating characterization to refine the contact simulation iteratively and tribological testing on different levels to validate the function fulfillment. Within this contribution, the outlined approach is illustrated by two use cases, namely the cam/tappet pairing and the total knee replacement.
9

Schmidt, Alexander A., Yuri V. Trushin, K. L. Safonov, V. S. Kharlamov, Dmitri V. Kulikov, Oliver Ambacher, and Jörg Pezoldt. "Multi-Scale Simulation of MBE-Grown SiC/Si Nanostructures." Materials Science Forum 527-529 (October 2006): 315–18. http://dx.doi.org/10.4028/www.scientific.net/msf.527-529.315.

Abstract:
The main obstacle to using numerical simulation for the prediction of epitaxial growth is the variety of physical processes, with considerable differences in time and spatial scales, taking place during epitaxy: deposition of atoms, surface and bulk diffusion, nucleation of two-dimensional and three-dimensional clusters, etc. Thus, it is not possible to describe all of them in the framework of a single physical model. In this work, a multi-scale simulation method was developed for molecular beam epitaxy (MBE) of silicon carbide nanostructures on silicon. Three numerical methods were used in combination: Molecular Dynamics (MD), kinetic Monte Carlo (KMC), and Rate Equations (RE). MD was used to estimate kinetic parameters of atoms at the surface, which are input parameters for the other simulation methods. KMC allowed atomic-scale simulation of cluster formation, which is the initial stage of SiC growth, while the RE method made it possible to study the growth process on a longer time scale. As a result, a full-scale description of the surface evolution during SiC formation on Si substrates was developed.
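The kinetic Monte Carlo stage of such a multi-scale chain can be illustrated with a very small example. The sketch below implements a Gillespie-type KMC loop for adatoms hopping and desorbing on a one-dimensional periodic lattice, with Arrhenius rates whose barriers stand in for the MD-derived kinetic parameters mentioned in the abstract. All numerical values (barriers, attempt frequency, temperature, lattice size) are illustrative assumptions in Python/NumPy, not data from the cited work, and hops onto occupied sites are simply rejected.

```python
import numpy as np

KB = 8.617e-5              # Boltzmann constant, eV/K
T = 800.0                  # temperature, K (assumed)
NU = 1.0e13                # attempt frequency, 1/s (assumed)
E_HOP, E_DES = 0.75, 1.4   # energy barriers in eV (stand-ins for MD estimates)

rate_hop = NU * np.exp(-E_HOP / (KB * T))
rate_des = NU * np.exp(-E_DES / (KB * T))

rng = np.random.default_rng(2)
n_sites = 200
occupied = set(rng.choice(n_sites, size=20, replace=False))  # initial adatoms
t = 0.0

for _ in range(10000):
    if not occupied:
        break
    atoms = list(occupied)
    # Each adatom can hop left, hop right, or desorb (move = 0).
    events = [(a, move) for a in atoms for move in (-1, +1, 0)]
    rates = np.array([rate_des if move == 0 else rate_hop for _, move in events])
    total = rates.sum()
    t += rng.exponential(1.0 / total)                 # Gillespie time increment
    atom, move = events[rng.choice(len(events), p=rates / total)]
    if move == 0:
        occupied.remove(atom)                         # desorption
    else:
        target = (atom + move) % n_sites
        if target not in occupied:                    # hop only onto empty sites
            occupied.remove(atom)
            occupied.add(target)

print(f"time elapsed = {t:.3e} s, adatoms remaining = {len(occupied)}")
```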
10

Altmann, Robert, Patrick Henning, and Daniel Peterseim. "Numerical homogenization beyond scale separation." Acta Numerica 30 (May 2021): 1–86. http://dx.doi.org/10.1017/s0962492921000015.

Abstract:
Numerical homogenization is a methodology for the computational solution of multiscale partial differential equations. It aims at reducing complex large-scale problems to simplified numerical models valid on some target scale of interest, thereby accounting for the impact of features on smaller scales that are otherwise not resolved. While constructive approaches in the mathematical theory of homogenization are restricted to problems with a clear scale separation, modern numerical homogenization methods can accurately handle problems with a continuum of scales. This paper reviews such approaches embedded in a historical context and provides a unified variational framework for their design and numerical analysis. Apart from prototypical elliptic model problems, the class of partial differential equations covered here includes wave scattering in heterogeneous media and serves as a template for more general multi-physics problems.

Dissertations / Theses on the topic "Multi-Scale numerical methods"

1

Holst, Henrik. "Multi-scale methods for wave propagation in heterogeneous media." Licentiate thesis, Stockholm : Datavetenskap och kommunikation, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10511.

2

Corbin, Gregor [Verfasser], and Axel [Akademischer Betreuer] Klar. "Numerical methods for multi-scale cell migration models / Gregor Corbin ; Betreuer: Axel Klar." Kaiserslautern : Technische Universität Kaiserslautern, 2020. http://d-nb.info/1222974096/34.

3

Reboul, Louis. "Development and analysis of efficient multi-scale numerical methods, with applications to plasma discharge simulations relying on multi-fluid models." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX134.

Abstract:
Our main focus is the design and analysis of multi-scale numerical schemes for the simulation of multi-fluid models applied to low-temperature, low-pressure plasmas. Our typical configuration of interest includes the onset of instabilities and sheaths, i.e. micrometric charged boundary layers that form at the plasma chamber walls. Our prototypical plasma model is the isothermal Euler-Poisson system of equations, but we also consider simpler models, the hyperbolic heat equations and the isothermal Euler-friction equations, for the development and analysis of numerical methods. In a first axis, we develop and analyze a uniformly asymptotic-preserving, second-order, time-space coupled implicit-explicit method for the hyperbolic heat equations (linear case), and extend the method to the nonlinear isothermal Euler-friction equations with a non-uniform relaxation coefficient. We also provide theoretical results on flux limiters for asymptotic-preserving methods and a new well-balanced strategy. In a second axis, we propose several methods for the Euler-Poisson system of equations to improve the accuracy of simulations of configurations featuring sheaths. In a third axis, we use these methods to conduct a parametric study of a 2D (rectangular) isothermal non-magnetized plasma discharge with sheaths, at various collisional regimes and aspect ratios. We compare our results to PIC simulations and reference solutions, and show that simulating a fluid model with a tailored numerical method substantially reduces the simulation time and improves the accuracy of the obtained solution. A discussion of the extension of the multi-scale methods to the full non-isothermal Euler equations and to highly magnetized cases is provided in the perspectives of our work.
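To give a flavour of the asymptotic-preserving strategy studied in this thesis, the sketch below applies a first-order implicit-explicit (IMEX) discretization to the linear hyperbolic heat system ∂t u + ∂x v = 0, ε² ∂t v + ∂x u = −σ v on a periodic staggered grid. The stiff relaxation term is treated implicitly, so the time step is restricted only by the parabolic stability condition, uniformly in ε, and the update degenerates into an explicit discretization of the limiting heat equation ∂t u = ∂xx u / σ as ε → 0. This is a minimal first-order Python/NumPy sketch of the general idea, with arbitrary parameter choices; it is not the second-order time-space coupled scheme developed in the thesis.

```python
import numpy as np

def ap_imex_hyperbolic_heat(eps, sigma=1.0, n=200, t_end=0.05):
    """First-order IMEX scheme for  u_t + v_x = 0,  eps^2 v_t + u_x = -sigma v.

    u lives at cell centres, v at cell faces, on a periodic unit interval.
    The relaxation term is implicit, so the update remains well defined as
    eps -> 0, where it reduces to an explicit scheme for u_t = u_xx / sigma.
    """
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial bump, v starts at rest
    v = np.zeros(n)                          # v[i]: face between cells i and i+1
    dt = 0.25 * sigma * dx * dx              # parabolic CFL, independent of eps
    t = 0.0
    while t < t_end:
        du = (np.roll(u, -1) - u) / dx                       # u_x at the faces
        v = (eps**2 * v - dt * du) / (eps**2 + sigma * dt)   # implicit relaxation
        u = u - dt * (v - np.roll(v, 1)) / dx                # conservative update
        t += dt
    return x, u

for eps in (1.0, 1.0e-3):
    x, u = ap_imex_hyperbolic_heat(eps)
    print(f"eps = {eps:g}: mean of u = {u.mean():.6f}, max of u = {u.max():.4f}")
```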
4

Peters, Andreas [Verfasser], and Moctar Bettar Ould [Akademischer Betreuer] el. "Numerical Modelling and Prediction of Cavitation Erosion Using Euler-Euler and Multi-Scale Euler-Lagrange Methods / Andreas Peters ; Betreuer: Bettar Ould el Moctar." Duisburg, 2020. http://d-nb.info/1203066783/34.

5

Del Masto, Alessandra. "Transition d’échelle entre fibre végétale et composite UD : propagation de la variabilité et des non-linéarités." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD022/document.

Abstract:
Although plant-fiber reinforced composites (PFCs) represent an attractive solution for the design of lightweight, high-performance and low-environmental-cost structures, their development requires in-depth studies of the mechanisms underlying their nonlinear tensile behavior, as well as of the variability of their mechanical properties. Given their multi-scale nature, this thesis aims to contribute, using a numerical approach, to the study of the propagation of behavior across the scales of PFCs. Firstly, the study focuses on the fiber scale: a 3D model of the behavior of the fiber wall is implemented in a finite element (FE) calculation in order to establish the influence of fiber morphology on the tensile behavior. Once the non-negligible impact of the morphology has been determined, the links between morphology, material, ultrastructure and tensile behavior are studied via a sensitivity analysis in the case of flax and hemp. The second part of the work is dedicated to the composite ply scale. A new stochastic multi-scale approach is developed and implemented, based on the definition of an elementary volume (EV) with random microstructure to describe the behavior of the ply. The approach is then used to study the sensitivity of the EV behavior to nano-, micro- and mesoscopic parameters. The sensitivity analysis, conducted via a polynomial chaos expansion of the response, allows us to construct a metamodel of the tensile behavior of the ply.
6

Tchikaya, Euloge Budet. "Modélisation électromagnétique des Surfaces Sélectives en Fréquence finies uniformes et non-uniformes par la Technique de Changement d'Echelle (SCT)." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0100/document.

Abstract:
Finite-size planar structures are increasingly used in satellite and radar applications. Two major types of these structures are widely used in RF design, namely Frequency Selective Surfaces (FSS) and reflectarrays. FSSs are a key element in the design of multifrequency systems. They are used as frequency filters and find applications such as radomes, reflectors for Cassegrain antennas, etc. The performance of FSSs is generally evaluated by assuming an infinite, periodic FSS and using Floquet modes; the computation time is then reduced almost to that of the elementary cell. Several methods have been developed to take into account the finite dimensions of arrays. For example, the Galerkin method uses a rigorous element-by-element approach. With this method the exact interactions between the elements are taken into account, but this technique works only for small FSSs, typically 3x3 elements. For larger surfaces this method is no longer suitable: the computation time and the memory requirements become too large. Another approach is therefore used, based on plane-wave spectral decomposition. It allows the finite problem to be considered as a periodic infinite one that is locally illuminated. With this approach large FSSs can indeed be simulated, but the exact interactions between the elements are not taken into account, nor are the edge effects. The simulation of FSSs by conventional numerical methods based on spatial meshing (finite element method, finite differences, method of moments) or spectral methods (modal methods) often leads in practice to poorly conditioned matrices, numerical convergence problems and/or excessive computation times. To avoid these problems, a technique called the Scale Changing Technique (SCT) is proposed. The SCT is based on the partition of the discontinuity planes into multiple planar sub-domains at various scale levels. In each sub-domain, higher-order modes are used for the accurate representation of the local variations of the electromagnetic field, while low-order modes are used for coupling the various scale levels. The electromagnetic coupling between scales is modelled by a Scale Changing Network (SCN). As the calculations of the SCNs are mutually independent, the execution time can be reduced significantly by parallelizing the computation. With the SCT, large finite FSSs can be simulated while taking into account the exact interactions between elements and addressing the problem of excessive computation time and memory.
7

Garcia, Trillos Camilo Andrés. "Méthodes numériques probabilistes : problèmes multi-échelles et problèmes de champs moyen." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00944655.

Abstract:
This thesis deals with the numerical solution of two types of stochastic problems. First, we are interested in strongly oscillating SDEs, that is, systems composed of ergodic variables evolving rapidly with respect to the others. We propose an algorithm based on homogenization results. It is defined by an Euler scheme applied to the slow variables, coupled with a decreasing-step estimator to approximate the ergodic limit of the fast variables. We prove the strong convergence of the algorithm and show that its normalized error satisfies a generalized central-limit-theorem-type result. We also propose an extrapolated version of the algorithm with better asymptotic complexity that satisfies the same properties as the original version. We then study the solution of McKean-Vlasov forward-backward SDEs (MKV-FBSDEs) associated with the solution of certain control problems in an environment formed by a large number of particles with mean-field interactions. First, we present a new algorithm, based on the cubature method on Wiener space, to approximate weakly the solution of a McKean-Vlasov SDE. It is deterministic and can be parametrized to attain any desired order of convergence. Then, using this new algorithm, we construct two schemes to solve decoupled MKV-FBSDEs and show that these schemes converge with orders one and two. Finally, we consider the problem of reducing the complexity of the presented method while preserving the stated rate of convergence.
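The first algorithm described above can be illustrated on a toy fast-slow system. In the sketch below, the slow variable is advanced with an Euler scheme while, at each macro step, the averaged drift is estimated by running an inner Euler scheme on the fast ergodic variable and averaging along its trajectory. The model (an Ornstein-Uhlenbeck fast variable whose invariant law is centred at the slow variable) and all step-size choices are illustrative assumptions in Python/NumPy; in particular, the fixed inner step stands in for the decreasing-step estimator analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def f_slow(x, y):
    """Drift of the slow variable; averages to -x**3 under the fast invariant law."""
    return -x**3 + y - x      # E[y] = x under the invariant law below, so E[f] = -x**3

eps = 1.0e-3          # time-scale separation parameter (assumed)
dt_slow = 0.01        # macro Euler step
n_macro = 500
m_inner = 400         # inner steps used to estimate the ergodic average
dt_fast = 0.1 * eps   # inner step resolving dY = -(Y - X)/eps dt + sqrt(2/eps) dW

x, y = 1.5, 0.0
for n in range(n_macro):
    # Inner loop: Euler scheme on the fast variable, time-averaging the slow drift.
    drift_estimate = 0.0
    for _ in range(m_inner):
        y += -(y - x) / eps * dt_fast + np.sqrt(2.0 * dt_fast / eps) * rng.standard_normal()
        drift_estimate += f_slow(x, y)
    drift_estimate /= m_inner
    # Macro step: Euler update of the slow variable with the averaged drift.
    x += dt_slow * drift_estimate
    if n % 100 == 0:
        print(f"macro step {n:3d}: x = {x:+.4f}")

print("final slow state:", x, "(averaged dynamics dx/dt = -x**3 decays towards 0)")
```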
8

Cordesse, Pierre. "Contribution to the study of combustion instabilities in cryotechnic rocket engines : coupling diffuse interface models with kinetic-based moment methods for primary atomization simulations." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC016.

Abstract:
Gatekeepers to open space, launchers are subject to intense and competitive enhancement through experimental and numerical test campaigns. Predictive numerical simulations have become mandatory to increase our understanding of the physics. Adjustable, they support early-stage design and optimization, in particular of the combustion chamber, to guarantee safety and maximize efficiency. One of the major physical phenomena involved in the combustion of the fuel and oxidizer is jet atomization, which drives both the droplet distributions and the potential high-frequency instabilities in subcritical conditions. It encompasses a large spectrum of two-phase flow topologies, from separated phases to a disperse phase, with a mixed region where the small-scale physics and the topology of the flow are very complex. Reduced-order models are good candidates to perform predictive but low-CPU-cost simulations on industrial configurations, but so far they have only been able to capture large-scale dynamics and have to be coupled to disperse-phase models through adjustable and weakly reliable parameters in order to predict spray formation. Improving the hierarchy of reduced-order models in order to better describe both the mixed region and the disperse region requires a series of building blocks at the heart of the present work, and gives rise to complex problems in the mathematical analysis and physical modelling of these systems of PDEs as well as in their numerical discretization and implementation in CFD codes for industrial use. Thanks to the extension of the theory of supplementary conservative equations to systems of non-conservation laws and to the formalism of multi-fluid thermodynamics accounting for non-ideal effects, we give some new leads to define a strictly convex mixture entropy consistent with the system of equations and the pressure laws, which would allow the entropic symmetrization of two-phase flow models to be recovered, their hyperbolicity to be proved and generalized source terms to be obtained. Furthermore, we depart from a geometric approach of the interface and propose a multi-scale rendering of the interface to describe multi-fluid flows with complex interface dynamics. The Stationary Action Principle yields a single-velocity two-phase flow model coupling large and small scales of the flow. We then develop a splitting strategy based on a Finite Volume discretization and implement the new model in the industrial multi-physics CFD software CEDRE of ONERA to carry out a numerical verification. Finally, we construct and investigate a first building block of a hierarchy of test cases designed to be amenable to DNS while being close enough to industrial configurations, in order to assess the simulation results of the new model as well as of any upcoming models.
9

HUI, YANCHUAN. "Multi-scale Modelling and Design of Composite Structures." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2739922.

10

Badillo, Almaraz Hiram. "Numerical modelling based on the multiscale homogenization theory. Application in composite materials and structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/83924.

Abstract:
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, making the existing homogenization techniques unsuitable for analyzing such cases, in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) one global scale, which treats the composite as a homogeneous material and deals with the boundary conditions, the loads applied and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) one local scale, which obtains the homogenized response of the representative volume element or unit cell, and deals with the geometry distribution and with the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The numerical implementation of the restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity was performed by means of the Lagrange multipliers method. The formulation included a method to compute the homogenized non-linear tangent constitutive tensor once the threshold of nonlinearity of any of the unit cells has been surpassed. The procedure is based on performing a numerical differentiation applying a perturbation technique. The tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at the global and local scales in order to analyze the performance of the method at both scales. A simple averaging of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization process was implemented in the multi-domain homogenization method in order to speed up the computation, given the huge computational cost that the nested incremental-iterative solution entails. The effect of softening in two-scale homogenization was investigated following a smeared crack approach. Mesh objectivity was discussed first within the classical one-scale FE formulation, and then the concepts presented were extrapolated into the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research.
Several aspects were studied, such as analyzing different composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity) and composites with zones that present high strain gradients. The examples were carried out in composites with one and with several periodic domains using different unit cell configurations. The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
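The perturbation technique used here to obtain the homogenized tangent can be illustrated independently of any FE code: given a routine that returns the (homogenized) stress for a prescribed macroscopic strain, each column of the tangent operator is approximated by a forward finite difference with respect to one strain component. The sketch below does this in Python/NumPy for a simple nonlinear constitutive function standing in for the unit-cell response; the material model and the perturbation size are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def stress(strain):
    """Stand-in for the homogenized unit-cell response (Voigt notation, 2D: xx, yy, xy).

    A fictitious isotropic law with a cubic hardening term, used only for illustration.
    """
    lam, mu, beta = 10.0, 5.0, 50.0
    trace = strain[0] + strain[1]
    linear = lam * trace * np.array([1.0, 1.0, 0.0]) + 2.0 * mu * strain
    return linear + beta * np.linalg.norm(strain) ** 2 * strain   # mild nonlinearity

def numerical_tangent(strain, h=1.0e-7):
    """Tangent d(stress)/d(strain) by forward-difference perturbation of each component."""
    s0 = stress(strain)
    tangent = np.zeros((s0.size, strain.size))
    for j in range(strain.size):
        perturbed = strain.copy()
        perturbed[j] += h
        tangent[:, j] = (stress(perturbed) - s0) / h
    return tangent

eps_macro = np.array([0.02, -0.01, 0.005])
C = numerical_tangent(eps_macro)
print("numerical tangent:\n", np.round(C, 3))
print("symmetric to discretization error:", np.allclose(C, C.T, atol=1e-3))
```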

Books on the topic "Multi-Scale numerical methods"

1

Zeitlin, Vladimir. Geophysical Fluid Dynamics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198804338.001.0001.

Abstract:
The book explains the key notions and fundamental processes in the dynamics of the fluid envelopes of the Earth (transposable to other planets), and methods of their analysis, from the unifying viewpoint of the rotating shallow-water (RSW) model. The model, in its one- or two-layer versions, plays a distinguished role in geophysical fluid dynamics, having been used for around a century for conceptual understanding of various phenomena, for elaboration of approaches and methods to be applied later in more complete models, for development and testing of numerical codes and data-assimilation schemes, and for many other purposes. Principles of modelling of large-scale atmospheric and oceanic flows, and the corresponding approximations, are explained, and it is shown how single- and multi-layer versions of RSW arise from the primitive equations by vertical averaging, and how further time-averaging produces the celebrated quasi-geostrophic reductions of the model. Key concepts of geophysical fluid dynamics are presented and interpreted in RSW terms, and fundamentals of vortex and wave dynamics are explained, in Part 1 of the book, which is supplied with exercises and can be used as a textbook. Solutions of the problems are available from the Editorial Office on request. An in-depth treatment of dynamical processes, with special emphasis on the primordial process of geostrophic adjustment, on instabilities in geophysical flows, on vortex and wave turbulence, and on nonlinear wave interactions, follows in Part 2. Recently developed approaches to, and applications of, RSW, including moist-convective processes, constitute Part 3.
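For readers who want to see the rotating shallow-water model in action, the sketch below integrates the linearized one-dimensional RSW equations on an f-plane with simple forward-backward time stepping and periodic boundaries, starting from a height perturbation at rest, which is the classical geostrophic adjustment setup emphasized in the book. It is a minimal Python/NumPy illustration; the Coriolis parameter, gravity, depth, domain size and grid are arbitrary assumed values, and the code is not material from the book itself.

```python
import numpy as np

# Linearized 1-D rotating shallow water on an f-plane (periodic domain):
#   du/dt = f v - g dh/dx,   dv/dt = -f u,   dh/dt = -H du/dx
f0, g, H = 1.0e-4, 9.81, 100.0             # Coriolis parameter, gravity, mean depth (assumed)
L, n = 1.0e6, 400                          # domain length [m] and number of grid points
dx = L / n
x = np.arange(n) * dx
c = np.sqrt(g * H)                         # gravity-wave speed
dt = 0.5 * dx / c                          # CFL-limited step for the forward-backward scheme

h = np.exp(-((x - L / 2) / (0.02 * L)) ** 2)   # initial height bump [m], fluid at rest
u = np.zeros(n)
v = np.zeros(n)

def ddx(a):
    """Centered derivative with periodic boundaries."""
    return (np.roll(a, -1) - np.roll(a, 1)) / (2.0 * dx)

def energy(u, v, h):
    """Domain-averaged energy of the linearized system."""
    return 0.5 * np.mean(H * (u**2 + v**2) + g * h**2)

e0 = energy(u, v, h)
n_steps = int(5.0 / (f0 * dt))             # integrate for about five inertial time scales
for _ in range(n_steps):
    h = h - dt * H * ddx(u)                # forward-backward stepping: mass first ...
    u = u + dt * (f0 * v - g * ddx(h))     # ... then momentum using the new h
    v = v - dt * f0 * u                    # ... and the Coriolis turning using the new u

print(f"max transverse velocity generated by adjustment: {np.abs(v).max():.3f} m/s")
print(f"relative energy drift: {(energy(u, v, h) - e0) / e0:+.2e}")
```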
2

Sobczyk, Eugeniusz Jacek. Uciążliwość eksploatacji złóż węgla kamiennego wynikająca z warunków geologicznych i górniczych. Instytut Gospodarki Surowcami Mineralnymi i Energią PAN, 2022. http://dx.doi.org/10.33223/onermin/0222.

Abstract:
Hard coal mining is characterised by features that pose numerous challenges to its current operations and cause strategic and operational problems in planning its development. The most important of these include the high capital intensity of mining investment projects and the dynamically changing environment in which the sector operates, while the long-term role of the sector is dependent on factors originating at both national and international level. At the same time, the conditions for coal mining are deteriorating, the resources more readily available in active mines are being exhausted, mining depths are increasing, temperature levels in pits are rising, transport routes for staff and materials are getting longer, effective working time is decreasing, natural hazards are increasing, and seams with an increasing content of waste rock are being mined. The mining industry is currently in a very difficult situation, both in technical (mining) and economic terms. It cannot be ignored, however, that the difficult financial situation of Polish mining companies is largely exacerbated by their high operating costs. The cost of obtaining coal and its price are two key elements that determine the level of efficiency of Polish mines. This situation could be improved by streamlining the planning processes. This would involve striving for production planning that is as predictable as possible and, on the other hand, economically efficient. In this respect, it is helpful to plan the production from operating longwalls with full awareness of the complexity of geological and mining conditions and the resulting economic consequences. The constraints on increasing the efficiency of the mining process are due to the technical potential of the mining process, organisational factors and, above all, geological and mining conditions. The main objective of the monograph is to identify relations between geological and mining parameters and the level of longwall mining costs, and their daily output. In view of the above, it was assumed that it was possible to present the relationship between the costs of longwall mining and the daily coal output from a longwall as a function of onerous geological and mining factors. The monograph presents two models of onerous geological and mining conditions, including natural hazards, deposit (seam) parameters, mining (technical) parameters and environmental factors. The models were used to calculate two onerousness indicators, WUe and WUt, which synthetically define the level of impact of onerous geological and mining conditions on the mining process in relation to: operating costs at longwall faces (indicator WUe) and daily longwall mining output (indicator WUt). In the next research step, the analysis of direct relationships of selected geological and mining factors with longwall costs and the mining output level was conducted. For this purpose, two statistical models were built for the following dependent variables: unit operating cost (Model 1) and daily longwall mining output (Model 2). The models served two additional sub-objectives: interpretation of the influence of independent variables on dependent variables, and point forecasting. The statistical models were built on the basis of the historical production results of seven selected Polish mines. On the basis of the variability of geological and mining conditions at 120 longwalls, the influence of individual parameters on longwall mining between 2010 and 2019 was determined.
The identified relationships made it possible to formulate numerical forecast of unit production cost and daily longwall mining output in relation to the level of expected onerousness. The projection period was assumed to be 2020–2030. On this basis, an opinion was formulated on the forecast of the expected unit production costs and the output of the 259 longwalls planned to be mined at these mines. A procedure scheme was developed using the following methods: 1) Analytic Hierarchy Process (AHP) – mathematical multi-criteria decision-making method, 2) comparative multivariate analysis, 3) regression analysis, 4) Monte Carlo simulation. The utilitarian purpose of the monograph is to provide the research community with the concept of building models that can be used to solve real decision-making problems during longwall planning in hard coal mines. The layout of the monograph, consisting of an introduction, eight main sections and a conclusion, follows the objectives set out above. Section One presents the methodology used to assess the impact of onerous geological and mining conditions on the mining process. Multi-Criteria Decision Analysis (MCDA) is reviewed and basic definitions used in the following part of the paper are introduced. The section includes a description of AHP which was used in the presented analysis. Individual factors resulting from natural hazards, from the geological structure of the deposit (seam), from limitations caused by technical requirements, from the impact of mining on the environment, which affect the mining process, are described exhaustively in Section Two. Sections Three and Four present the construction of two hierarchical models of geological and mining conditions onerousness: the first in the context of extraction costs and the second in relation to daily longwall mining. The procedure for valuing the importance of their components by a group of experts (pairwise comparison of criteria and sub-criteria on the basis of Saaty’s 9-point comparison scale) is presented. The AHP method is very sensitive to even small changes in the value of the comparison matrix. In order to determine the stability of the valuation of both onerousness models, a sensitivity analysis was carried out, which is described in detail in Section Five. Section Six is devoted to the issue of constructing aggregate indices, WUe and WUt, which synthetically measure the impact of onerous geological and mining conditions on the mining process in individual longwalls and allow for a linear ordering of longwalls according to increasing levels of onerousness. Section Seven opens the research part of the work, which analyses the results of the developed models and indicators in individual mines. A detailed analysis is presented of the assessment of the impact of onerous mining conditions on mining costs in selected seams of the analysed mines, and in the case of the impact of onerous mining on daily longwall mining output, the variability of this process in individual fields (lots) of the mines is characterised. Section Eight presents the regression equations for the dependence of the costs and level of extraction on the aggregated onerousness indicators, WUe and WUt. The regression models f(KJC_N) and f(W) developed in this way are used to forecast the unit mining costs and daily output of the designed longwalls in the context of diversified geological and mining conditions. The use of regression models is of great practical importance. 
It makes it possible to approximate unit costs and daily output for newly designed longwall workings. The use of this knowledge may significantly improve the quality of planning processes and the effectiveness of the mining process.
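The AHP weighting step used in the monograph can be reproduced in a few lines: given a pairwise comparison matrix on Saaty's 1-9 scale, the priority vector is the normalized principal eigenvector, and the consistency ratio checks whether the expert judgements are acceptably coherent (a value below roughly 0.1 is the usual rule of thumb). The sketch below is a generic Python/NumPy illustration with an invented 4x4 comparison matrix, not data from the cited study.

```python
import numpy as np

# Random-index values used in the consistency ratio (Saaty), indexed by matrix size.
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_priorities(comparison):
    """Priority vector and consistency ratio for a reciprocal pairwise comparison matrix."""
    a = np.asarray(comparison, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                       # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                      # normalized priority vector
    ci = (eigvals[k].real - n) / (n - 1)              # consistency index
    cr = ci / RANDOM_INDEX[n]                         # consistency ratio
    return w, cr

# Invented comparison of four onerousness criteria (natural hazards, seam parameters,
# technical parameters, environmental factors) on Saaty's 1-9 scale.
A = np.array([
    [1.0,     3.0,     5.0,     7.0],
    [1 / 3.0, 1.0,     3.0,     5.0],
    [1 / 5.0, 1 / 3.0, 1.0,     3.0],
    [1 / 7.0, 1 / 5.0, 1 / 3.0, 1.0],
])

weights, cr = ahp_priorities(A)
print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3), "(acceptable if below ~0.1)")
```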

Book chapters on the topic "Multi-Scale numerical methods"

1

Navas, Pedro, Susana López-Querol, Rena C. Yu, and Bo Li. "Meshfree Methods Applied to Consolidation Problems in Saturated Soils." In Innovative Numerical Approaches for Multi-Field and Multi-Scale Problems, 241–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39022-2_11.

2

Georgiev, Vihar P., and Asen Asenov. "Multi-scale Computational Framework for Evaluating of the Performance of Molecular Based Flash Cells." In Numerical Methods and Applications, 196–203. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15585-2_22.

3

Balarac, G., G. H. Cottet, J. M. Etancelin, J. B. Lagaert, F. Perignon, and C. Picard. "Multi-scale Problems, High Performance Computing and Hybrid Numerical Methods." In The Impact of Applications on Mathematics, 245–55. Tokyo: Springer Japan, 2014. http://dx.doi.org/10.1007/978-4-431-54907-9_18.

4

A. Shah, Akeel, Puiki Leung, Qian Xu, Pang-Chieh Sui, and Wei Xing. "Numerical Simulation of Flow Batteries Using a Multi-scale Macroscopic-Mesoscopic Approach." In Engineering Applications of Computational Methods, 127–56. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2524-7_4.

5

Jehel, Pierre. "A Stochastic Multi-scale Approach for Numerical Modeling of Complex Materials—Application to Uniaxial Cyclic Response of Concrete." In Computational Methods in Applied Sciences, 123–60. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27996-1_6.

6

Sulsky, Deborah, and Ming Gong. "Improving the Material-Point Method." In Innovative Numerical Approaches for Multi-Field and Multi-Scale Problems, 217–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39022-2_10.

7

Diestmann, Thomas, Nils Broedling, Benedict Götz, and Tobias Melz. "Surrogate Model-Based Uncertainty Quantification for a Helical Gear Pair." In Lecture Notes in Mechanical Engineering, 191–207. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_16.

Abstract:
Competitive industrial transmission systems must perform most efficiently with reference to complex requirements and conflicting key performance indicators. This design challenge translates into a high-dimensional multi-objective optimization problem that requires complex algorithms and evaluation of computationally expensive simulations to predict physical system behavior and design robustness. Crucial for the design decision-making process is the characterization, ranking, and quantification of relevant sources of uncertainties. However, due to the strict time limits of product development loops, the overall computational burden of uncertainty quantification (UQ) may even drive state-of-the-art parallel computing resources to their limits. Efficient machine learning (ML) tools and techniques emphasizing high-fidelity simulation data-driven training will play a fundamental role in enabling UQ in the early-stage development phase. This investigation surveys UQ methods with a focus on noise, vibration, and harshness (NVH) characteristics of transmission systems. Quasi-static 3D contact dynamic simulations are performed to evaluate the static transmission error (TE) of meshing gear pairs under different loading and boundary conditions. TE indicates NVH excitation and is typically used as an objective function in the early-stage design process. The limited system size allows large-scale design of experiments (DoE) and enables numerical studies of various UQ sampling and modeling techniques where the design parameters are treated as random variables associated with tolerances from manufacturing and assembly processes. The model accuracy of generalized polynomial chaos expansion (gPC) and Gaussian process regression (GPR) is evaluated and compared. The results of the methods are discussed to conclude efficient and scalable solution procedures for robust design optimization.
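One of the surrogate techniques compared in this chapter, Gaussian process regression, can be sketched in a few lines of linear algebra: with a squared-exponential kernel, the posterior mean and variance at new inputs follow from the usual closed-form expressions. The example below fits a noisy one-dimensional test function in Python/NumPy; the kernel hyperparameters and the test function are arbitrary stand-ins for the transmission-error surrogate discussed in the chapter, and no hyperparameter optimization is performed.

```python
import numpy as np

def rbf_kernel(xa, xb, length_scale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_predict(x_train, y_train, x_test, noise=1.0e-2):
    """Posterior mean and standard deviation of a zero-mean GP regression."""
    k_tt = rbf_kernel(x_train, x_train) + noise**2 * np.eye(x_train.size)
    k_ts = rbf_kernel(x_train, x_test)
    k_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(k_tt, y_train)
    mean = k_ts.T @ alpha
    cov = k_ss - k_ts.T @ np.linalg.solve(k_tt, k_ts)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

rng = np.random.default_rng(4)
x_train = rng.uniform(0.0, 1.0, 12)                                # e.g. sampled design parameters
y_train = np.sin(6.0 * x_train) + 0.05 * rng.standard_normal(12)   # noisy responses
x_test = np.linspace(0.0, 1.0, 5)

mean, std = gpr_predict(x_train, y_train, x_test)
for xt, m, s in zip(x_test, mean, std):
    print(f"x = {xt:.2f}: prediction = {m:+.3f} +/- {2 * s:.3f}  (true {np.sin(6 * xt):+.3f})")
```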
8

Perakis, Nikolaos, and Oskar J. Haidn. "Experimental and Numerical Investigation of CH4/O2 Rocket Combustors." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 359–79. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-53847-7_23.

Abstract:
The experimental investigation of sub-scale rocket engines gives significant information about the combustion dynamics and wall heat transfer phenomena occurring in full-scale hardware. At the same time, the performed experiments serve as validation test cases for numerical CFD models and for that reason it is vital to obtain accurate experimental data. In the present work, an inverse method is developed able to accurately predict the axial and circumferential heat flux distribution in CH4/O2 rocket combustors. The obtained profiles are used to deduce information about the injector-injector and injector-flame interactions. Using a 3D CFD simulation of the combustion and heat transfer within a multi-element thrust chamber, the physical phenomena behind the measured heat flux profiles can be inferred. A very good qualitative and quantitative agreement between the experimental measurements and the numerical simulations is achieved.
APA, Harvard, Vancouver, ISO, and other styles
9

Bonfigli, Giuseppe, and Patrick Jenny. "Application of the Multi-Scale-Finite-Volume Method to the Simulation of Incompressible Flows with Immersed Boundaries." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 9–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14243-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Skripnyak, Vladimir A., Evgeniya G. Skripnyak, and Vladimir V. Skripnyak. "Failure Mechanisms of Alloys with a Bimodal Grain Size Distribution." In Springer Tracts in Mechanical Engineering, 521–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60124-9_23.

Full text
Abstract:
A multi-scale computational approach was used for the investigation of high strain rate deformation and fracture of magnesium and titanium alloys with a bimodal distribution of grain sizes under dynamic loading. The processes of inelastic deformation and damage of titanium alloys were investigated at the mesoscale level by the numerical simulation method. It was shown that localization of plastic deformation under tension at high strain rates depends on the grain size distribution. The critical fracture stress of the alloys depends on the relative volume of coarse grains in the representative volume. Microcrack nucleation under quasi-static and dynamic loading is associated with strain localization in the ultrafine-grained partial volumes. Microcracks arise in the vicinity of the boundaries between coarse and ultrafine grains. It is revealed that the occurrence of a bimodal grain size distribution causes increased ductility, but decreased tensile strength, of UFG alloys. The increase in fine precipitation concentration results not only in strengthening but also in an increase in ductility of UFG alloys with a bimodal grain size distribution.
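To make the notion of a bimodal grain size distribution concrete, the sketch below samples grain diameters from a two-population lognormal mixture and estimates the volume fraction carried by the coarse grains. The population parameters are illustrative only and are not taken from the chapter.

```python
# Minimal sketch: sampling a bimodal grain-size distribution and estimating the
# relative volume of coarse grains in a representative volume. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_grains = 20000

# Mixture of two lognormal populations: ultrafine (~0.3 um) and coarse (~5 um).
is_coarse = rng.random(n_grains) < 0.2
d = np.where(is_coarse,
             rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_grains),
             rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_grains))  # um

volumes = np.pi / 6.0 * d ** 3                 # treat grains as spheres
coarse_fraction = volumes[is_coarse].sum() / volumes.sum()
print("coarse-grain volume fraction:", round(coarse_fraction, 3))
```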
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multi-Scale numerical methods"

1

Feuillette, C., K. Schmidt, K. Maile, A. Klenk, and E. Roos. "New Concepts for Integrity and Lifetime Assessment of Boiler and Turbine Components for Advanced Ultra-Supercritical Fossil Plants." In AM-EPRI 2010, edited by D. Gandy, J. Shingledecker, and R. Viswanathan, 603–19. ASM International, 2010. http://dx.doi.org/10.31399/asm.cp.am-epri-2010p0603.

Full text
Abstract:
Advanced ultra-supercritical fossil plants operated at 700/725 °C and up to 350 bar are currently planned to be realized in the next decade. Due to the increase in steam parameters and the use of new materials, e.g. 9-11%Cr steels and nickel-based alloys, the design of highly loaded components is increasingly approaching the classical design limits with regard to critical wall thickness and the related tolerable thermal gradients. To make full use of the strength potential of new boiler materials while also taking into account their specific stress-strain relaxation behavior, new methods are required for reliable integrity analyses and lifetime assessment procedures. Numerical finite element (FE) simulation using inelastic constitutive equations offers the possibility of "design by analysis" based on state-of-the-art FE codes and user-defined advanced inelastic material laws. Furthermore, material-specific damage mechanisms must be considered in such assessments. With regard to component behavior, multiaxial loading conditions must be considered, as well as the behavior of materials and welded joints in the as-built state. Finally, an outlook on the capabilities of new multi-scale approaches to describe material and component behavior will be given.
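As a hedged illustration of the kind of user-defined inelastic material law referred to above, the following sketch evaluates a Norton-type secondary creep rate. The constants are placeholders and are not calibrated to 9-11%Cr steels or nickel-based alloys.

```python
# Minimal sketch: a Norton-type creep law of the kind used in user-defined
# inelastic material models for "design by analysis". Constants are illustrative.
import numpy as np

def norton_creep_rate(stress_mpa, temperature_k, A=1.0, n=5.0, Q=300e3, R=8.314):
    """Secondary creep strain rate (arbitrary units): A * sigma^n * exp(-Q / (R*T))."""
    return A * stress_mpa ** n * np.exp(-Q / (R * temperature_k))

stresses = np.array([80.0, 100.0, 120.0])                    # MPa
print(norton_creep_rate(stresses, temperature_k=973.15))     # roughly 700 degrees C
```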
APA, Harvard, Vancouver, ISO, and other styles
2

Avgerinos, Stavros, and Giovanni Russo. "Numerical methods for multi scale hyperbolic problems, with application to multi-fluid and sedimentation." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS (ICNAAM 2017). Author(s), 2018. http://dx.doi.org/10.1063/1.5043768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Qin, Guan, Bin Gong, Linfeng Bi, and Xiao-hui Wu. "Multi-scale and Multi-physics Methods for Numerical Modeling of Fluid Flow in Fractured Formations." In SPE EUROPEC/EAGE Annual Conference and Exhibition. Society of Petroleum Engineers, 2011. http://dx.doi.org/10.2118/143590-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yu, Hao Nan, Juan Du, Zhao Yi Ye, Li Ye Mei, Sheng Yu Huang, Wei Yang, and Chuan Xu. "Context-Awareness Network with Multi-Level Feature Fusion for Building Change Detection." In The 6th International Conference on Numerical Modelling in Engineering. Switzerland: Trans Tech Publications Ltd, 2024. http://dx.doi.org/10.4028/p-rgow4x.

Full text
Abstract:
Building change detection is critical for urban management. Deep learning methods are more discriminative and learnable than traditional change detection methods, but in complicated background environments it is still difficult to precisely pinpoint change zones of interest. Most change detection networks suffer from inaccurate feature characterization during feature extraction and fusion. As a solution to these problems, we propose the use of multi-level feature fusion in conjunction with a context-awareness network to detect building changes. To obtain multi-scale change characteristics, our context-awareness network employs multi-scale patch embedding, followed by multi-path Transformers to enhance learning and extract more suitable features. The multi-scale fusion module ensures semantic consistency of the change features, making the detected change regions more accurate. Visual comparisons and quantitative evaluations of our method showed that it outperformed seven popular change detection methods on the LEVIR-CD dataset.
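The following minimal PyTorch sketch illustrates the general idea of multi-scale patch embedding with feature fusion for a bitemporal image pair. The layer sizes, fusion rule, and change cue are illustrative and are not the network proposed in the paper.

```python
# Minimal sketch: two-scale patch embedding, 1x1-conv fusion, and a crude
# bitemporal change cue. Illustrative architecture, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEmbed(nn.Module):
    """Toy two-scale patch embedding with a 1x1-convolution fusion layer."""
    def __init__(self, in_ch=3, dim=32):
        super().__init__()
        self.fine = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)    # fine patches
        self.coarse = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)  # coarse patches
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)            # multi-scale fusion

    def forward(self, x):
        f = self.fine(x)
        c = F.interpolate(self.coarse(x), size=f.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat([f, c], dim=1))

embed = MultiScaleEmbed()
t1 = torch.randn(1, 3, 256, 256)   # image at time 1
t2 = torch.randn(1, 3, 256, 256)   # image at time 2
# Absolute difference of fused features from the two dates as a crude change cue.
change_features = torch.abs(embed(t1) - embed(t2))
print(change_features.shape)       # torch.Size([1, 32, 64, 64])
```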
APA, Harvard, Vancouver, ISO, and other styles
5

Kazmer, David O., Stephen P. Johnston, Mary E. Moriarty, and Christopher Santeufemio. "Passive Multi-Scale Alignment." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-62320.

Full text
Abstract:
Methods are presented for self-alignment and assembly of objects with micron- and nanometer-level features. The approach is a combination of kinematic coupling and elastic averaging in which mating alignment features spanning multiple length scales are successively brought into contact. When the objects are pressed together, the larger alignment features cause the necessary deformation to ensure adequate alignment at the smaller length scales. Analytical and numerical modeling indicate that the largest alignment features can be designed to generally resolve global systematic errors, while the smaller alignment features can correct local errors to achieve sub-micron alignment. Physical realization with ion beam etching, deposition, and thermal imprint lithography is also discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, H., X. G. Fan, H. W. Li, H. Li, M. Zhan, Z. C. Sun, L. G. Guo, and Y. L. Liu. "Multi-scale through-process modeling and simulation in precision forming of complex components of difficult-to-deform material." In THE 11TH INTERNATIONAL CONFERENCE ON NUMERICAL METHODS IN INDUSTRIAL FORMING PROCESSES: NUMIFORM 2013. AIP, 2013. http://dx.doi.org/10.1063/1.4806819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mota, Alejandro, Irina Tezaur, Daria Koliesnikova, and Jonathan Hoy. "The Schwarz Alternating Method for Multi-Scale Contact Mechanics." In Proposed for presentation at the Congress on Numerical Methods in Engineering (CMN) 2022 held September 12-14, 2022 in Las Palmas de Gran Canaria, Spain. US DOE, 2022. http://dx.doi.org/10.2172/2004375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Wing Kam. "A Multi-scale Simulation of Micro-forming Process with RKEM." In MATERIALS PROCESSING AND DESIGN: Modeling, Simulation and Applications - NUMIFORM 2004 - Proceedings of the 8th International Conference on Numerical Methods in Industrial Forming Processes. AIP, 2004. http://dx.doi.org/10.1063/1.1766508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Zhaolong, Yongzhong Zhang, Jin Wu, Lei Zhang, Deng Feng, Peng Zhou, Qiang Ge, and Mingzhe Song. "Multi-Scale Fracture Prediction and Modeling." In International Petroleum Technology Conference. IPTC, 2025. https://doi.org/10.2523/iptc-24989-ms.

Full text
Abstract:
Fractured reservoirs are among the critical reservoir types, and the accurate characterization and prediction of fractures are essential for the high-efficiency development of oil fields. Due to their complexity and the non-uniqueness of solutions, fractures pose significant challenges to quantitative prediction. By integrating outcrop, seismic and core research methods, this paper analyzes fracture characteristics such as length, strike and aperture to determine the genesis, scale, and period of multi-scale fractures. Subsequently, we employ the finite element numerical simulation method to build a geomechanics model, which can be used to quantitatively simulate the aperture, density, and other parameters of fractures. Additionally, statistical methods are employed to provide a quantitative description of formation, fault, and fracture parameters. A multi-scale discrete fracture network model is then constructed using single-well fracture data, while considering multiple constraints such as geomechanics, curvature, and distance to faults. The geomechanics-based multi-scale fracture modeling method simulates fracture development under various conditions and constraints, considering fracture genesis from a comprehensive perspective. This approach allows for more systematic identification of fractures and more accurate predictions, leading to improved prediction accuracy. The method provides valuable guidance for well deployment, optimizing gas recovery rates, and enhancing recovery. It also serves as a basis for later engineering modifications and has significant research implications for gas development.
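As a schematic illustration of a constrained discrete fracture network, the sketch below places stochastic fractures with a density that decays with distance to a fault, one of the constraints named in the abstract. All geometry and parameters are invented for illustration and are not the authors' calibrated model.

```python
# Minimal sketch: a stochastic discrete fracture network (DFN) whose fracture
# density decays with distance to a fault trace. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(2)
domain = 1000.0                  # model size, m
fault_x = 500.0                  # vertical fault trace at x = 500 m

# Candidate fracture centres; keep each with probability decaying away from the fault.
n_candidates = 5000
xy = rng.uniform(0.0, domain, size=(n_candidates, 2))
p_keep = np.exp(-np.abs(xy[:, 0] - fault_x) / 150.0)
keep = rng.random(n_candidates) < p_keep
centres = xy[keep]

# Power-law lengths (Pareto-like, >= 10 m) and fault-parallel-dominated strikes.
lengths = 10.0 * (1.0 - rng.random(len(centres))) ** (-1.0 / 1.5)
strikes = rng.normal(90.0, 15.0, size=len(centres))   # degrees

print(len(centres), "fractures; median length [m]:", round(float(np.median(lengths)), 1))
```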
APA, Harvard, Vancouver, ISO, and other styles
10

Raghavan, Prasanna. "Multi-Scale Model for Damage Analysis in Fiber-Reinforced Composites With Debonding." In MATERIALS PROCESSING AND DESIGN: Modeling, Simulation and Applications - NUMIFORM 2004 - Proceedings of the 8th International Conference on Numerical Methods in Industrial Forming Processes. AIP, 2004. http://dx.doi.org/10.1063/1.1766812.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Multi-Scale numerical methods"

1

Cai, Wei. Multi-scale and Multi-physics Numerical Methods for Modeling Transport in Mesoscopic Systems. Fort Belvoir, VA: Defense Technical Information Center, September 2012. http://dx.doi.org/10.21236/ada572398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cai, Wei. Multi-scale and Multi-physics Numerical Methods for Modeling Transport in Mesoscopic Systems. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada617374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ayoul-Guilmard, Q., F. Nobile, S. Ganesh, M. Nuñez, R. Tosi, C. Soriano, and R. Rosi. D5.5 Report on the application of multi-level Monte Carlo to wind engineering. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.03.

Full text
Abstract:
We study the use of multi-level Monte Carlo methods for wind engineering. This report brings together methodological research on uncertainty quantification and work on target applications of the ExaQUte project in wind and civil engineering. First, a multi-level Monte Carlo method for the estimation of the conditional value at risk and an adaptive algorithm are presented. Their reliability and performance are shown on the time-average of a non-linear oscillator and on the lift coefficient of an airfoil, with both preset and adaptively refined meshes. Then, we propose an adaptive multi-fidelity Monte Carlo algorithm for turbulent fluid flows, where multi-level Monte Carlo methods were found to be inefficient. Its efficiency is studied and demonstrated on the benchmark problem of quantifying the uncertainty on the drag force of a tall building under random turbulent wind conditions. All numerical experiments showcase the open-source software stack of the ExaQUte project for large-scale computing in a distributed environment.
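For readers unfamiliar with the multi-level Monte Carlo idea, the following minimal sketch estimates an expectation by summing coupled level corrections on a toy hierarchy. It is not the ExaQUte implementation; the level model is a purely illustrative stand-in for a PDE solve.

```python
# Minimal sketch of a multi-level Monte Carlo (MLMC) estimator for E[Q] on a toy
# level hierarchy. The "solver" is a noisy analytic stand-in, illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def sample_pair(level, n):
    # Coupled fine/coarse samples of a toy quantity of interest Q_l. Sharing the
    # same random draw w is what gives MLMC its variance reduction.
    w = rng.standard_normal(n)

    def q(l):
        bias = 2.0 ** (-(l + 1))                       # discretization bias shrinks with level
        return 1.0 - bias + 0.1 * 2.0 ** (-l / 2) * w  # noisy stand-in for a PDE solve

    fine = q(level)
    coarse = q(level - 1) if level > 0 else np.zeros(n)
    return fine, coarse

def mlmc_estimate(levels, samples_per_level):
    # Telescoping sum: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].
    return sum(np.mean(f - c)
               for f, c in (sample_pair(l, n) for l, n in zip(levels, samples_per_level)))

print("MLMC estimate of E[Q]:",
      mlmc_estimate(levels=[0, 1, 2, 3], samples_per_level=[4000, 1000, 250, 60]))
```

Most of the samples are spent on the cheap coarse levels, while only a few expensive fine-level corrections are needed, which is the source of the method's efficiency.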
APA, Harvard, Vancouver, ISO, and other styles
4

Dinovitzer. L52303 Development of Techniques to Assess the Long-Term Integrity of Wrinkled Pipeline. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 2009. http://dx.doi.org/10.55274/r0010332.

Full text
Abstract:
The objective of the project was to develop a numerical model that may be used to predict the wrinkle formation and post-formation behavior of a pipeline considering the effect of soil confinement, and to define the specifications for the development of a comprehensive wrinkle integrity assessment process. The result of this research is the development of wrinkle assessment techniques that could be used directly or could be used to codify maintenance guidelines. This project focused specifically on pipe-soil interaction, modeling wrinkle formation as a result of the relative movement of the pipe and soil. The structural model developed and validated in this program and previous work could be applied to wrinkle bends; however, this issue is not specifically addressed in this report. In addition, the project development efforts focused on the monotonic soil interaction event of idealized (e.g., no secondary degradation like corrosion features) pipe segments. The project completed a critical review of existing structural and soil modelling techniques to identify the most suitable technologies for this application. The soil-pipe interaction under soil movement was found to be best represented using the LS-DYNA Multi-material Eulerian technique, which permitted the application of a number of suitable soil constitutive models. This analysis tool permitted the consideration of a range of soil types and large soil displacements. Having defined the most suitable tool set, several pipe-soil interaction models were developed. These models were used to illustrate the types of analyses that could be completed and the capabilities of the models, and to illustrate the sensitivity of the scenario loads and displacements to changes in soil, pipe and other parameters. The modeling results were discussed to demonstrate that their trends and results were in line with intuitive assumptions and engineering judgment. Additional models were developed to simulate large-scale pipe-soil interaction laboratory test programs. The results of the simulated test programs were compared with the laboratory results as an initial validation of the modeling techniques and tools. The simulated soil displacement patterns, pipe strains and pipe displacements were shown to agree well with experimental results, and as such illustrated the ability of the models to reproduce idealized pipe-soil interaction events. Full-scale soil displacement events were modeled to illustrate the application of the modeling tools to forecast or predict the effects of axial and transverse soil movements on buried pipeline segments. These results were used to illustrate the methods and assumptions inherent in the application of the modeling tools to predict soil loading on pipeline systems.
APA, Harvard, Vancouver, ISO, and other styles
5

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or a neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information. An improved color-sorting model, which is stable and does not require recalibration for each season, was developed. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
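As a generic illustration of comparing a statistical classifier with a small neural network on multi-sensor features, the sketch below uses synthetic data and scikit-learn. It does not reproduce the project's sensors, models, or datasets.

```python
# Minimal sketch: a statistical classifier vs. a small neural network on synthetic
# "multi-sensor" produce features. Data and models are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

# 8 toy sensor channels, 3 quality grades.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("LDA accuracy:", round(lda.score(X_te, y_te), 3))
print("MLP accuracy:", round(mlp.score(X_te, y_te), 3))
```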
APA, Harvard, Vancouver, ISO, and other styles
6

Bray, Jonathan, Ross Boulanger, Misko Cubrinovski, Kohji Tokimatsu, Steven Kramer, Thomas O'Rourke, Ellen Rathje, Russell Green, Peter Robertson, and Christine Beyzaei. U.S.—New Zealand— Japan International Workshop, Liquefaction-Induced Ground Movement Effects, University of California, Berkeley, California, 2-4 November 2016. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, March 2017. http://dx.doi.org/10.55461/gzzx9906.

Full text
Abstract:
There is much to learn from the recent New Zealand and Japan earthquakes. These earthquakes produced differing levels of liquefaction-induced ground movements that damaged buildings, bridges, and buried utilities. Along with the often spectacular observations of infrastructure damage, there were many cases where well-built facilities located in areas of liquefaction-induced ground failure were not damaged. Researchers are working on characterizing and learning from these observations of both poor and good performance. The “Liquefaction-Induced Ground Movements Effects” workshop provided an opportunity to take advantage of recent research investments following these earthquake events to develop a path forward for an integrated understanding of how infrastructure performs with various levels of liquefaction. Fifty-five researchers in the field, two-thirds from the U.S. and one-third from New Zealand and Japan, convened in Berkeley, California, in November 2016. The objective of the workshop was to identify research thrusts offering the greatest potential for advancing our capabilities for understanding, evaluating, and mitigating the effects of liquefaction-induced ground movements on structures and lifelines. The workshop also advanced the development of younger researchers by identifying promising research opportunities and approaches, and promoting future collaborations among participants. During the workshop, participants identified five cross-cutting research priorities that need to be addressed to advance our scientific understanding of and engineering procedures for soil liquefaction effects during earthquakes. Accordingly, this report was organized to address five research themes: (1) case history data; (2) integrated site characterization; (3) numerical analysis; (4) challenging soils; and (5) effects and mitigation of liquefaction in the built environment and communities. These research themes provide an integrated approach toward transformative advances in addressing liquefaction hazards worldwide. The archival documentation of liquefaction case history datasets in electronic data repositories for use by the broader research community is critical to accelerating advances in liquefaction research. Many of the available liquefaction case history datasets are not fully documented, published, or shared. Developing and sharing well-documented liquefaction datasets reflects significant research effort. Therefore, datasets should be published with a permanent DOI, with appropriate citation language for proper acknowledgment in publications that use the data. Integrated site characterization procedures that incorporate qualitative geologic information about the soil deposits at a site and the quantitative information from in situ and laboratory engineering tests of these soils are essential for quantifying and minimizing the uncertainties associated with site characterization. Such information is vitally important to help identify potential failure modes and guide in situ testing. At the site scale, one potential way to do this is to use proxies for depositional environments. At the fabric and microstructure scale, multiple in situ tests that induce different levels of strain should be used to characterize soil properties. The development of new in situ testing tools and methods that are more sensitive to soil fabric and microstructure should be continued.
The development of robust, validated analytical procedures for evaluating the effects of liquefaction on civil infrastructure persists as a critical research topic. Robust validated analytical procedures would translate into more reliable evaluations of critical civil infrastructure performance, support the development of mechanics-based, practice-oriented engineering models, help eliminate suspected biases in our current engineering practices, and facilitate greater integration with structural, hydraulic, and wind engineering analysis capabilities for addressing multi-hazard problems. Effective collaboration across countries and disciplines is essential for developing analytical procedures that are robust across the full spectrum of geologic, infrastructure, and natural hazard loading conditions encountered in practice. There are soils that are challenging to characterize, to model, and to evaluate, because their responses differ significantly from those of clean sands: they cannot be sampled and tested effectively using existing procedures, their properties cannot be estimated confidently using existing in situ testing methods, or constitutive models to describe their responses have not yet been developed or validated. Challenging soils include but are not limited to: interbedded soil deposits, intermediate (silty) soils, mine tailings, gravelly soils, crushable soils, aged soils, and cemented soils. New field and laboratory test procedures are required to characterize the responses of these materials to earthquake loadings, physical experiments are required to explore mechanisms, and new soil constitutive models tailored to describe the behavior of such soils are required. Well-documented case histories involving challenging soils where both the poor and good performance of engineered systems are documented are also of high priority. Characterizing and mitigating the effects of liquefaction on the built environment requires understanding its components and interactions as a system, including residential housing, commercial and industrial buildings, public buildings and facilities, and spatially distributed infrastructure, such as electric power, gas and liquid fuel, telecommunication, transportation, water supply, wastewater conveyance/treatment, and flood protection systems. Research to improve the characterization and mitigation of liquefaction effects on the built environment is essential for achieving resiliency. For example, the complex mechanisms of ground deformation caused by liquefaction and building response need to be clarified, and the potential bias and dispersion in practice-oriented procedures for quantifying building response to liquefaction need to be quantified. Component-focused and system-performance research on lifeline response to liquefaction is required. Research on component behavior can be advanced by numerical simulations in combination with centrifuge and large-scale soil–structure interaction testing. System response requires advanced network analysis that accounts for the propagation of uncertainty in assessing the effects of liquefaction on large, geographically distributed systems. Lastly, research on liquefaction mitigation strategies, including aspects of ground improvement, structural modification, system health monitoring, and rapid recovery planning, is needed to identify the most effective, cost-efficient, and sustainable measures to improve the response and resiliency of the built environment.
APA, Harvard, Vancouver, ISO, and other styles
7

Mazzoni, Silvia, Nicholas Gregor, Linda Al Atik, Yousef Bozorgnia, David Welch, and Gregory Deierlein. Probabilistic Seismic Hazard Analysis and Selecting and Scaling of Ground-Motion Records (PEER-CEA Project). Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/zjdn7385.

Full text
Abstract:
This report is one of a series of reports documenting the methods and findings of a multi-year, multi-disciplinary project coordinated by the Pacific Earthquake Engineering Research Center (PEER) and funded by the California Earthquake Authority (CEA). The overall project is titled “Quantifying the Performance of Retrofit of Cripple Walls and Sill Anchorage in Single-Family Wood-Frame Buildings,” henceforth referred to as the “PEER–CEA Project.” The overall objective of the PEER–CEA Project is to provide scientifically based information (e.g., testing, analysis, and resulting loss models) that measures and assesses the effectiveness of seismic retrofit to reduce the risk of damage and associated losses (repair costs) of wood-frame houses with cripple wall and sill anchorage deficiencies as well as retrofitted conditions that address those deficiencies. Tasks that support and inform the loss-modeling effort are: (1) collecting and summarizing existing information and results of previous research on the performance of wood-frame houses; (2) identifying construction features to characterize alternative variants of wood-frame houses; (3) characterizing earthquake hazard and ground motions at representative sites in California; (4) developing cyclic loading protocols and conducting laboratory tests of cripple wall panels, wood-frame wall subassemblies, and sill anchorages to measure and document their response (strength and stiffness) under cyclic loading; and (5) the computer modeling, simulations, and the development of loss models as informed by a workshop with claims adjustors. This report is a product of Working Group 3 (WG3), Task 3.1: Selecting and Scaling Ground-motion records. The objective of Task 3.1 is to provide suites of ground motions to be used by other working groups (WGs), especially Working Group 5: Analytical Modeling (WG5) for Simulation Studies. The ground motions used in the numerical simulations are intended to represent seismic hazard at the building site. The seismic hazard is dependent on the location of the site relative to seismic sources, the characteristics of the seismic sources in the region, and the local soil conditions at the site. To achieve a proper representation of hazard across the State of California, ten sites were selected, and a site-specific probabilistic seismic hazard analysis (PSHA) was performed at each of these sites for both a soft soil (Vs30 = 270 m/sec) and a stiff soil (Vs30 = 760 m/sec). The PSHA used the UCERF3 seismic source model, which represents the latest seismic source model adopted by the USGS [2013], and NGA-West2 ground-motion models. The PSHA was carried out for structural periods ranging from 0.01 to 10 sec. At each site and soil class, the results from the PSHA—hazard curves, hazard deaggregation, and uniform-hazard spectra (UHS)—were extracted for a series of ten return periods, prescribed by WG5 and WG6, ranging from 15.5 to 2500 years. For each case (site, soil class, and return period), the UHS was used as the target spectrum for selection and modification of a suite of ground motions. Additionally, another set of target spectra based on “Conditional Spectra” (CS), which are more realistic than UHS, was developed [Baker and Lee 2018]. The Conditional Spectra are defined by the median (Conditional Mean Spectrum) and a period-dependent variance. A suite of at least 40 record pairs (horizontal) was selected and modified for each return period and target-spectrum type.
Thus, for each ground-motion suite, 40 or more record pairs were selected using the deaggregation of the hazard, resulting in more than 200 record pairs per target-spectrum type at each site. The suites contained more than 40 records in case some were rejected by the modelers due to secondary characteristics; however, none were rejected, and the complete set was used. For the case of UHS as the target spectrum, the selected motions were modified (scaled) such that the average of the median spectrum (RotD50) [Boore 2010] of the ground-motion pairs follows the target spectrum closely within the period range of interest to the analysts. In communications with WG5 researchers, for ground-motion (time histories, or time series) selection and modification, a period range of 0.01–2.0 sec was selected for this specific application of the project. The duration metrics and pulse characteristics of the records were also used in the final selection of ground motions. The damping ratio for the PSHA and ground-motion target spectra was set to 5%, which is standard practice in engineering applications. For the cases where the CS was used as the target spectrum, the ground-motion suites were selected and scaled using a modified version of the conditional spectrum ground-motion selection tool (CS-GMS tool) developed by Baker and Lee [2018]. This tool selects and scales a suite of ground motions to meet both the median and the user-defined variability. This variability is defined by the relationship developed by Baker and Jayaram [2008]. The computation of CS requires a structural period for the conditional model. In collaboration with WG5 researchers, a conditioning period of 0.25 sec was selected as representative of the fundamental mode of vibration of the buildings of interest in this study. Working Group 5 carried out a sensitivity analysis of using other conditioning periods, and the results and discussion of the selection of the conditioning period are reported in Section 4 of the WG5 PEER report entitled Technical Background Report for Structural Analysis and Performance Assessment. The WG3.1 report presents a summary of the selected sites, the seismic-source characterization model, and the ground-motion characterization model used in the PSHA, followed by selection and modification of suites of ground motions. The Record Sequence Number (RSN) and the associated scale factors are tabulated in the Appendices of this report, and the actual time-series files can be downloaded from the PEER Ground-motion database Portal (https://ngawest2.berkeley.edu/).
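To illustrate the flavor of spectrum-matched scaling, the following sketch computes one scale factor per record so that the suite's log-mean spectrum approximately tracks a target spectrum over a period band. The spectra are synthetic placeholders rather than NGA-West2 records, and the procedure is a simplification of the report's methodology.

```python
# Minimal sketch: per-record scale factors so the suite's log-mean response
# spectrum approximately matches a target spectrum over a period band.
import numpy as np

rng = np.random.default_rng(4)
periods = np.geomspace(0.01, 2.0, 50)          # period band of interest, s

# Toy target spectrum (UHS-like shape) in g; purely illustrative.
target = 0.8 * np.exp(-0.5 * (np.log(periods / 0.3) / 0.8) ** 2) + 0.05

# 40 toy record spectra with lognormal scatter about the target shape.
records = target * rng.lognormal(mean=0.0, sigma=0.4, size=(40, periods.size))

# One scale factor per record: match each record's mean log ordinate to the target's.
scale = np.exp(np.mean(np.log(target) - np.log(records), axis=1))
scaled_log_mean = np.mean(np.log(records * scale[:, None]), axis=0)

# How closely the scaled suite's log-mean spectrum tracks the target over the band.
misfit = np.max(np.abs(np.exp(scaled_log_mean) / target - 1.0))
print("max relative misfit of suite log-mean vs. target:", round(float(misfit), 3))
```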
APA, Harvard, Vancouver, ISO, and other styles
8

A SIMPLE METHOD FOR A RELIABLE MODELLING OF THE NONLINEAR BEHAVIOUR OF BOLTED CONNECTIONS IN STEEL LATTICE TOWERS. The Hong Kong Institute of Steel Construction, March 2022. http://dx.doi.org/10.18057/ijasc.2022.18.1.6.

Full text
Abstract:
The behaviour of bolted connections in steel lattice transmission line towers affects their load-bearing capacity and failure mode. Bolted connections are commonly modelled as pinned or fixed joints, but their behaviour lies between these two extremes and evolves in a nonlinear manner. Accordingly, an accurate finite element modelling of the structural response of complete steel lattice towers requires the consideration of various nonlinear phenomena involved in bolted connections, such as bolt slippage. In this study, a practical method is proposed for the modelling of the nonlinear response of steel lattice tower connections involving one or multiple bolts. First, the local load-deformation behaviour of single-bolt lap connections is evaluated analytically depending on various geometric and material parameters and construction details. Then, the predicted nonlinear behaviour for a given configuration serves as an input to a 2D/3D numerical model of the entire assembly of plates in which the bolted joints are represented as discrete elements. For comparison purposes, an extensive experimental study comprising forty-four tests was conducted on steel plates assembled with one or two bolts. This approach is also extended to simulate the behaviour of assemblies including four bolts, and the obtained results are checked against experimental datasets from the literature. The obtained results show that the proposed method can accurately predict the response of a variety of multi-bolt connections. A potential application of the strategy developed in this paper could be in the numerical modelling of full-scale steel lattice towers, particularly for a reliable estimation of the displacements.
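As a hedged sketch of the kind of nonlinear load-deformation law that can feed a discrete joint element, the code below defines a piecewise curve with an elastic branch, a slip plateau, and a post-slip bearing branch. The stiffnesses and slip length are illustrative, not the paper's calibrated values.

```python
# Minimal sketch: piecewise load-deformation law for a single-bolt lap connection
# with elastic, slip, and bearing branches. All parameter values are illustrative.

def bolt_force(delta, k_initial=50.0, f_slip=20.0, slip=1.5, k_bearing=30.0):
    """Force [kN] as a function of joint deformation delta [mm]."""
    d_el = f_slip / k_initial                   # end of the elastic (friction) branch
    if delta <= d_el:
        return k_initial * delta                # elastic / friction-governed branch
    if delta <= d_el + slip:
        return f_slip                           # bolt slippage at roughly constant load
    return f_slip + k_bearing * (delta - d_el - slip)   # bearing of bolt against plate

for d in [0.1, 0.4, 1.0, 2.0, 3.0]:
    print(f"delta = {d:.1f} mm -> F = {bolt_force(d):.1f} kN")
```

A curve like this, tabulated point by point, is the type of input a discrete spring or connector element in a finite element model typically accepts.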
APA, Harvard, Vancouver, ISO, and other styles