Dissertations / Theses on the topic 'Models of rupture (MOR)'


Consult the top 43 dissertations / theses for your research on the topic 'Models of rupture (MOR).'


1

Crabbé, Blandine. "Gradient damage models in large deformation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX085/document.

Full text
Abstract:
Gradient damage models, also known as phase-field models, are now widely used to model brittle and ductile fracture, from the onset of damage to the propagation of a crack in various materials. Yet they have mainly been studied in the small-deformation framework, and very few studies aim at proving their relevance in a finite-deformation setting. Such an extension would be of particular interest to the tyre industry, which deals with very large deformations and needs to predict more precisely the initiation of damage in its structures. The first part of this work focuses on analytical solutions to one-dimensional problems of damaging viscous materials in small and large deformation: in small deformation, the Maxwell and Poynting-Thomson rheological models are studied, while in large deformation the Maxwell and Zener models are chosen; a purely hyperelastic case is also treated. In every case the evolution of damage is studied, both homogeneous and localised, which provides a suitable basis for implementing these models and validating the numerical results. A numerical part follows, detailing the implementation of these non-local models in finite-element codes in large deformation. As in small deformation, an alternate minimisation (staggered) strategy is used to solve the displacement and damage problems successively. When solved on the reference configuration, the damage problem is the same as in small deformation and consists of a bound-constrained minimisation. The displacement problem is nonlinear; the material follows a quasi-incompressible Mooney-Rivlin law and a mixed displacement-pressure finite-element method is used. Tests in 2D and 3D show that gradient damage models are able to initiate damage in sound, quasi-incompressible structures in large deformation. In these simulations, the damage laws combined with the hyperelastic potential initiate damage in zones of high deformation, that is, in zones of high deviatoric stress. However, some quasi-incompressible polymer materials are known to damage in zones of high hydrostatic pressure. The third part therefore develops and studies a damage law such that the material is incompressible when undamaged and the pressure enters the damage criterion. Finally, the last part studies the cavitation phenomenon, understood as the sudden growth of a cavity in a hyperelastic material, and its interaction with damage. Cavitation is first treated as a purely hyperelastic bifurcation of a compressible isotropic neo-Hookean material submitted to a radial displacement, for which the analytical expression of the critical elongation is derived. We then show that there is a competition between cavitation and damage and that, depending on the ratio of the critical elongations of the two phenomena, two different rupture patterns appear.
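The staggered (alternate minimisation) strategy mentioned above can be illustrated with a deliberately simplified sketch: a 1D, small-strain, AT1-type gradient damage bar solved with generic SciPy optimisers. This only shows the algorithmic idea (solve for displacement at fixed damage, then for damage at fixed displacement under an irreversibility bound, and iterate); the thesis itself works in large deformation with a quasi-incompressible Mooney-Rivlin law and a mixed displacement-pressure finite-element formulation, none of which appears here, and all parameter values below are arbitrary.

import numpy as np
from scipy.optimize import minimize

# 1D bar under imposed end displacement, small strain, AT1-type gradient damage.
n, L = 61, 1.0
x = np.linspace(0.0, L, n); h = x[1] - x[0]
E, Gc, ell = 1.0, 1.0, 0.1
w1 = 3.0 * Gc / (8.0 * ell)                       # AT1 normalisation constant

def energy(u, alpha):
    eps = np.diff(u) / h                          # element strains
    a_mid = 0.5 * (alpha[:-1] + alpha[1:])        # damage at element midpoints
    elastic = 0.5 * E * np.sum((1.0 - a_mid) ** 2 * eps ** 2) * h
    dissip = w1 * np.sum(a_mid) * h + w1 * ell ** 2 * np.sum(np.diff(alpha) ** 2 / h)
    return elastic + dissip

def solve_u(alpha, u_end):
    # displacement step: minimise the energy at fixed damage (Dirichlet ends)
    def f(interior):
        u = np.concatenate(([0.0], interior, [u_end]))
        return energy(u, alpha)
    res = minimize(f, np.linspace(0.0, u_end, n)[1:-1], method="L-BFGS-B")
    return np.concatenate(([0.0], res.x, [u_end]))

def solve_alpha(u, alpha_lb):
    # damage step: bound-constrained minimisation enforcing irreversibility
    res = minimize(lambda a: energy(u, a), alpha_lb, method="L-BFGS-B",
                   bounds=[(lb, 1.0) for lb in alpha_lb])
    return res.x

alpha = np.zeros(n)
for u_end in np.linspace(0.0, 4.0, 9):            # quasi-static loading steps
    for _ in range(30):                           # alternate minimisation loop
        u = solve_u(alpha, u_end)
        alpha_new = solve_alpha(u, alpha)
        if np.max(np.abs(alpha_new - alpha)) < 1e-3:
            alpha = alpha_new
            break
        alpha = alpha_new
    print(f"u_end = {u_end:.2f}  max damage = {alpha.max():.3f}")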
APA, Harvard, Vancouver, ISO, and other styles
2

Pulido, Nelson. "Constraints for Dynamic Models of the Rupture from Kinematic Source Inversion." 京都大学 (Kyoto University), 2000. http://hdl.handle.net/2433/181128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cirella, Antonella <1977&gt. "Joint inversion of GPS and strong motion data for earthquake rupture models." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/865/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hamesse, Charles. "Simultaneous Measurement Imputation and Rehabilitation Outcome Prediction for Achilles Tendon Rupture." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231485.

Full text
Abstract:
Achilles Tendon Rupture (ATR) is one of the typical soft-tissue injuries. Rehabilitation after such musculoskeletal injuries remains a prolonged process with a highly variable outcome. Being able to predict the rehabilitation outcome accurately is crucial for treatment decision support. In this work, we design a probabilistic model to predict the rehabilitation outcome for ATR using a clinical cohort with numerous missing entries. Our model is trained end-to-end in order to simultaneously predict the missing entries and the rehabilitation outcome. We evaluate our model and compare it with multiple baselines, including multi-stage methods. Experimental results demonstrate the superiority of our model over these baseline multi-stage approaches with various data imputation methods for ATR rehabilitation outcome prediction.
APA, Harvard, Vancouver, ISO, and other styles
5

Azizipesteh, Baglo Hamid Reza. "Effect of various mix parameters on the true tensile strength of concrete." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12560.

Full text
Abstract:
The primary aim of this research was to develop a method for determining the true uniaxial tensile strength of concrete by conducting a series of cylinder splitting, modulus of rupture (MOR) and cylinder/cube compression tests. The main objectives were:
• Critically reviewing previously published research in order to identify gaps in current knowledge and understanding, including theoretical and methodological contributions to the true uniaxial tensile strength of concrete. To maintain consistency and increase the reliability of the proposed methods, it is essential to review the literature and provide additional data points that add depth, breadth and rigour to Senussi's investigation (2004).
• Designing self-compacting concrete (SCC), normal-strength concrete (NSC) and high-strength concrete (HSC) mixes and undertaking laboratory-based experimental work for mixing, casting, curing and testing of specimens in order to establish new empirical evidence and data.
• Analysing the data, presenting the results, and investigating the application of the validity methods stated by Lin and Raoof (1999) and Senussi (2004).
• Drawing conclusions, including comparison with previous research and literature, the proposal of new correction factors, and recommendations for future research.
29 batches of NSC, 137 batches of HSC, 44 batches of fly-ash SCC and 47 batches of GGBS SCC were cast and their hardened and fresh properties were measured. Hardened properties measured included cylinder splitting strength, MOR, cylinder compressive strength and cube compressive strength. A variety of rheological tests were also applied to characterise the fresh properties of the SCC mixes, including slump flow, T50, L-box, V-funnel, J-ring and sieve stability. Cylinders were also visually checked after splitting for segregation. The tensile strength of concrete has traditionally been expressed in terms of its compressive strength (e.g. ft = c·√fc). On this premise, extensive laboratory testing was conducted to evaluate the tensile strength of the concretes, including the direct tension test and the indirect cylinder splitting and MOR tests. These tests, however, do not provide sufficiently accurate results for the true uniaxial tensile strength, because the results depend on the test method used. This shortcoming has been overcome by the methods reported by Lin and Raoof (1999) and Senussi (2004), who proposed simple correction factors to be applied to the cylinder splitting and MOR test results, with the final outcome providing practically reasonable estimates of the true uniaxial tensile strength of concrete over a wide range of compressive strengths (12.57 ≤ fc ≤ 93.82 MPa) and aggregate types. The current investigation covered a wide range of ages at testing, from 3 to 91 days. Test data from other sources were also used for ages up to 365 days, with the reported results relating to a variety of mix designs. NSC, SCC and HSC data from the current investigation showed an encouraging correlation with previously reported results, providing additional, wider and deeper empirical evidence for the validity of the recommended correction factors. The results also demonstrated that the type (size, texture and strength) of aggregate has a negligible effect on the recommended correction factors.
The concrete age at testing was shown to have a potentially significant effect on the recommended correction factors. Altering the cement type can also have a significant effect on the measured hardened properties and produced practically noticeable variations in the recommended correction factors. The correction factors proved valid with respect to the effects of incorporating various blended cements in the HSC and SCC. The NSC, HSC and SCC showed an encouraging correlation with previously reported results, providing additional support, depth, breadth and rigour for the validity of the recommended correction factors.
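As an illustration of how such strength conversions and correction factors are used in practice, the short sketch below applies simple conversion functions to indirect test results. The numerical factor values are placeholders chosen only for the example; they are not the correction factors derived by Lin and Raoof (1999) or Senussi (2004), which depend on compressive strength, age and test type.

# Illustrative sketch of applying correction factors to indirect tensile test results.
# The factor values below are placeholders, NOT the ones derived in the thesis.
import math

def true_tensile_from_splitting(f_split_mpa: float, k_split: float = 0.85) -> float:
    """Estimate true uniaxial tensile strength from a cylinder-splitting result."""
    return k_split * f_split_mpa

def true_tensile_from_mor(f_mor_mpa: float, k_mor: float = 0.60) -> float:
    """Estimate true uniaxial tensile strength from a modulus-of-rupture (flexural) result."""
    return k_mor * f_mor_mpa

def tensile_from_compressive(f_c_mpa: float, c: float = 0.56) -> float:
    """Traditional empirical form ft = c * sqrt(fc); c depends on the code or data set used."""
    return c * math.sqrt(f_c_mpa)

if __name__ == "__main__":
    print(true_tensile_from_splitting(3.5), true_tensile_from_mor(4.8),
          round(tensile_from_compressive(40.0), 2))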
APA, Harvard, Vancouver, ISO, and other styles
6

Mikami, Naoya. "Source Processes and Dynamic Rupture Models of Three Inland Earthquakes in the Northwestern Chubu District, Central Honshu Japan." 京都大学 (Kyoto University), 1992. http://hdl.handle.net/2433/168831.

Full text
Abstract:
The full-text data was converted to PDF from image files produced through the National Diet Library's FY2010 digitization of doctoral theses.
Kyoto University (京都大学)
Doctor of Science, thesis-based doctorate under the new system (新制・論文博士)
Degree report nos. 乙第7854号 / 論理博第1177号; call no. 新制||理||784 (University Library); UT51-92-K354
Examining committee: Prof. Kazuo Oike (chair), Prof. Masataka Ando, Prof. Kojiro Irikura
Conferred under Article 4, Paragraph 2 of the Degree Regulations
APA, Harvard, Vancouver, ISO, and other styles
7

Ragon, Théa. "Études des incertitudes dans l’imagerie de la rupture sismique." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4023.

Full text
Abstract:
How can we study earthquakes, these complex phenomena occurring so deep under our feet that we cannot observe them directly? We have to rely on measurements acquired at the surface of the Earth. These observations are incomplete, and the imagery of earthquakes is subject to biases induced by numerous approximations. Most of these approximations cannot be avoided; they stem from the limited resolution of the measurements, our inherently incomplete knowledge of the physics of the Earth's interior, and the bias introduced by our modeling procedures. The imperfections of our models question our ability to robustly investigate the earthquake rupture, and thus to understand the physics driving it. The quest for robust images requires a thorough and exhaustive examination of the uncertainties that potentially corrupt the modeling procedure and its results, at the very least so as not to interpret improbable features. Although measurement errors are usually accounted for, other kinds of approximations are overlooked. Here, we show that the impact of our simplified description of the Earth's interior on earthquake models is significant, especially for events of large magnitude. We concentrate on two main sources of approximation: the architecture of seismogenic faults, and the temporal complexity of the seismic and aseismic processes at play on these faults. We present two methodological developments that allow us to estimate, and account for, the uncertainties deriving from these approximations in modeling procedures. In particular, we show that introducing the uncertainties deriving from our approximation of the Earth's physics is necessary to infer robust and realistic earthquake source models. Our analysis is supported by the use of probabilistic modeling approaches, which allow us to explore the diversity and uncertainties of possible models. We show that a new generation of source models is possible, more realistic than current ones because they reflect our imperfect knowledge of the Earth system.
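As a generic illustration of the idea of accounting for modeling (prediction) uncertainty alongside measurement errors, the sketch below adds an assumed prediction-error covariance to the data covariance in a linearised Gaussian slip inversion. It is not the thesis's actual formulation; the forward operator, covariances and dimensions are arbitrary.

# Generic sketch: linear(ised) slip inversion with an extra "prediction error"
# covariance added to the data covariance, so that uncertainty in the assumed
# Earth structure or fault geometry is not ignored. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_params = 40, 6
G = rng.normal(size=(n_data, n_params))        # forward operator (Green's functions)
m_true = rng.normal(size=n_params)
d = G @ m_true + 0.05 * rng.normal(size=n_data)

C_d = 0.05**2 * np.eye(n_data)                 # measurement-error covariance
C_p = 0.10**2 * np.eye(n_data)                 # assumed prediction-error covariance
C_chi = C_d + C_p                              # combined misfit covariance
C_m0 = 1.0 * np.eye(n_params)                  # Gaussian prior on the slip parameters

# Gaussian posterior covariance and mean
C_m = np.linalg.inv(G.T @ np.linalg.inv(C_chi) @ G + np.linalg.inv(C_m0))
m_post = C_m @ G.T @ np.linalg.inv(C_chi) @ d
print(np.round(m_post - m_true, 2))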
APA, Harvard, Vancouver, ISO, and other styles
8

May, David. "The TLC Method for Modeling Creep Deformation and Rupture." Honors in the Major Thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1620.

Full text
Abstract:
This thesis describes a novel method, termed the Tangent-Line-Chord (TLC) method, that can be used to model more efficiently creep deformation dominated by the tertiary regime. Creep is a widespread mechanical mode of failure found in high-stress, high-temperature mechanical systems. To accurately simulate creep and its effect on structures, researchers use finite element analysis (FEA). General-purpose FEA packages require extensive amounts of time and computer resources to simulate creep softening in components because of the large deformation rates that continuously evolve. The goal of this research is to employ multi-regime creep models, such as the Kachanov-Rabotnov model, to determine a set of equations that allow creep to be simulated in as few iterations as possible. The key outcome is the freeing up of computational resources and the saving of time. Because both the number of equations and the values of the material constants within the model change depending on the approach used, programming software is used to automate this analytical process. The materials considered in this research are mainly generic Ni-based superalloys, as they exhibit creep responses dominated by secondary and tertiary creep.
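For context, the Kachanov-Rabotnov coupled creep-damage model mentioned above can be written as a pair of ordinary differential equations and integrated until rupture. The sketch below does this with placeholder constants (not calibrated for any Ni-based superalloy) and does not implement the TLC method itself.

# Minimal sketch of the Kachanov-Rabotnov coupled creep-damage equations,
#   d(eps)/dt   = A * (sigma / (1 - omega))**n
#   d(omega)/dt = M * sigma**chi / (1 - omega)**phi
# integrated to rupture (omega -> 1). Constants are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

A, n = 1.0e-20, 6.0
M, chi, phi = 5.0e-19, 5.5, 6.0
sigma = 300.0                                   # applied stress [MPa], held constant

def rhs(t, y):
    eps, omega = y
    om = min(omega, 0.999)                      # guard against division blow-up
    deps = A * (sigma / (1.0 - om)) ** n
    domega = M * sigma ** chi / (1.0 - om) ** phi
    return [deps, domega]

def ruptured(t, y):                             # stop when damage approaches unity
    return y[1] - 0.99
ruptured.terminal = True

sol = solve_ivp(rhs, (0.0, 1.0e6), [0.0, 0.0], events=ruptured, max_step=100.0)
print(f"rupture time ~ {sol.t[-1]:.0f} h, creep strain ~ {sol.y[0, -1]:.3f}")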
B.S.M.E.
Bachelors
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
9

Di, Donfrancesco Fabrizio. "Reduced Order Models for the Navier-Stokes equations for aeroelasticity." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS603.

Full text
Abstract:
The numerical prediction of aeroelastic system responses becomes unaffordable when parametric analyses with high-fidelity CFD are required. Reduced-order modeling (ROM) methods have therefore been developed with a view to reducing the cost of the numerical simulations while preserving a high level of accuracy. The present thesis focuses on the family of projection-based methods for the compressible Navier-Stokes equations involving deforming meshes in the case of aeroelastic applications. A vector basis obtained by Proper Orthogonal Decomposition (POD), combined with a Galerkin projection of the system equations, is used to build a ROM for fluid mechanics. Masked projection approaches are implemented and assessed for different test cases with fixed boundaries in order to provide a fully nonlinear formulation of the projection-based ROM. The ROM is then adapted to deforming boundaries and aeroelastic applications, and evaluated on parametric studies in that context. Finally, a Reduced Order Time Spectral Method (ROTSM) is formulated in order to address the stability issues that affect projection-based ROMs in fluid mechanics applications.
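The POD-Galerkin idea underlying this work can be illustrated in a few lines: extract a reduced basis from snapshots via the SVD and project the full-order operator onto it. The sketch below uses a stand-in linear operator and random snapshots; the thesis targets the nonlinear compressible Navier-Stokes equations on deforming meshes, which additionally requires masked projection / hyper-reduction not shown here.

# Generic POD-Galerkin sketch: reduced basis from snapshots, Galerkin-projected
# operator, and time marching of the reduced model. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap, r = 500, 60, 8
A = -np.diag(np.linspace(1.0, 10.0, n_dof))        # stand-in full-order operator
S = rng.normal(size=(n_dof, n_snap))               # stand-in snapshot matrix

U, svals, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                                     # POD basis: first r left singular vectors

A_r = Phi.T @ A @ Phi                              # Galerkin-projected reduced operator
q = Phi.T @ S[:, 0]                                # reduced initial condition

dt = 1.0e-3
for _ in range(1000):                              # explicit Euler on the reduced model
    q = q + dt * (A_r @ q)
u_approx = Phi @ q                                 # lift back to the full space
print("reduced dimension:", r, "| reconstructed state norm:", round(np.linalg.norm(u_approx), 3))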
APA, Harvard, Vancouver, ISO, and other styles
10

Hanada, Raíza Tamae Sarkis. "A noisy-channel based model to recognize words in eye typing systems." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-07112018-105429/.

Full text
Abstract:
An important issue with eye-based typing is the correct identification of both when the user selects a key and which key is selected. Traditional solutions are based on a predefined gaze fixation time, known as dwell-time methods. In an attempt to improve accuracy, long dwell times are adopted, which in turn lead to fatigue and longer response times. These problems motivate the proposal of methods free of dwell time, or with very short ones, which rely on more robust recognition techniques to reduce the uncertainty about the user's actions. These techniques are especially important when users have disabilities that affect their eye movements or use inexpensive eye trackers. One approach to the recognition problem is to treat it as a spelling correction task. A usual strategy for spelling correction is to model the problem as the transmission of a word through a noisy channel, such that it is necessary to determine which known word of a lexicon corresponds to the received string. A feasible application of this method requires reducing the set of candidate words by choosing only those that can be transformed into the input by applying up to k character edit operations. This idea works well in traditional typing because the number of errors per word is very small. However, this is not the case for eye-based typing systems, which are much noisier. In such a scenario, spelling correction strategies do not scale well, as their cost grows exponentially with k and the lexicon size. Moreover, the error distribution in eye typing is different, with many more insertion errors due to specific sources of noise such as the eye-tracker device, particular user behaviors, and intrinsic characteristics of eye movements. The lack of a large corpus of errors also makes it hard to adopt probabilistic approaches based on information extracted from real-world data. To address these problems, we propose an effective recognition approach that combines estimates extracted from general error corpora with domain-specific knowledge about eye-based input. The technique is able to compute edit distances effectively by using a Mor-Fraenkel index, searchable through minimal perfect hashing. The method allows the early processing of the most promising candidates, so that fast pruned searches present negligible loss in word-ranking quality. We also propose a linear heuristic for estimating edit-based distances that takes advantage of information already provided by the index. Finally, we extend our recognition model to include the variability of eye movements as a source of errors, provide a comprehensive study of the importance of the noise model when combined with a language model, and determine how it affects user behavior while typing. As a result, we obtain a method that is very effective at recognizing words and fast enough to be used in real eye-typing systems. In a transcription experiment with 8 users, participants achieved 17.46 words per minute using the proposed model, a gain of 11.3% over a state-of-the-art eye-typing system. The method was particularly useful in noisier situations, such as first-use sessions. Despite significant gains in typing speed and word-recognition ability, we were not able to find statistically significant differences in the participants' perception of their experience with the two methods. This indicates that an improved suggestion ranking may not be clearly perceptible to users even when it enhances their performance.
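The noisy-channel ranking idea can be illustrated with a toy sketch: candidate words within k edit operations of the observed string are scored by combining a crude channel model with a unigram language model. The lexicon, probabilities and per-edit penalty below are invented for the example; the actual system relies on a Mor-Fraenkel index with minimal perfect hashing and an eye-typing-specific noise model, none of which is reproduced here.

# Toy noisy-channel word recognition: P(word | observed) ∝ P(observed | word) * P(word),
# with the channel term approximated by an edit-distance penalty. Illustrative only.
import math

LEXICON = {"hello": 0.004, "help": 0.003, "hell": 0.001, "held": 0.0008}

def edit_distance(a: str, b: str) -> int:
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def score(observed: str, word: str, p_word: float, per_edit_logp: float = -2.0) -> float:
    # log P(observed | word) approximated as (number of edits) * per-edit penalty
    return edit_distance(observed, word) * per_edit_logp + math.log(p_word)

def recognize(observed: str, k: int = 3):
    cands = [(w, score(observed, w, p)) for w, p in LEXICON.items()
             if edit_distance(observed, w) <= k]
    return sorted(cands, key=lambda t: t[1], reverse=True)

print(recognize("hellp"))   # noisy input with an extra character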
APA, Harvard, Vancouver, ISO, and other styles
11

Mohamed, Ahmed Tohami Abdelhay. "The rupture in state-society relationships and the prominence of youth activism in Egypt : opportunities, strategies and new models of mobilization." Thesis, Durham University, 2013. http://etheses.dur.ac.uk/8502/.

Full text
Abstract:
This thesis examines the development of youth activism in Egypt as a key social actor during the latter years of Mubarak's presidency (from 2000) and leading into the tumultuous events of the Revolution in January 2011. The assessment draws on social movement theory (SMT) to provide an analytical framework, specifically the political process model. It first offers an analytical narrative of the political structures which have developed within Egypt in the modern era and which have provided the structural context within which such movements have emerged and developed, notably cycles of contentious politics. The narrative identifies the impact of early Nasserist hegemony, the subsequent embedding of corporatist structures for socio-political organisation, and the inhibiting effects these had on the development of autonomous social movements until the contemporary period. Youth and student movements remained key political actors during specific historical periods, but even these were severely constrained after 1979. This provides the structural scene-setting for the in-depth study of contemporary youth activism. In attempting to explain the contemporary re-emergence of youth activism during the January Revolution, the thesis proceeds to examine the political opportunities which were presented to social movements in general, and youth activism specifically, during the era of Mubarak's rule, with an emphasis on the period from 2000 to 2010. Developments in Egypt are analysed through the ordering devices offered by SMT, including the progressive rupturing of the state-society relationship, the high level of grievances among the population, the level of institutional access, and divisions among the ruling elite. The thesis adds an additional category – the role played by transnational and external factors – which emerged as an important influence in the preceding narrative of Egyptian political development but which has traditionally been neglected by SMT. The thesis further uses the analytical tools of SMT to examine two particular forms of youth mobilisation: the student movements and the April 6th movement. Successive chapters examine the strategic choices, organisation, framings and mobilisation strategies of these movements, drawing heavily on intensive semi-structured and unstructured interviewing and data collection, both in person and through the formats and devices of the movements themselves (such as Facebook, Twitter and movement websites). The thesis demonstrates that these youth movements are better understood as New Social Movements (NSMs) rather than conventional social movements. They have developed through horizontal networking rather than vertical and hierarchical organisations, and they have drawn substantially on the political opportunities offered by transnational and external factors. In both respects they have made good use of new information and communication technologies, specifically the Internet, which create communicative linkages but do not offer a clear route to the next stage of formal political organisation (explaining in part the limitations of these movements). Finally, they demonstrate the importance of generational politics in Egypt, whose grievances lie at the core of the rupture between state and society.
APA, Harvard, Vancouver, ISO, and other styles
12

Huang, Xueying. "In Vivo MRI-based three-dimensional fluid-structure interaction models and mechanical image analysis for human carotid atherosclerotic plaques." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050409-100213/.

Full text
Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.
Keywords: atherosclerotic plaque; fluid-structure Interaction models; MRI-based; rupture; plaque vulnerability assessment. Includes bibliographical references (leaves 116-127).
APA, Harvard, Vancouver, ISO, and other styles
13

Tanne, Erwan. "Variational phase-field models from brittle to ductile fracture : nucleation and propagation." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX088/document.

Full text
Abstract:
Phase-field models of fracture, sometimes referred to as gradient damage models, are widely used for the numerical simulation of crack propagation in brittle materials. Theoretical results and numerical evidence show that they can predict the propagation of a pre-existing crack according to Griffith's criterion. For a one-dimensional problem, it has been shown that they can predict nucleation at a critical stress, provided that the regularization parameter is identified with the material's internal characteristic length. In this work, we draw on numerical simulations to study crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from the one predicted by a strength criterion to the one given by a toughness criterion as the severity of the stress concentration or singularity varies. We present verification and validation of the numerical simulations for both types of geometry, for several materials. We then consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. In a second part, the model is extended to hydraulic fracturing. We first verify the model by stimulating a single pre-existing fracture in a large domain through the injection of a controlled amount of fluid. We then study an infinite network of pressurized parallel cracks; the results show that only one fracture propagates and that this configuration is the better energy minimizer compared with the multi-fracking case. The last example focuses on fracture stability regimes, using linear elastic fracture mechanics, for pressure-driven fractures in an experimental geometry used in the petroleum industry: the burst experiment, performed in the laboratory to replicate the confinement conditions encountered downhole. The last part of this work focuses on ductile fracture by coupling phase-field models with perfect plasticity. Based on the variational structure of the problem, we describe the numerical implementation of the coupled model for parallel computing. Simulation results for mildly notched specimens are in agreement with the phenomenology of ductile fracture, such as the nucleation and propagation behaviour commonly reported in the literature.
APA, Harvard, Vancouver, ISO, and other styles
14

Poloni, Alexandre. "Étude de la sensibilité à la fragilisation par l’hydrogène de deux alliages de titane, le T40 et le TA6V ELI, sous polarisation cathodique en eau de mer par une approche locale de la rupture." Thesis, La Rochelle, 2020. http://www.theses.fr/2020LAROS003.

Full text
Abstract:
This study aims at understanding the mechanisms of hydrogen absorption in titanium under cathodic polarization in seawater, and at assessing the associated risks in order to provide recommendations for engineers. The single-phase (α) T40 and two-phase (α/β) TA6V ELI alloys were chosen to study the influence of each phase, and of their distribution, on hydrogen embrittlement. Hydrogen absorption kinetics were studied for several cathodic potentials in different electrolytes, and particularly in artificial seawater. These kinetics are very similar for the two alloys, even though the mechanisms involved and the hydrogen concentrations reached differ. The work then investigates how the metallographic structure of the alloys controls the location of hydrides, which in turn governs the evolution of the mechanical properties, studied through tensile tests for various loading orientations, hydrogen contents and notch radii. Finite element modeling of each test gives access to local damage and fracture criteria expressed in terms of hydrostatic stress versus equivalent plastic strain. Together, these results allowed us to propose charts linking the evolution of the mechanical properties to the hydrogen concentration. Finally, a comparison of the laboratory results with galvanic coupling tests using sacrificial anodes in natural seawater allowed us to validate, within the conditions of this study, a maximum usable galvanic coupling potential of -1.1 V/SCE in seawater.
APA, Harvard, Vancouver, ISO, and other styles
15

Antoinat, Léonard. "Contribution à la caractérisation de la déformation et de la rupture dynamique de structures sous impact : Modélisations et approche expérimentale." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0037/document.

Full text
Abstract:
The objective of this work is to propose modeling and experimental approaches for the impact of deformable and non-deformable structures on different media. Several analytical models and numerical simulations are developed and compared with experimental results. A first part is devoted to characterizing the similitude between the response of a solid impacting water and the response of a solid impacting a deformable structure. Finite element (FE, Coupled Eulerian-Lagrangian) and SPH simulations of the water impact of a deformable cylindrical tube (without rupture) are performed. An analytical water-impact model is proposed to predict the evolution of the force (peak value and duration). The analysis of the results makes it possible to design a solid impact programmer reproducing this force peak. FE simulations of longitudinal impact on cylindrical tubes with an adapted geometry are performed and compared with a few experiments; the dynamic buckling of the tubes under impact (due to the inelastic behavior of the material and to strain waves) is observed. A second part deals with the perforation of thin plates at low impact velocities (< 10 m/s, strain rate < 1000 s-1). Tests on an instrumented drop tower (force, displacement, plate shape, crack propagation) are analyzed. Shell FE simulations with a ductile damage rupture criterion are performed; the dynamic rupture parameters are identified by an inverse method using Charpy impact tests on 2024 T3 aluminum alloy. An analysis of the force peaks during impact leads to a better understanding of the perforation mechanisms. In parallel, a new analytical model based on the energies involved during impact is proposed and compared with the FE simulations. The numerical perforation study is finally extended to high impact velocities and strain rates (100-1000 m/s, strain rate < 100 000 s-1) in order to identify the transitions between the known perforation mechanisms (petalling, petal fragmentation, complete fragmentation of the plate).
APA, Harvard, Vancouver, ISO, and other styles
16

Renou, Julien. "Observations and modeling of the seismic rupture development based on the analysis of source time functions." Thesis, Université de Paris (2019-....), 2020. https://theses.md.univ-paris-diderot.fr/RENOU_Julien_va2.pdf.

Full text
Abstract:
Our knowledge of earthquake source physics, which gives rise to events of very different magnitudes, requires observations of a large population of earthquakes. Systematic analysis tools for global seismicity meet this need and allow us to extract generic properties of earthquakes, which can then be integrated into models of the rupture process. Following this approach, the SCARDEC method retrieves the source time functions of events over a large range of magnitudes (Mw > 5.7). The source time function, which describes the temporal evolution of the moment rate, is well suited to the analysis of transient rupture properties that provide insight into the generation of earthquakes of various sizes. The purpose of our study is to observe the rupture development of such earthquakes in order to better constrain kinematic and dynamic source models. The first part of our work focuses on the development of earthquakes through the analysis of the SCARDEC catalog. The phase leading to the peak of the source time function (the "development phase") is extracted to characterize its evolution. From the computation of moment accelerations at prescribed moment rates, we observe that the evolution of the moment rate during the development phase is independent of the final magnitude. A quantitative analysis of the moment-rate increase as a function of time further indicates that this phase does not follow the steady t² self-similar growth, suggesting a transient variation of rupture velocity and/or stress drop. In a second part, these observations are compared with kinematic source models. A crack model with radial variations of the rupture velocity, combined with a low stress drop, highlights that correlation between rupture velocity and slip velocity is a key ingredient for the transient behavior of the development phase observed previously. We then generate synthetic catalogs of source time functions with the composite fractal RIK model; these also support that the correlation between rupture velocity and slip velocity, as well as the rise-time duration, have a strong effect on moment-acceleration values. We finally develop heterogeneous dynamic models that take rupture physics into account. Heterogeneous distributions of the friction parameter Dc and of the initial stress τ0 on the fault help generate highly realistic rupture scenarios. Rupture propagation is strongly influenced by these two dynamic parameters, which induce a clear preferential direction of propagation together with a more local variability of the rupture velocity. The correlation between rupture velocity and slip velocity highlighted by the kinematic models is recovered and allows the SCARDEC observations to be reproduced. These findings are expected to put further constraints on future realistic dynamic rupture scenarios.
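The measurement described above (moment acceleration at prescribed moment rates during the development phase) can be sketched for a synthetic source time function as follows; the study itself applies this kind of measurement to the SCARDEC catalogue of real STFs, and the functional form and values below are arbitrary.

# Sketch: moment acceleration when a source time function first reaches prescribed
# moment-rate levels during its growth phase. Synthetic STF, illustrative only.
import numpy as np

dt = 0.05
t = np.arange(0.0, 30.0, dt)
stf = 1.0e18 * (t / 8.0)**2 * np.exp(-t / 8.0)       # synthetic moment rate [N.m/s]

def moment_acceleration_at(stf, dt, target_rate):
    """Finite-difference moment acceleration where the STF first reaches target_rate,
    restricted to the growth phase before the peak (monotonic here)."""
    peak = np.argmax(stf)
    idx = np.searchsorted(stf[:peak + 1], target_rate)
    if idx == 0 or idx > peak:
        return None
    return (stf[idx] - stf[idx - 1]) / dt

for rate in (1.0e17, 2.0e17, 4.0e17):
    acc = moment_acceleration_at(stf, dt, rate)
    print(f"moment rate {rate:.1e} N.m/s -> moment acceleration {acc:.2e} N.m/s^2")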
APA, Harvard, Vancouver, ISO, and other styles
17

Mefti, Nacim. "Mise en oeuvre d'un modèle mécanique de l'adhésion cellulaire : approche stochastique." Thesis, Vandoeuvre-les-Nancy, INPL, 2006. http://www.theses.fr/2006INPL099N/document.

Full text
Abstract:
Cell adhesion is an important phenomenon in biology, especially in immune defence and tissue growth. The aim of this work is to develop a mechanical model describing cell adhesion phenomena at several scales. The first, microscopic, scale describes the kinetics of molecular bond formation and rupture during rolling. At the mesoscopic scale, we model the active deformation of the cell during motility. At the macroscopic scale, we model the time evolution of the adhesion of a cell population under the action of the fluid. Numerical simulations highlight the rolling phenomenon and the active deformation of a cell.
APA, Harvard, Vancouver, ISO, and other styles
18

Savio, Daniele. "Nanoscale phenomena in lubrication : From atomistic simulations to their integration into continuous models." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00961197.

Full text
Abstract:
Modern trends in lubrication aim at reducing the oil quantity in tribological applications. As a consequence, the film thickness in the contact zone decreases significantly and can reach the order of magnitude of a few nanometres, so that surface separation is ensured by very few lubricant molecules. Atomistic simulations based on the Molecular Dynamics method are used to analyze the local behavior of these severely confined films. Particular attention is paid to the occurrence of wall slip: predictive models and analytical laws are formulated to quantify and predict this phenomenon as a function of the surface-lubricant pair and of the local operating conditions in a contact interface. The coupling between Molecular Dynamics simulations and macroscopic models is then explored: the classical lubrication theory is modified to include the slip effects characterized previously. This approach is employed to study an entire contact featuring a nano-confined lubricant in its center, showing a severe modification of the film thickness and friction. Finally, the reduction of the lubricant quantity is pushed to its limits, up to the occurrence of local film breakdown and direct surface contact. In this scenario, atomistic simulations make it possible to understand the relationship between the configuration of the last fluid molecules in the contact and the local tribological behavior.
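A minimal illustration of how a wall-slip law feeds back into a continuum description is given below for plane Couette flow with a Navier slip length at both walls; the parameter values are arbitrary and this is not the thesis's modified lubrication (Reynolds) model.

# Plane Couette flow of a Newtonian film with symmetric Navier slip (slip length b):
# the wall shear stress becomes mu*U/(h + 2b), i.e. slip reduces friction relative
# to the no-slip case. Purely illustrative; parameters are arbitrary.
import numpy as np

mu = 1.0e-3        # viscosity [Pa.s]
h = 5.0e-9         # film thickness [m]
U = 1.0            # sliding speed of the top wall [m/s]

def couette_shear_stress(b: float) -> float:
    """Wall shear stress for the slip condition u_slip = b * du/dy at both walls."""
    return mu * U / (h + 2.0 * b)

for b in (0.0, 1.0e-9, 5.0e-9):
    print(f"slip length {b:.1e} m -> shear stress {couette_shear_stress(b):.3e} Pa")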
APA, Harvard, Vancouver, ISO, and other styles
19

Goda, Ibrahim. "Micromechanical models of network materials presenting internal length scales : applications to trabecular bone under stable and evolutive conditions." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0055/document.

Full text
Abstract:
Des méthodes micromécaniques spécifiques ont été développées pour la détermination du comportement effectif de matériaux cellulaires dotés d’une architecture discrète à l’échelle microscopique. La méthode d’homogénéisation discrète a été appliquée à des structures tissées monocouches ainsi qu’à l’os trabéculaire. La topologie discrète initiale de ces milieux est remplacée à l’échelle mésoscopique par un milieu effectif anisotrope micropolaire, qui rend compte des effets d’échelles observés. Ces méthodes d’homogénéisation permettent d’accéder à des propriétés classiques et non classiques dont la mesure expérimentale est souvent difficile. Des modèles 3D ont été développé afin de décrire la rupture fragile et ductile de l’os trabéculaire, incorporant des effets de taille des surfaces d’écoulement plastique. Nous avons construit par des analyses éléments finis de la microstructure de l’os trabéculaire un milieu de substitution 3D homogène, orthotrope de type couple de contraintes, sur la base d’une équivalence en énergie. Les tissus osseux ont la capacité d’adapter leur densité locale et leur taille et forme aux stimuli mécaniques. Nous avons développé des modèles de remodelage interne et externe dans le cadre de la thermodynamique des processus irréversibles, aux échelles cellulaire et macroscopique. Finalement, le remodelage interne anisotrope a été couplé à l’endommagement de fatigue, dans le cadre de la théorie continue de l’endommagement
A methodology based on micromechanics has been developed to determine the effective behavior of network materials endowed with a discrete architecture at the microscopic level. It relies on the discrete homogenization method, which has been applied to textile monolayers and trabecular bone. The initially discrete topology of the considered network materials results, after homogenization at the mesoscopic level, in an anisotropic micropolar effective continuum, which proves able to capture the observed internal scale effects. Such micromechanical methods are useful to remedy the difficulty of measuring the effective mechanical properties at the intermediate mesoscopic scale. The bending and torsion responses of vertebral trabecular bone beam specimens are formulated in both static and dynamic situations, based on the Cosserat theory. 3D models have been developed for describing the multiaxial yield and brittle fracture behavior of trabecular bone, including the analysis of size-dependent non-classical plastic yield. We have constructed by FE analyses a homogeneous, orthotropic couple-stress continuum model as a substitute for the 3D periodic heterogeneous cellular solid model of vertebral trabecular bone, based on the equivalent strain energy approach. Bone tissues are able to adapt their local density and load-bearing capacities, as well as their size and shape, to mechanical stimuli. We have developed models for combined internal and external bone remodeling in the framework of the thermodynamics of irreversible processes, at both the cellular and macroscopic levels. We lastly combined anisotropic internal remodeling with fatigue continuum damage
APA, Harvard, Vancouver, ISO, and other styles
20

Nguyen, Quoc Lan. "Instabilités liées au frottement des solides élastiques : modélisation de l'initiation des séismes." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE10016.

Full text
Abstract:
This work, which originates from the study of earthquake initiation, comprises three parts. A presentation of the various mechanical models of friction is given in the first part. The second part is devoted to the mathematical analysis of dynamic slip-dependent friction problems in elasticity and viscoelasticity. In the elastic case with imposed normal stress, existence of the solution is obtained for the plane and anti-plane problems. The study of uniqueness identifies the framework in which the problem is well posed. In the viscoelastic case, existence and uniqueness of the solution are established. Moreover, the solution is shown to converge to the solution of the elastic problem as the viscous effects decrease. In the third part, a numerical approach to the initiation of an unstable slip evolution on a finite fault, based on spectral analysis, is presented. A universal stability constant is computed, from which one can judge whether or not a catastrophic evolution occurs. The dominant part of the solution, which expresses the essence of the evolution and operates on a time scale given by the spectrum, is highlighted. Numerical simulations show the characteristics of slip during the initiation period, whose duration can vary strongly, from a few seconds to a few hours.
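The flavour of such an initiation analysis can be conveyed by the simplest slip-weakening configuration: a single elastic block of mass m held by a spring of stiffness k, sliding against a friction force that weakens linearly with slip at a rate W. Linearizing about equilibrium gives m u'' = (W - k) u, so a slip perturbation grows exponentially when W > k, on a time scale 1/lambda with lambda = sqrt((W - k)/m). The sketch below is a hypothetical single-degree-of-freedom illustration only, not the spectral analysis of a finite fault developed in the thesis; all numerical values are assumed.

```python
import numpy as np

m, k, W = 1.0, 1.0, 4.0        # mass, spring stiffness, friction weakening rate (assumed)
u0 = 1e-6                      # small initial slip perturbation

if W <= k:
    print("W <= k: the equilibrium is stable, no unstable slip initiation.")
else:
    lam = np.sqrt((W - k) / m)                 # exponential growth rate of the perturbation
    print(f"growth rate = {lam:.3f}  (initiation time scale ~ {1.0 / lam:.3f})")
    for ti in np.linspace(0.0, 8.0, 9):
        # solution of m u'' = (W - k) u with u(0) = u0, u'(0) = 0
        print(f"t = {ti:4.1f}   slip perturbation = {u0 * np.cosh(lam * ti):.3e}")
```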
APA, Harvard, Vancouver, ISO, and other styles
21

Schwaller, Loïc. "Exact Bayesian Inference in Graphical Models : Tree-structured Network Inference and Segmentation." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS210/document.

Full text
Abstract:
Cette thèse porte sur l'inférence de réseaux. Le cadre statistique naturel à ce genre de problèmes est celui des modèles graphiques, dans lesquels les relations de dépendance et d'indépendance conditionnelles vérifiées par une distribution multivariée sont représentées à l'aide d'un graphe. Il s'agit alors d'apprendre la structure du modèle à partir d'observations portant sur les sommets. Nous considérons le problème d'un point de vue bayésien. Nous avons également décidé de nous concentrer sur un sous-ensemble de graphes permettant d'effectuer l'inférence de manière exacte et efficace, à savoir celui des arbres couvrants. Il est en effet possible d'intégrer une fonction définie sur les arbres couvrants en un temps cubique par rapport au nombre de variables à la condition que cette fonction factorise selon les arêtes, et ce malgré le cardinal super-exponentiel de cet ensemble. En choisissant les distributions a priori sur la structure et les paramètres du modèle de manière appropriée, il est possible de tirer parti de ce résultat pour l'inférence de modèles graphiques arborescents. Nous proposons un cadre formel complet pour cette approche.Nous nous intéressons également au cas où les observations sont organisées en série temporelle. En faisant l'hypothèse que la structure du modèle graphique latent subit un certain nombre de brusques changements, le but est alors de retrouver le nombre et la position de ces points de rupture. Il s'agit donc d'un problème de segmentation. Sous certaines hypothèses de factorisation, l'exploration exhaustive de l'ensemble des segmentations est permise et, combinée aux résultats sur les arbres couvrants, permet d'obtenir, entre autres, la distribution a posteriori des points de ruptures en un temps polynomial à la fois par rapport au nombre de variables et à la longueur de la série
In this dissertation we investigate the problem of network inference. The statistical framework tailored to this task is that of graphical models, in which the (in)dependence relationships satisfied by a multivariate distribution are represented through a graph. We consider the problem from a Bayesian perspective and focus on a subset of graphs making structure inference possible in an exact and efficient manner, namely spanning trees. Indeed, the integration of a function defined on spanning trees can be performed with cubic complexity with respect to the number of variables under some factorisation assumption on the edges, in spite of the super-exponential cardinality of this set. A careful choice of prior distributions on both graphs and distribution parameters allows us to use this result for network inference in tree-structured graphical models, for which we provide a complete and formal framework. We also consider the situation in which observations are organised in a multivariate time series. We assume that the underlying graph describing the dependence structure of the distribution is affected by an unknown number of abrupt changes throughout time. Our goal is then to retrieve the number and locations of these change-points, therefore dealing with a segmentation problem. Using spanning trees and assuming that segments are independent from one another, we show that this can be achieved with polynomial complexity with respect to both the number of variables and the length of the series
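The cubic-time integration over spanning trees mentioned above rests on the weighted Matrix-Tree theorem: if a function factorises over edges, its sum over all spanning trees equals a cofactor (the determinant of a reduced Laplacian) of the edge-weight matrix. The Python sketch below checks this identity on a small graph by brute-force enumeration; it is a generic illustration of the theorem, not the inference code of the thesis.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
p = 5                                    # number of vertices (variables)
W = rng.uniform(0.1, 2.0, size=(p, p))   # positive, symmetric edge weights
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

# Weighted Matrix-Tree theorem: the sum over spanning trees of prod(edge weights)
# equals any cofactor of the Laplacian L = diag(row sums) - W.
L = np.diag(W.sum(axis=1)) - W
matrix_tree_sum = np.linalg.det(L[1:, 1:])       # delete first row and column

def is_spanning_tree(edge_subset, n_vertices):
    """A set of (n_vertices - 1) edges is a spanning tree iff it contains no cycle."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edge_subset:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False
        parent[ri] = rj
    return True

edges = [(i, j) for i in range(p) for j in range(i + 1, p)]
brute = sum(np.prod([W[i, j] for i, j in subset])
            for subset in itertools.combinations(edges, p - 1)
            if is_spanning_tree(subset, p))

print(f"Matrix-Tree cofactor : {matrix_tree_sum:.6f}")
print(f"Brute-force sum      : {brute:.6f}")
```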
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Tianyi. "Gradient-damage modeling of dynamic brittle fracture : variational principles and numerical simulations." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX042/document.

Full text
Abstract:
Une bonne tenue mécanique des structures du génie civil en béton armé sous chargements dynamiques sévères est primordiale pour la sécurité et nécessite une évaluation précise de leur comportement en présence de propagation dynamique de fissures. Dans ce travail, on se focalise sur la modélisation constitutive du béton assimilé à un matériau élastique-fragile endommageable. La localisation des déformations sera régie par un modèle d'endommagement à gradient où un champ scalaire réalise une description régularisée des phénomènes de rupture dynamique. La contribution de cette étude est à la fois théorique et numérique. On propose une formulation variationnelle des modèles d'endommagement à gradient en dynamique. Une définition rigoureuse de plusieurs taux de restitution d'énergie dans le modèle d'endommagement est donnée et on démontre que la propagation dynamique de fissures est régie par un critère de Griffith généralisé. On décrit ensuite une implémentation numérique efficace basée sur une discrétisation par éléments finis standards en espace et la méthode de Newmark en temps dans un cadre de calcul parallèle. Les résultats de simulation de plusieurs problèmes modèles sont discutés d'un point de vue numérique et physique. Les lois constitutives d'endommagement et les formulations d'asymétrie en traction et compression sont comparées par rapport à leur aptitude à modéliser la rupture fragile. Les propriétés spécifiques du modèle d'endommagement à gradient en dynamique sont analysées pour différentes phases de l'évolution de fissures : nucléation, initiation, propagation, arrêt, branchement et bifurcation. Des comparaisons avec les résultats expérimentaux sont aussi réalisées afin de valider le modèle et proposer des axes d'amélioration
In civil engineering, the mechanical integrity of reinforced concrete structures under severe transient dynamic loading conditions is of paramount importance for safety and calls for an accurate assessment of structural behavior in the presence of dynamic crack propagation. In this work, we focus on the constitutive modeling of concrete regarded as an elastic-damage brittle material. The strain localization evolution is governed by a gradient-damage approach where a scalar field achieves a smeared description of dynamic fracture phenomena. The contribution of the present work is both theoretical and numerical. We propose a variationally consistent formulation of dynamic gradient damage models. A formal definition of several energy release rate concepts in the gradient damage model is given and we show that the dynamic crack tip equation of motion is governed by a generalized Griffith criterion. We then give an efficient numerical implementation of the model based on a standard finite-element spatial discretization and the Newmark time-stepping methods in a parallel computing framework. Simulation results of several problems are discussed both from a computational and a physical point of view. Different damage constitutive laws and tension-compression asymmetry formulations are compared with respect to their aptitude to approximate brittle fracture. Specific properties of the dynamic gradient damage model are investigated for different phases of the crack evolution: nucleation, initiation, propagation, arrest, kinking and branching. Comparisons with experimental results are also performed in order to validate the model and indicate directions for its further improvement
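For the time discretization mentioned above, a standard Newmark scheme advances the semi-discrete elastodynamic system M a + K u = f. Below is a minimal, self-contained sketch of the implicit average-acceleration variant (beta = 1/4, gamma = 1/2) applied to a small linear system; it only illustrates the time-stepping ingredient, not the coupled damage/displacement solver of the thesis, and the toy matrices are assumed values.

```python
import numpy as np

def newmark(M, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark time integration for M a + K u = f(t) (no damping)."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - K @ u)                 # consistent initial acceleration
    K_eff = K + M / (beta * dt**2)
    history = [u.copy()]
    for n in range(1, n_steps + 1):
        rhs = f(n * dt) + M @ ((u + dt * v) / (beta * dt**2)
                               + (1.0 / (2.0 * beta) - 1.0) * a)
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u - dt * v) / (beta * dt**2) - (1.0 / (2.0 * beta) - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Two-mass spring chain (toy data), free vibration from an initial displacement.
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u_hist = newmark(M, K, f=lambda t: np.zeros(2),
                 u0=np.array([0.01, 0.0]), v0=np.zeros(2), dt=0.05, n_steps=200)
print("displacement of mass 1 at the final step:", u_hist[-1, 0])
```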
APA, Harvard, Vancouver, ISO, and other styles
23

Coradi, Audrey. "Modélisation du comportement mécanique des composites a matrice céramique : développement du réseau de fissures." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0267/document.

Full text
Abstract:
Les matériaux composites à matrice céramique (CMC) sont élaborés à partir de constituants fragiles. Le comportement mécanique et le développement de la fissuration dépendent des propriétés des constituants élémentaires des CMC. La connaissance de l’influence de ces propriétés sur l’évolution de la fissuration et du comportement mécanique fournit une aide au concepteur de matériaux composites.L’objectif de ce travail est de modéliser l’évolution du réseau de fissures au sein du CMC sollicité en traction, à l’échelle du fil et à l’échelle du composite tissé. L’approche proposée est une alternative aux principaux modèles de comportement des CMC.A l’échelle du fil, l’endommagement intervient d’abord sous forme de fissures matricielles accompagnées de décohésions à l’interface fibre/matrice. Les analyses de ces deux mécanismes ont permis d’exprimer leur évolution au sein du fil en traction. Le comportement en traction résultant de l’endommagement et l’ouverture de la fissure matricielle sont aussi exprimés semi-analytiquement.Les comparaisons avec un modèle numérique de zones cohésives et avec les essais expérimentaux montrent une bonne corrélation des résultats.Enfin ces expressions à l’échelle du fil sont utilisées pour modéliser l’endommagement du fil longitudinal au sein du composite tissé en traction. De plus, un outil numérique est développé pour modéliser la fissuration matricielle inter-fil dans le composite tissé
Ceramic matrix composite (CMC) materials are made of brittle constituents. Their mechanical behaviour and crack growth depend on the properties of the CMC elementary constituents. Knowing the influence of these properties on crack development and mechanical behaviour provides support to the composite material designer. This work aims at modelling the development of the crack network within a CMC under axial tension, at the yarn scale as well as at the woven composite scale. The proposed approach is an alternative to the main CMC behaviour models. At the yarn scale, matrix cracking with interfacial debonding between fiber and matrix occurs first. Both mechanisms are analysed and their evolution is expressed. The mechanical behaviour resulting from damage and the crack opening displacement are also described using semi-analytical equations. Comparisons with a numerical cohesive zone model and with experimental tests show good correlation between the results. These semi-analytical expressions are then used for modelling damage within each yarn at the woven composite scale. In addition, a numerical tool is developed for matrix cracking and interfacial debonding between yarns of the woven composite
APA, Harvard, Vancouver, ISO, and other styles
24

Jung, Sophie. "Agents infectieux et rupture de tolérance lymphocytaire B : étude des processus de maturation d'affinité et de différenciation plasmocytaire au cours d'une infection bactérienne dans un nouveau modèle knock-in autoréactif." Thesis, Strasbourg, 2013. http://www.theses.fr/2013STRAJ067.

Full text
Abstract:
Les maladies auto-immunes, qui touchent plus de 5% de la population, sont induites par une perte de la tolérance aux antigènes du Soi. Ces pathologies, généralement multifactorielles, résultent de l’effet combiné de plusieurs allèles de susceptibilité et de différents facteurs environnementaux. Les agents infectieux ont été tout particulièrement incriminés, mais les mécanismes en jeu restent encore mal élucidés. Les lymphocytes B, qui jouent un rôle central dans la pathogénie de nombreuses maladies auto-immunes, sont susceptibles d’être activés selon différents mécanismes au cours d’un processus infectieux et cette activation peut englober des cellules autoréactives. On ne sait cependant pas si cette activation peut entraîner la production d’auto-anticorps pathogènes de forte affinité et d’isotype IgG à partir du pool de cellules productrices d’auto-anticorps naturels de faible affinité, qui sont présentes de façon constitutive dans le répertoire B de l’individu sain. Nous avons mis au point un nouveau modèle murin knock-in pour des lymphocytes B présentant une affinité intermédiaire pour leur auto-antigène, la protéine HEL2X mutée (Hen-Egg Lysozyme). Ce modèle autoréactif d’affinité intermédiaire SWHEL X HEL2X, élaboré sur un fond génétique non autoimmun, permet de suivre le processus de maturation d’affinité des cellules B anti-HEL en présence de leur auto-antigène HEL2X au cours de l’infection chronique par la bactérie Borrelia burgdorferi. L’infection induit au niveau ganglionnaire une prolifération ainsi qu’une activation lymphocytaire B incluant des cellules anergiques. Certains clones autoréactifs sont capables de gagner les centres germinatifs ganglionnaires, de commuter vers l’isotype IgG et présentent des mutations somatiques au niveau de la région variable de la chaîne lourde de leur immunoglobuline, dans la zone d’interaction avec HEL2X, indiquant un processus de sélection par l’auto-antigène. Malgré un taux augmenté d’auto-anticorps d’isotype IgM, ces animaux ne produisent pas de plasmocytes capables de sécréter des auto-anticorps d’isotype IgG. Nos observations suggèrent l’existence de mécanismes de tolérance périphérique intrinsèques mis en place en particulier au niveau du centre germinatif. Un premier point de contrôle va éliminer les lymphocytes B autoréactifs ayant commuté de classe et présentant des mutations somatiques leur conférant une affinité augmentée pour l’auto-antigène tandis qu’un second point de contrôle va empêcher la différenciation en plasmocytes IgG+.Chez l’individu non prédisposé génétiquement, des mécanismes pourraient ainsi permettre de prévenir le développement d’une auto-immunité pathogène au cours d’un épisode infectieux
Autoimmune diseases, affecting more than 5% of the population, reflect a loss of tolerance to self-antigens. These multifactorial diseases result from the combined effect of several susceptibility alleles and different environmental factors. Infectious agents have been particularly incriminated, but there is no clear understanding of the underlying mechanisms. B lymphocytes, which appear central to the pathogenesis of several autoimmune diseases, may be activated by several mechanisms during infectious processes, and this activation can encompass autoreactive cells. Whether or not the latter can induce the production of high-affinity pathogenic IgG isotype auto-antibodies from the naturally present low-affinity self-reactive B cells is still unknown. To gain further insight into this question, we created a new intermediate-affinity autoreactive mouse model called SWHEL X HEL2X. In these mice, knock-in B cells express a B cell receptor highly specific for Hen-Egg Lysozyme (HEL) that recognizes the HEL2X mutated auto-antigen with intermediate affinity. This model, generated on a non-autoimmune-prone genetic background, allows the anti-HEL B cell affinity maturation process to be followed in the presence of their auto-antigen during Borrelia burgdorferi chronic bacterial infection. The infection leads to lymph node lymphoproliferation and B cell activation, including anergic cells. Some autoreactive clones are able to form germinal centers, to switch their immunoglobulin heavy chain and to introduce somatic mutations in the heavy chain variable regions on amino acids forming direct contacts with HEL2X, suggesting an auto-antigen-driven selection process. Despite increased levels of IgM autoantibodies, infected mice are unable to generate IgG autoantibody-secreting plasma cells. These observations suggest the existence of intrinsic peripheral tolerance mechanisms operating mainly at the level of germinal centers. A first checkpoint eliminates switched autoreactive B cells carrying affinity-increasing mutations, while a second checkpoint prevents IgG+ plasma-cell differentiation. Thus, in genetically non-predisposed individuals, tolerance mechanisms may be set up to prevent the development of pathogenic autoimmunity during the course of an infection
APA, Harvard, Vancouver, ISO, and other styles
25

Do, Van Long. "Sequential detection and isolation of cyber-physical attacks on SCADA systems." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0032/document.

Full text
Abstract:
Cette thèse s’inscrit dans le cadre du projet « SCALA » financé par l’ANR à travers le programme ANR-11-SECU-0005. Son objectif consiste à surveiller des systèmes de contrôle et d’acquisition de données (SCADA) contre des attaques cyber-physiques. Il s'agit de résoudre un problème de détection-localisation séquentielle de signaux transitoires dans des systèmes stochastiques et dynamiques en présence d'états inconnus et de bruits aléatoires. La solution proposée s'appuie sur une approche par redondance analytique composée de deux étapes : la génération de résidus, puis leur évaluation. Les résidus sont générés de deux façons distinctes, avec le filtre de Kalman ou par projection sur l’espace de parité. Ils sont ensuite évalués par des méthodes d’analyse séquentielle de rupture selon de nouveaux critères d’optimalité adaptés à la surveillance des systèmes à sécurité critique. Il s'agit donc de minimiser la pire probabilité de détection manquée sous la contrainte de niveaux acceptables pour la pire probabilité de fausse alarme et la pire probabilité de fausse localisation. Pour la tâche de détection, le problème d’optimisation est résolu dans deux cas : les paramètres du signal transitoire sont complètement connus ou seulement partiellement connus. Les propriétés statistiques des tests sous-optimaux obtenus sont analysées. Des résultats préliminaires pour la tâche de localisation sont également proposés. Les algorithmes développés sont appliqués à la détection et à la localisation d'actes malveillants dans un réseau d’eau potable
This PhD thesis is registered in the framework of the project "SCALA", which received financial support through the program ANR-11-SECU-0005. Its ultimate objective involves the on-line monitoring of Supervisory Control And Data Acquisition (SCADA) systems against cyber-physical attacks. The problem is formulated as the sequential detection and isolation of transient signals in stochastic-dynamical systems in the presence of unknown system states and random noises. It is solved by using the analytical redundancy approach consisting of two steps: residual generation and residual evaluation. The residuals are first generated by both the Kalman filter and the parity space approach. They are then evaluated using sequential analysis techniques taking into account certain criteria of optimality. However, these classical criteria are not adequate for the surveillance of safety-critical infrastructures. For such applications, it is suggested to minimize the worst-case probability of missed detection subject to acceptable levels of the worst-case probabilities of false alarm and false isolation. For the detection task, the optimization problem is formulated and solved in both scenarios: exactly and partially known parameters. Sub-optimal tests are obtained and their statistical properties are investigated. Preliminary results for the isolation task are also obtained. The proposed algorithms are applied to the detection and isolation of malicious attacks on a simple SCADA water network
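The two-step residual generation / residual evaluation chain can be illustrated on a scalar toy system: a Kalman filter produces innovation residuals, which are then monitored with a two-sided CUSUM statistic. The sketch below is a simplified, hypothetical example (scalar state, known noise variances, a step change injected halfway through); it implements neither the worst-case transient-change tests nor the SCADA application developed in the thesis, and all numerical values are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar state-space model: x_{k+1} = a x_k + w_k,  y_k = x_k + v_k  (toy model)
a, q, r = 0.95, 0.01, 0.1
n, change_at, bias = 400, 200, 0.8             # an additive sensor fault appears at k = 200

x, x_hat, P = 0.0, 0.0, 1.0
s_pos = s_neg = 0.0                             # two-sided CUSUM statistics
k_drift, h_threshold = 0.25, 8.0                # CUSUM drift and alarm threshold (assumed)

for k in range(n):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r)) + (bias if k >= change_at else 0.0)

    # Kalman filter: predict then update; the innovation is the monitored residual.
    x_pred, P_pred = a * x_hat, a * P * a + q
    S = P_pred + r                              # innovation variance
    innov = y - x_pred
    gain = P_pred / S
    x_hat, P = x_pred + gain * innov, (1.0 - gain) * P_pred

    z = innov / np.sqrt(S)                      # approximately N(0,1) before the fault
    s_pos = max(0.0, s_pos + z - k_drift)
    s_neg = max(0.0, s_neg - z - k_drift)
    if max(s_pos, s_neg) > h_threshold:
        print(f"alarm raised at sample {k} (true change at {change_at})")
        break
```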
APA, Harvard, Vancouver, ISO, and other styles
26

Shen, Yang. "Comportement et endommagement des alliages d’aluminium 6061-T6 : approche micromécanique." Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0089/document.

Full text
Abstract:
L'alliage d'aluminium 6061-T6 a été retenu pour la fabrication du caisson-coeur du futur réacteur expérimental Jules Horowitz (RJH). L'objectif de cette thèse est de comprendre et modéliser le comportement et l'endommagement de cet alliage en traction et en ténacité, ainsi que l'origine de l'anisotropie d'endommagement. Il s'agit de faire le lien entre la microstructure et l'endommagement du matériau à l'aide d'une approche micromécanique. Pour ce faire, la microstructure de l'alliage, la structure granulaire et es précipités grossiers ont été caractérisés en utilisant des analyses surfaciques (Microscopie Électronique à Balayage) et volumiques (tomographie/laminographie X). Le mécanisme d'endommagement a été identifié par des essais de traction sous MEB in-situ, des essais de tomographie X ex-situ et des essais de laminographie X in-situ pour différents taux de triaxialité. Ces observations ont notamment permis de montrer que la germination des cavités sur les précipités grossiers de type Mg2Si est plus précoce que sur les intermétalliques au fer. Le scénario identifié et les grandeurs mesurées ont ensuite permis de développer un modèle d'endommagement couplé, basé sur l'approche locale de la rupture, de type GTN intégrant la germination, la croissance et la coalescence des cavités. Le lien entre l'anisotropie d'endommagement et de forme/répartition des précipités a pu être montré. Cette anisotropie microstructurale modifie les mécanismes : Pour une sollicitation dans le sens long l'endommagement est majoritairement intergranulaire alors que dans le sens travers on observe un endommagement mixte intergranulaire et intragranulaire. La prise en compte des mesures de l'endommagement dans la simulation a permis d'expliquer l'anisotropie d'endommagement. Ce travail servira de référence pour les études futures qui seront menées sur le matériau irradié
The AA6061-T6 aluminum alloy was chosen as the material for the core vessel of the future Jules Horowitz testing reactor (JHR). The objective of this thesis is to understand and model the tensile and fracture behavior of the material, as well as the origin of damage anisotropy. A micromechanical approach was used to link the microstructure and the mechanical behavior. The microstructure of the alloy was characterized on the surface via Scanning Electron Microscopy and in the 3D volume via synchrotron X-ray tomography and laminography. The damage mechanism was identified by in-situ SEM tensile testing, ex-situ X-ray tomography and in-situ laminography at different levels of triaxiality. The observations showed that damage nucleates at lower strains on Mg2Si coarse precipitates than on iron-rich intermetallics. The identified scenario and the in-situ measurements were then used to develop a coupled GTN damage model incorporating nucleation, growth and coalescence of the cavities formed at coarse precipitates. The relationship between the damage and microstructure anisotropies was explained and simulated
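The GTN model mentioned above is built around the classical Gurson-Tvergaard-Needleman yield surface. The snippet below simply evaluates that yield function for a given stress state and porosity; the parameter values are illustrative and the coalescence function f* is taken in its usual piecewise-linear form, which may differ from the exact calibration used in the thesis.

```python
import numpy as np

def gtn_yield(sig_eq, sig_m, f, sig_y, q1=1.5, q2=1.0, fc=0.05, ff=0.25):
    """Gurson-Tvergaard-Needleman yield function.

    sig_eq : von Mises equivalent stress
    sig_m  : mean (hydrostatic) stress
    f      : void volume fraction (porosity)
    sig_y  : current flow stress of the matrix
    Returns Phi; Phi < 0 means elastic, Phi = 0 on the yield surface.
    """
    # Effective porosity f* accounting for the accelerated loss of stress-carrying
    # capacity after coalescence (standard piecewise-linear form, assumed here).
    fu = 1.0 / q1
    f_star = f if f <= fc else fc + (fu - fc) * (f - fc) / (ff - fc)
    return ((sig_eq / sig_y) ** 2
            + 2.0 * q1 * f_star * np.cosh(1.5 * q2 * sig_m / sig_y)
            - 1.0 - (q1 * f_star) ** 2)

# Illustrative check: higher stress triaxiality brings the state closer to yield
# for the same equivalent stress and porosity.
print(gtn_yield(sig_eq=280.0, sig_m=100.0, f=0.01, sig_y=300.0))
print(gtn_yield(sig_eq=280.0, sig_m=300.0, f=0.01, sig_y=300.0))
```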
APA, Harvard, Vancouver, ISO, and other styles
27

Monfret, Tony. "Utilisation de la composante verticale du mode fondamental des ondes de rayleigh du manteau : etude de la source sismique et modelisation tri-dimensionnelle de la terre." Paris 7, 1988. http://www.theses.fr/1988PA077123.

Full text
Abstract:
The method developed makes it possible, while estimating the source duration of the rupture process, to constrain the centroid depth and to determine the seismic moment tensor. When the source can no longer be treated as a point, a proposed method allows the parameters associated with the propagation of the rupture front to be measured. Finally, a three-dimensional Earth model, which differs little from current models, is proposed
APA, Harvard, Vancouver, ISO, and other styles
28

Karavelić, Emir. "Stochastic Galerkin finite element method in application to identification problems for failure models parameters in heterogeneous materials." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2501.

Full text
Abstract:
Cette thèse traite de rupture localisée de structures construites en matériau composite hétérogène, comme le béton, à deux échelles différentes. Ces deux échelles sont connectées par le biais de la mise à l'échelle stochastique, où toute information obtenue à l'échelle méso est utilisée comme connaissance préalable à l'échelle macro. À l'échelle méso, le modèle de réseau est utilisé pour représenter la structure multiphasique du béton, à savoir le ciment et les granulats. L'élément de poutre représenté par une poutre Timoshenko 3D intégrée avec de fortes discontinuités assure un maillage complet indépendance de la propagation des fissures. La géométrie de la taille des agrégats est prise en accord avec la courbe EMPA et Fuller tandis que la distribution de Poisson est utilisée pour la distribution spatiale. Les propriétés des matériaux de chaque phase sont obtenues avec une distribution gaussienne qui prend en compte la zone de transition d'interface (ITZ) par l'affaiblissement du béton. À l'échelle macro, un modèle de plasticité multisurface est choisi qui prend en compte à la fois la contribution d'un écrouissage sous contrainte avec une règle d'écoulement non associative ainsi que des composants d'un modèle d'adoucissement de déformation pour un ensemble complet de différents modes de défaillance 3D. Le modèle de plasticité est représenté par le critère de rendement Drucker-Prager, avec une fonction potentielle plastique similaire régissant le comportement de durcissement tandis que le comportement de ramollissement des contraintes est représenté par le critère de St. Venant. La procédure d'identification du modèle macro-échelle est réalisée de manière séquentielle. En raison du fait que tous les ingrédients du modèle à l'échelle macro ont une interprétation physique, nous avons fait l'étalonnage des paramètres du matériau en fonction de l'étape particulière. Cette approche est utilisée pour la réduction du modèle du modèle méso-échelle au modèle macro-échelle où toutes les échelles sont considérées comme incertaines et un calcul de probabilité est effectué. Lorsque nous modélisons un matériau homogène, chaque paramètre inconnu du modèle réduit est modélisé comme une variable aléatoire tandis que pour un matériau hétérogène, ces paramètres de matériau sont décrits comme des champs aléatoires. Afin de faire des discrétisations appropriées, nous choisissons le raffinement du maillage de méthode p sur le domaine de probabilité et la méthode h sur le domaine spatial. Les sorties du modèle avancé sont construites en utilisant la méthode de Galerkin stochastique fournissant des sorties plus rapidement le modèle avancé complet. La procédure probabiliste d'identification est réalisée avec deux méthodes différentes basées sur le théorème de Bayes qui permet d'incorporer de nouvelles bservations générées dans un programme de chargement particulier. La première méthode Markov Chain Monte Carlo (MCMC) est identifiée comme mettant à jour la mesure, tandis que la deuxième méthode Polynomial Chaos Kalman Filter (PceKF) met à jour la fonction mesurable. Les aspects de mise en œuvre des modèles présentés sont donnés en détail ainsi que leur validation à travers les exemples numériques par rapport aux résultats expérimentaux ou par rapport aux références disponibles dans la littérature
This thesis deals with localized failure of structures built of heterogeneous composite material, such as concrete, at two different scales. These two scales are later connected through stochastic upscaling, where any information obtained at the meso-scale is used as prior knowledge at the macro-scale. At the meso-scale, a lattice model is used to represent the multi-phase structure of concrete, namely cement and aggregates. The beam element, represented by a 3D Timoshenko beam embedded with strong discontinuities, ensures complete mesh independency of crack propagation. The aggregate size distribution is taken in agreement with the EMPA and Fuller curves, while a Poisson distribution is used for the spatial distribution. The material properties of each phase are obtained with a Gaussian distribution which takes into account the Interface Transition Zone (ITZ) through the weakening of concrete. At the macro-scale, a multisurface plasticity model is chosen that takes into account both the contribution of strain hardening with a non-associative flow rule and strain softening model components for a full set of different 3D failure modes. The plasticity model is represented with the Drucker-Prager yield criterion, with a similar plastic potential function governing the hardening behavior, while the strain softening behavior is represented with the St. Venant criterion. The identification procedure for the macro-scale model is performed sequentially. Because all ingredients of the macro-scale model have a physical interpretation, the material parameters are calibrated at the stage to which they are relevant. This approach is later used for model reduction from the meso-scale model to the macro-scale model, where all scales are considered as uncertain and a probability computation is performed. When modeling a homogeneous material, each unknown parameter of the reduced model is modeled as a random variable, while for a heterogeneous material these material parameters are described as random fields. In order to make appropriate discretizations, we choose p-method mesh refinement over the probability domain and h-method refinement over the spatial domain. The forward model outputs are constructed by using the Stochastic Galerkin method, providing outputs more quickly than the full forward model. The probabilistic identification procedure is performed with two different methods based on Bayes' theorem that allow incorporating new observations generated in a particular loading program. The first method, Markov Chain Monte Carlo (MCMC), is identified as updating the measure, whereas the second method, the Polynomial Chaos Kalman Filter (PceKF), updates the measurable function. The implementation aspects of the presented models are given in full detail, as well as their validation through numerical examples against experimental results or against benchmarks available from the literature
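The Bayesian updating step can be illustrated with a plain random-walk Metropolis sampler identifying a single material parameter (here a yield stress) from noisy pseudo-measurements of a 1D elastic-perfectly-plastic response. This is a deliberately small sketch of the MCMC idea only: it assumes a Gaussian likelihood and prior, uses a toy forward model, and does not involve the stochastic Galerkin surrogate or the PceKF update used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(sig_y, strains, E=200e3):
    """1D elastic-perfectly-plastic stress response (MPa), toy forward model."""
    return np.minimum(E * strains, sig_y)

# Synthetic "measurements" generated with a true yield stress of 250 MPa.
strains = np.linspace(0.0, 0.004, 20)
sigma_noise = 5.0
data = model(250.0, strains) + rng.normal(0.0, sigma_noise, strains.size)

def log_post(sig_y):
    if sig_y <= 0.0:
        return -np.inf
    log_lik = -0.5 * np.sum((data - model(sig_y, strains)) ** 2) / sigma_noise**2
    log_prior = -0.5 * ((sig_y - 300.0) / 50.0) ** 2      # N(300, 50^2) prior (assumed)
    return log_lik + log_prior

# Random-walk Metropolis sampler
current, lp_current = 300.0, log_post(300.0)
samples = []
for _ in range(20000):
    proposal = current + rng.normal(0.0, 5.0)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp_current:       # accept/reject step
        current, lp_current = proposal, lp_prop
    samples.append(current)

burned = np.array(samples[5000:])
print(f"posterior mean yield stress: {burned.mean():.1f} MPa  (+/- {burned.std():.1f})")
```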
APA, Harvard, Vancouver, ISO, and other styles
29

Tanays, Eric. "Approche algorithmique des conceptions geometriques et geotechniques de mines a ciel ouvert : application a la mine de carmaux (u.e. tarn, h.b.c.m., cdf)." Paris, ENMP, 1989. http://www.theses.fr/1989ENMP0142.

Full text
Abstract:
Le travail de recherche a pour objectif la realisation du logiciel degres destine a la prevision de la geometrie de la fosse d'une mine a ciel ouvert et la detection des risques de rupture. Il a ete mis en oeuvre sur le site de la "grande decouverte" de carmaux (tarn - france)
APA, Harvard, Vancouver, ISO, and other styles
30

El-Hassan, Assoum Nada. "Modélisation théorique et numérique de la localisation de la déformation dans les géomatériaux." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997GRE10112.

Full text
Abstract:
A theoretical and numerical study of strain localization in shear bands in geomaterials is presented. The study comprises two parts. The first part concerns the localization phenomenon at its onset, using the CLoE model. A controllability criterion in plane strain has been established for the CLoE model. A parametric sensitivity study of the bifurcation criterion of CLoE-rock was carried out for the Beaucaire marl. A numerical large-strain modelling of the excavation of a borehole drilled in a layer of stiff marls (Hydrobia marl) was performed with the CLoE law, including the identification of the CLoE parameters for this marl from elementary laboratory tests. The bifurcation criterion showed that localization is triggered at the wall, symmetrically or not depending on the isotropy of the far field, and earlier in the anisotropic case. The second part concerns the post-localization regime. A theoretical study of one-dimensional second-gradient continua is carried out. The analytical solution of the constitutive equations, associated with a one-dimensional model developed here, is presented in the small-strain framework. A numerical analysis of the equilibrium equations allowed the determination of a consistent stiffness matrix as well as the expression of the out-of-balance forces. A one-dimensional finite element code with conforming finite elements was developed and validated. The introduction of the second gradient does not appear sufficient to ensure uniqueness of the solutions. An important result is the independence of the results with respect to the mesh. A theoretical study and a numerical analysis similar to those of the one-dimensional case are then carried out for two-dimensional second-gradient continua in the large-strain framework.
APA, Harvard, Vancouver, ISO, and other styles
31

Barbato, Kelly Biancardini Gomes. "Efeitos do uso de antiinflamatório e do exercício aeróbico sobre a regeneração tecidual e perfil biomecânico do tendão calcâneo de ratos após ruptura completa." Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=3845.

Full text
Abstract:
Ruptura do tendão calcâneo é uma das lesões tendíneas mais frequentes. Embora a maioria dos trabalhos sugira que o exercício seja benéfico na cicatrização tendínea, não há consenso sobre o efeito do antiinflamatório neste contexto. Trabalhos experimentais tentam reproduzir lesão aguda deste tendão, em diferentes espécies animais. Neste estudo, descrevemos uma técnica de tenotomia completa do tendão calcâneo direito em ratos e, em seguida, avaliamos os efeitos do uso do antiinflamatório e do exercício aeróbico, isoladamente e em combinação, sobre a proliferação celular e o perfil biomecânico do tendão calcâneo, durante o processo de cicatrização após tenotomia. Estudo experimental com 156 ratos machos adultos, da raça Wistar, com idade média de 3 meses e peso médio de 300g. Após anestesia com tiopental e com auxílio da microscopia de luz, foi realizada incisão longitudinal posterior de cinco milímetros, em direção proximal, a partir da tuberosidade posterior do calcâneo da pata direita do rato. Foi feito corte transversal do tendão calcâneo, a sete milímetros da tuberosidade do calcâneo, com preservação do tendão plantar. Utilizamos as técnicas de Hematoxilina e Eosina, Picrosirius-red e Resorcina-fucsina de Weigert para avaliação da cicatrização tendínea e das fibras dos sistemas colágeno e elástico. Após a tenotomia, metade dos animais receberam tenoxicam intramuscular por 7 dias e no 8o dia iniciou-se protocolo de exercício em esteira na metade de cada grupo. Os ratos foram divididos aleatoriamente em 4 grupos de tratamento: A sem antiinflamatório E sem exercício (controle); B com antiinflamatório E com exercício; C sem antiinflamatório E com exercício; D com antiinflamatório E sem exercício. Os animais foram eutanasiados com 1, 2, 4 e 8 semanas após a tenotomia, para avaliação histológica pelo PCNA, e biomecânica através do teste de resistência à tração e da medida do ciclo locomotor. Foram realizados análise de variância, teste de Kruskal-Wallis e o método de Bonferroni, no programa R Project, versão 2.11.1. O tempo cirúrgico médio foi de 1 minuto e 24 segundos, sem complicações observadas até a 8a semana pós-operatória. Observamos proliferação celular e fibrilogênese com duas semanas, e diminuição da celularidade e das fibras elásticas na 8a semana, além de mudanças na organização estrutural do sistema colágeno. Encontramos pico da imunomarcação com PCNA na 2a semana em todos os grupos, exceto no grupo A, cujo pico aconteceu com 1 semana da tenotomia. Evidenciamos resistência à tração significativamente maior (p=0,02) nos ratos submetidos ao exercício, 8 semanas após ruptura. Nos grupos com antiinflamatório, observamos um ciclo locomotor mais estável durante todo o tempo avaliado. Consideramos a técnica cirúrgica experimental de tenotomia completa do tendão calcâneo, realizada com auxílio da microscopia de luz e preservação do tendão plantar, simples, rápida, com sinais de cicatrização tendínea normal e de fácil reprodução em ratos. O exercício aeróbico, iniciado precocemente após tenotomia completa do tendão calcâneo, é significativamente benéfico na sua recuperação biomecânica e o uso combinado com antiinflamatório confere maior estabilidade na marcha, o que pode proteger contra rerruptura tendínea em ratos
Achilles tendon rupture is one of the most frequent tendon injuries. Although most studies have shown the benefits of exercise on tendon regeneration, controversy still exists concerning the effects of non-steroidal anti-inflammatory drugs (NSAIDs) in this context. Several experimental models have been used for the study of Achilles tendon injury. In this study, we describe the surgical technique of right Achilles tenotomy in rats and subsequently evaluate the effects of an NSAID and aerobic exercise, in isolation and combined, on cell proliferation and biomechanical aspects of the Achilles tendon after tenotomy. Experimental study with 156 male Wistar rats with an average age of 3 months and an average weight of 300 g. Surgical procedures were performed under light microscopy, after anesthesia with thiopental. A five-millimeter posterior longitudinal incision was created, directed proximally, starting five millimeters proximal to the posterior calcaneal tuberosity. A complete tenotomy of the Achilles tendon was performed, seven millimeters away from the calcaneal tuberosity. The plantaris tendon was preserved. We used Hematoxylin and Eosin, Picrosirius red and Weigert's resorcin-fuchsin stains to observe general tendon healing, especially regarding collagen and elastic fibers. After tenotomy, half of the rats received an intramuscular injection of tenoxicam for 7 days, and exercise was initiated on the 8th day for half the animals of each group. Rats were randomly divided into four treatment groups: A) no NSAID and no exercise; B) NSAID plus exercise; C) no NSAID, with exercise; D) NSAID and no exercise. Animals were sacrificed at 1, 2, 4 and 8 weeks after the tenotomy; cell proliferation was evaluated by immunohistochemistry for PCNA, biomechanical evaluation was performed with the ultimate load test, and gait cycle analysis was also carried out. We used analysis of variance, the Kruskal-Wallis test and the Bonferroni method, in the R Project program, version 2.11.1. The mean operative time was one minute and 24 seconds, without complications observed until the 8th postoperative week. Histological studies showed cellular proliferation and fibrillogenesis at two weeks, with decreased cellularity and fewer elastic fibers at the 8th week, besides changes in the structural organization of collagen fibers. The highest intensity of PCNA immunostaining was found at 2 weeks in all groups except for group A (control), which had the highest intensity at 1 week. Animals submitted to exercise had significantly higher (P = 0.02) ultimate loads at 8 weeks after injury. The animals that received the NSAID presented a more stable gait cycle throughout the evaluation period. The surgical technique described for complete Achilles tenotomy, under light microscopy and sparing the plantaris tendon, is simple and quick, shows signs of a normal healing process, and is easily reproducible in rats. Aerobic exercise, initiated early after a complete Achilles tendon tenotomy, was beneficial to the biomechanical properties of the tendon during regeneration, and the combined use of the NSAID improved gait characteristics, which could be protective against reruptures
APA, Harvard, Vancouver, ISO, and other styles
32

Sánchez, Tizapa Sulpicio. "Experimental and numerical study of confined masonry walls under in-plane loads : case : guerrero State (Mexico)." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00537380.

Full text
Abstract:
This research work proposes methods to raise the resistance and to evaluate the behavior of confined masonry walls built from solid clay bricks. These elements are widely used in Guerrero State (Mexico) to build masonry structures, which must resist high lateral loads because of the severe seismic hazard. Therefore, a large experimental program was developed to evaluate the mechanical properties of bricks and masonry currently required in the design process and in masonry analysis. To raise the masonry resistance and to counteract the influence of the compressive strength of the units on the masonry behavior, a high-compressive-strength mortar and a metallic reinforcement placed inside the joints were used. With respect to reference values of the mechanical properties, some were similar and others were twice as large. In this region of the country, the first three tests under lateral load on full-scale confined masonry walls built from solid clay bricks were carried out in order to evaluate their behavior. A reinforcement composed of a metallic hexagonal mesh and a mortar coat was placed on the faces of two walls to raise or to restore their resistance. The walls showed good behavior and the reinforcement had adequate structural efficiency. Numerical models of panels and walls built using the experimental data evaluated the resistance envelope and the failure mode, and showed the influence of the mechanical properties of the units and joints on the global behavior. Two models had metallic reinforcement inside the joints. In addition, a constitutive law of the masonry defined from the experimental results allowed the elaboration of a simple model, whose results were consistent with the experimental results and similar to those calculated by more complex models. Finally, two simplified models were developed to evaluate the resistance of confined masonry walls by considering the failure plane along the wall diagonal. One assumes masonry failure by shear and the other assumes masonry failure by induced tension. The ratio of theoretical to experimental resistance was adequate for walls built from different materials and tested under different loads, with height-to-length ratios ranging from 0.74 to 1.26
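Two classical closed-form checks of the kind alluded to at the end of the abstract, a shear-friction (Mohr-Coulomb type) criterion and a principal (induced) tension criterion, are sketched below for a wall panel under vertical stress. These generic formulas and the coefficient values are given only to illustrate the two failure hypotheses; the exact expressions and calibrations developed in the thesis may differ.

```python
def shear_friction_resistance(c, mu, sigma_v, A):
    """Mohr-Coulomb type shear/sliding criterion along the wall diagonal:
    V = (cohesion + friction coefficient * vertical stress) * cross-sectional area."""
    return (c + mu * sigma_v) * A

def diagonal_tension_resistance(f_t, sigma_v, A, b=1.5):
    """Principal-tension criterion: diagonal cracking occurs when the principal
    tensile stress reaches the masonry tensile strength f_t.
    b is a shear-stress distribution factor (assumed 1.5 for squat walls)."""
    return f_t * A / b * (1.0 + sigma_v / f_t) ** 0.5

# Illustrative panel: 2.4 m long, 0.125 m thick, assumed material data (MPa).
A = 2.4 * 0.125 * 1e6          # cross-sectional area in mm^2
sigma_v = 0.5                  # vertical compressive stress in MPa
print("shear-friction estimate   :",
      shear_friction_resistance(0.3, 0.4, sigma_v, A) / 1e3, "kN")
print("diagonal-tension estimate :",
      diagonal_tension_resistance(0.25, sigma_v, A) / 1e3, "kN")
```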
APA, Harvard, Vancouver, ISO, and other styles
33

Diaz, Pulgar Luis Gerardo. "Lightning induced voltages in cables of power production centers." Thesis, Limoges, 2016. http://www.theses.fr/2016LIMO0093/document.

Full text
Abstract:
Lorsqu’un bâtiment d’un centre de production d’électricité est frappé par la foudre, il se produit une dangereuse circulation de courants dans tous les composants connectés au bâtiment: les murs, le réseau de terre, et les câbles sortant du bâtiment. L’intérêt du présent travail est d’étudier les tensions transitoires aux extrémités de ces câbles, en particulier des câbles contrôle mesure, dans la mesure où ces câbles sont connectés à des équipements électroniques susceptibles d’être endommagés par des perturbations électromagnétiques engendrées par la foudre. Une approche basée sur la résolution numérique des équations de Maxwell via une méthode FDTD est adoptée. Notamment le formalisme de Holland et Simpson est utilisé pour modéliser toutes les structures constituées d’un réseau de fils minces: l’armature métallique du bâtiment, la grille en cuivre du réseau de terre, la galerie de béton et le câble coaxial de contrôle mesure. Une validation des modèles électromagnétiques développés pour chaque composant du site industriel est présentée. Une analyse de sensibilité est conduite pour déterminer l’influence des paramètres du système. En outre, la technique des plans d’expérience est utilisée pour générer un méta-modèle qui prédit la tension maximale induite aux extrémités du câble en fonction des paramètres les plus influents. Cela représentent un outil de calcul précis et informatiquement efficace pour évaluer la performance foudre des câbles de contrôle et de mesure
When lightning strikes a building in a power generation center, dangerous currents propagate through all the components connected to the building structure: the walls, the grounding grid, and the cables leaving the building. The interest of this work is to study the transient voltages at the terminations of these cables external to the building, particularly the Instrumentation and Measure (IM) cables, since they are connected to electronic equipment susceptible to damage or malfunction due to lightning electromagnetic perturbations. A full-wave approach based on the numerical solution of Maxwell's equations through the FDTD algorithm is adopted. Notably, the formalism of Holland and Simpson is used to model all the structures composed of thin wires: the building steel structure, the grounding copper grid, the concrete cable ducts and the coaxial IM cables. A validation of the model developed for each component is presented. A sensitivity analysis is performed in order to determine the main parameters that configure the problem. In addition, the Design of Experiments (DoE) technique is used to generate a meta-model that predicts the peak induced voltages at the cable terminations as a function of the main parameters that configure the industrial site. This represents an accurate and computationally efficient tool to assess the lightning performance of IM cables
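The core of any FDTD solver is the leapfrog update of the Yee scheme. The following minimal one-dimensional sketch (free space, a soft Gaussian source, simple absorbing ends at the so-called magic time step) only shows the structure of those update equations; it is of course far from the 3D thin-wire (Holland-Simpson) model of the thesis, and all numerical values are illustrative.

```python
import numpy as np

c0 = 299_792_458.0
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nx, n_steps = 400, 700
dx = 1e-2                          # 1 cm cells (assumed)
dt = dx / c0                       # "magic" time step of the 1D Yee scheme

Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
ez_left_old, ez_right_old = 0.0, 0.0

for n in range(n_steps):
    # Leapfrog update on the staggered Yee grid
    Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += dt / (eps0 * dx) * (Hy[1:] - Hy[:-1])
    # Soft Gaussian source in the middle of the line
    Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)
    # First-order absorbing ends (exact at the magic time step in 1D)
    Ez[0], Ez[-1] = ez_left_old, ez_right_old
    ez_left_old, ez_right_old = Ez[1], Ez[-2]

print("max |Ez| remaining on the grid after the run:", np.max(np.abs(Ez)))
```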
APA, Harvard, Vancouver, ISO, and other styles
34

Uribe, Suarez Diego Alejandro. "Combinaison d’éléments cohésifs et remaillage pour gérer la propagation arbitraire du chemin de fissure : des matériaux fragiles à l’analyse de fatigue thermique des petits corps du système solaire." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4047.

Full text
Abstract:
La présente thèse de doctorat a pour objectif d’améliorer la modélisation du phénomène de rupture dans les matériaux fragiles. Elle porte une attention particulière aux mécanismes de rupture des objets célestes. L’un des problèmes posant le plus de défis aux scientifiques spécialisés dans l’étude de la mécanique de la rupture est la propagation d’une fissure dans un maillage éléments finis, et ce pour des chemins arbitraires. Dans cette étude, ce problème est abordé en utilisant une technique de remaillage avancée utilisant des éléments finis cohésifs permettant la propagation de fissures suivant des directions arbitraires et indépendantes du maillage. La direction de la fissure est calculée suivant le critère du taux de restitution d’énergie maximal, implémentée à l’aide d’un modèle éléments finis et de la méthode Gθ. Les effets de différents paramètres numériques et physiques relatifs à la fissure ou à l’énergie libérée lors de la rupture sont investigués.Bien que différentes preuves de fissures et/ou fragments à la surface de corps célestes de notre système solaire induits par des variations cycliques de la température ont été détaillées, la compréhension de ces mécanismes de propagation dans des objets célestes reste très parcellaire. La fracturation thermique de roches en surface associée à l’impact de micro-météorites peutéventuellement conduire à la rupture complète de fragments de matière et à la production de régolithes. Cette dernière est définie comme la couche de matériau non consolidée qui recouvre la surface des planètes. Afin de comprendre ces mécanismes, l’étude s’attarde sur un exemple précis, celui de l’astéroïde (101955) Bennu. Pour ce faire, elle utilise un modèle thermoélastique couplé avec un modèle linéaire élastique de mécanique de la rupture permettant de considérer les variations cycliques de température liées aux alternances jour/nuit. En utilisant cette méthodologie, il a été observé que les fissures se propagent préférentiellement dans les directions : Nord vers Sud, Nord-Est vers Sud-Ouest et Nord-Ouest vers Sud-Est. Finalement, une analyse de fatigue est effectuée afin d’estimer la vitesse de croissance de la fissure.Les méthodes détaillées précédemment ont été implémentées dans Cimlib, une librairie C++ dévelopée au CEMEF. Au sein de cette librairie, une méthode permettant la propagation d’une ou plusieurs fissures, suivant des directions arbitraires, en 2D et au sein d’un environnement de calcul en parallèle est à présent disponible. Concernant l’extension de cette méthode à des problèmes 3D, une première approche a été mise au point. Elle permet de propager un front de fissure suivant une direction arbitaire. La structure développée permet d’ouvrir de nouvelles possibilités pour de nombreuses applications, telles que l’étude de la rupture de matériaux composites à l’échelle mesoscopique
The present PhD thesis aims at providing a better modeling of fracture phenomena in brittle materials, with special attention paid to fracture processes taking place in astronomical bodies. One of the most challenging issues in computational fracture mechanics is the propagation of a crack through a finite element mesh along arbitrary crack paths. In this work, this problem is approached by means of an advanced remeshing technique that propagates a crack using cohesive elements through arbitrary (mesh-independent) directions. The crack direction is computed using the maximal energy release rate criterion, which is implemented using finite elements and the Gθ method. The effects of different numerical and physical parameters on the crack path and fracture energy have been investigated. Even though it has been shown that temperature cycles on airless bodies of our Solar System can damage surface materials (thermal cracking), propagation mechanisms in the case of space objects are still poorly understood. Thermal cracking of surface rocks, in addition to the impact of micrometeorites, can eventually lead to rock breakup and produce fresh regolith, the latter being the layer of unconsolidated material that covers planetary surfaces. For this reason, the present work combines a thermoelasticity model with linear elastic fracture mechanics theory to predict fracture propagation in the presence of thermal gradients generated by diurnal temperature cycling, under conditions similar to those existing on asteroid (101955) Bennu. Using the implemented methodologies, it is found that on asteroid Bennu, cracks preferentially propagate in the North to South (N-S), North-East to South-West (NE-SW) and North-West to South-East (NW-SE) directions. Finally, a thermal fatigue analysis was performed in order to estimate the crack growth rate. The aforementioned methodologies have been implemented in Cimlib, a C++ in-house finite element library developed at CEMEF. Inside Cimlib, a methodology allowing two-dimensional crack propagation through arbitrary directions, with the option of handling multiple cracks in the domain and within a parallel environment, was developed. Regarding the three-dimensional scenario, a first approach was achieved in which a crack front was propagated through an arbitrary direction. Concerning the numerical modeling of crack propagation, the developed framework opens new possibilities for various applications, such as composite cracking at the meso-scale
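A thermal-fatigue crack-growth estimate of this kind is typically based on a Paris-type law, da/dN = C (ΔK)^m with ΔK = Y Δσ sqrt(π a). The sketch below integrates such a law cycle by cycle for an edge crack under a constant stress range; the constants C, m, Y and the stress range are placeholders for illustration, not the values calibrated for Bennu in the thesis.

```python
import numpy as np

def cycles_to_grow(a0, af, delta_sigma, C, m, Y=1.12, da_max=1e-5):
    """Integrate Paris' law da/dN = C * (Y * delta_sigma * sqrt(pi * a))**m
    from crack length a0 to af, using small increments in crack length."""
    a, n_cycles = a0, 0.0
    while a < af:
        delta_K = Y * delta_sigma * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
        dadN = C * delta_K**m                            # crack advance per cycle, m/cycle
        da = min(da_max, af - a)
        n_cycles += da / dadN
        a += da
    return n_cycles

# Illustrative numbers only: growth from a 1 mm to a 10 mm crack under a
# 10 MPa thermal stress range, with assumed C and m values.
N = cycles_to_grow(a0=1e-3, af=1e-2, delta_sigma=10.0, C=1e-10, m=3.0)
print(f"estimated number of thermal cycles: {N:.3e}")
```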
APA, Harvard, Vancouver, ISO, and other styles
35

Rukavina, Ivan. "Cyber-physics intrinsic modelling for smart systems." Thesis, Compiègne, 2021. http://bibliotheque.utc.fr/EXPLOITATION/doc/IFD/2021COMP2581.

Full text
Abstract:
In this thesis, a multi-scale and multi-physics coupling computation procedure for 2D and 3D settings is presented. When modeling the behavior of a structure by a multi-scale method, the macro-scale is used to describe the homogenized response of the structure, and the micro-scale to describe the details of the behavior on the smaller scale of the material, where inelastic mechanisms such as damage or plasticity can be taken into account. The micro-scale mesh is defined for each macro-scale element so as to fit entirely inside it. The two scales are coupled by imposing a constraint on the displacement field over their interface. The computation is performed using an operator-split solution procedure on both scales, with the standard finite element method. In the 2D setting, an embedded discontinuity is implemented in the Q4 macro-scale element to capture the softening behavior occurring on the micro-scale, where a constant strain triangle (CST) is used as the micro-scale element. In the 3D setting, macro-scale tetrahedral and hexahedral elements are developed, while Timoshenko beam finite elements are used on the micro-scale. The multi-scale methodology is extended with multi-physics functionality to simulate the behavior of a piezoelectric material: an additional degree of freedom (voltage) is added at the nodes of the 3D macro-scale tetrahedral and hexahedral elements, and a Timoshenko beam element with an added polarization-switching model is used on the micro-scale. A multi-scale Hellinger-Reissner formulation for electrostatics has also been developed and implemented for a simple electrostatic patch test. The proposed procedure is implemented in the Finite Element Analysis Program (FEAP). To simulate the behavior on both the macro- and micro-scale, FEAP is modified and two different versions of the code are obtained, macroFEAP and microFEAP. The two codes exchange information through the Component Template Library (CTL). The capabilities of the proposed multi-scale approach are demonstrated in 2D and 3D pure-mechanics settings as well as in a multi-physics environment. The theoretical formulation and algorithmic implementation are described, and the advantages of the multi-scale approach for modeling heterogeneous materials are illustrated with several numerical examples.
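As a minimal illustration of the operator-split coupling described above, the toy sketch below alternates between a one-dimensional "macro" solve and a "micro" solve that returns a homogenized stiffness. It is not the FEAP/CTL implementation from the thesis; both solvers are scalar stand-ins and all stiffness and load values are invented.

```python
import numpy as np

def macro_solve(k_hom, f_ext):
    """Macro step: solve k_hom * u = f_ext for the interface displacement."""
    return f_ext / k_hom

def micro_solve(u_interface, k_micro=np.array([4.0, 6.0, 12.0])):
    """Micro step: springs in series loaded by the imposed interface displacement;
    return the reaction force and the homogenized (series) stiffness."""
    k_eff = 1.0 / np.sum(1.0 / k_micro)          # homogenized stiffness
    return k_eff * u_interface, k_eff

f_ext = 10.0        # external load on the macro scale (assumed)
k_hom = 1.0         # initial guess for the homogenized stiffness
for it in range(50):
    u = macro_solve(k_hom, f_ext)                # macro scale: interface displacement
    reaction, k_hom_new = micro_solve(u)         # micro scale: reaction + stiffness
    if abs(k_hom_new - k_hom) < 1e-12:           # operator split has converged
        break
    k_hom = k_hom_new                            # hand the update back to the macro scale

print(f"converged in {it} iterations: u = {u:.3f}, reaction = {reaction:.3f}, k_hom = {k_hom:.3f}")
```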
APA, Harvard, Vancouver, ISO, and other styles
36

(12804799), Bruce Arthur Jordan. "Duration of load behaviour of an aligned strand wood composite (ASC)." Thesis, 1994. https://figshare.com/articles/thesis/Duration_of_load_behaviour_of_an_aligned_strand_wood_composite_ASC_/20010698.

Full text
Abstract:

An aligned strand wood composite (ASC) material was subjected to a two-year duration of load (DOL) study. The experimental work was conducted in Mt. Gambier, South Australia, in a purpose-built shed housing 42 back-to-back (double), vertically oriented specimen test rigs, in an atmosphere not controlled for temperature or relative humidity. A total of 244 specimens were tested in two population groups, each group divided into two subgroups: one subgroup for short-term static Modulus of Rupture (MOR) and Modulus of Elasticity (MOE) testing, and the other for long-term bending strength and stiffness under stresses varying from 0.4 MOR to 0.9 MOR.

Creep and creep-rupture observations were taken over a two-year period, providing an extensive database for the evaluation of DOL properties. Monte Carlo simulations were used to give confidence in the process of assigning MOR values to the long-term specimens using a statistical matched-distribution technique for long-term strength evaluation.
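The matched-distribution idea can be sketched as an equal-rank assignment: fit a distribution to the short-term MOR results and assign each long-term specimen the MOR quantile corresponding to its rank on a correlated indicator (MOE is used below). This is only a schematic of such a technique, with synthetic data and a lognormal fit chosen purely for illustration, not the procedure actually used in the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data standing in for the two subgroups (MOR in MPa, MOE in MPa).
mor_short_term = rng.lognormal(mean=np.log(40.0), sigma=0.15, size=61)  # tested to failure
moe_long_term  = rng.normal(loc=10000.0, scale=1200.0, size=61)         # stiffness of long-term specimens

# Fit a lognormal to the short-term MOR results (the distribution choice is an assumption).
shape, loc, scale = stats.lognorm.fit(mor_short_term, floc=0.0)

# Equal-rank (matched-distribution) assignment: the specimen at the p-th
# percentile of MOE is assigned the p-th percentile of the fitted MOR distribution.
ranks = stats.rankdata(moe_long_term)
percentiles = (ranks - 0.5) / len(moe_long_term)
mor_assigned = stats.lognorm.ppf(percentiles, shape, loc=loc, scale=scale)

print(np.round(mor_assigned[:5], 1))   # assigned MOR for the first five long-term specimens
```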

Creep and creep-rupture responses for the ASC were similar in effect to, but fifty percent greater in magnitude than, those predicted for solid seasoned timber by AS1720.1-1988, the Australian Timber Structures Code. Creep design multipliers and long-term strength design multipliers, in both working stress and limit state design formats, were derived for the ASC.

A limiting strain criterion was established for the ASC and was observed to be independent of time under load and applied stress intensity. A failure strain model was presented for the ASC in a form enabling the determination of a limiting deflection for flexural (beam) members at failure.
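For reference, the link between a limiting strain and a limiting deflection takes a simple closed form in the common case of a simply supported beam of span L and depth d under a central point load; the thesis's own failure strain model and test configuration may of course differ.

```latex
% Simply supported beam, span L, depth d, central point load P (assumed configuration):
%   M_{max} = PL/4, \quad \delta = PL^{3}/(48EI), \quad \varepsilon_{max} = M_{max}(d/2)/(EI)
% Eliminating P/(EI) links the limiting strain to a limiting mid-span deflection:
\varepsilon_{\max} = \frac{6\,\delta\,d}{L^{2}}
\qquad\Longrightarrow\qquad
\delta_{\lim} = \frac{\varepsilon_{\lim}\,L^{2}}{6\,d}
```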

APA, Harvard, Vancouver, ISO, and other styles
37

Sypus, Matthew. "Models of tsunamigenic earthquake rupture along the west coast of North America." Thesis, 2019. http://hdl.handle.net/1828/11436.

Full text
Abstract:
The west coast of North America faces the risk of tsunamis generated by seismic rupture in three regions, namely, the Cascadia subduction zone extending from southwestern British Columbia to northern California, the southern Queen Charlotte margin in the Haida Gwaii area, and the Winona Basin just northeast of Vancouver Island. In this thesis, I construct tsunamigenic rupture models with a 3-D elastic half-space dislocation model for these three regions. The tsunami risk is the highest along the Cascadia coast, and many tsunami source models have been developed and used in the past. In an effort to improve the Cascadia tsunami hazard assessment, I use an updated Cascadia fault geometry to create nine tsunami source models that include buried, splay-faulting, and trench-breaching rupture. Incorporated in these scenarios is a newly proposed splay fault based on minor evidence found in seismic reflection images off Vancouver Island. To better understand potential rupture boundaries of the Cascadia megathrust, I also model deformation caused by the 1700 C.E. great Cascadia earthquake to fit updated microfossil-based paleoseismic estimates of coastal subsidence. These estimates validate the well-accepted along-strike heterogeneous rupture of the 1700 earthquake but suggest greater variations in subsidence along the coast. It is recognized that the Winona Basin area just north of the Cascadia subduction zone may have the potential to host a tsunamigenic thrust earthquake, but it has not been formally included in tsunami hazard assessments. There is a high degree of uncertainty in the tectonics of the area, the presence of a subduction “megathrust”, fault geometry, and rupture boundaries. Assuming worst-case scenarios and considering the uncertainties, I construct a fault geometry using seismic images and generate six tsunami sources with buried and trench-breaching rupture in which the downdip rupture extent is varied. The Mw 7.8 2012 Haida Gwaii earthquake and its large tsunami demonstrated the presence of a subduction megathrust and its capacity to host tsunamigenic rupture, but little has been done to include future potential thrust earthquakes in the Haida Gwaii region in tsunami hazard assessment. To fill this knowledge gap, I construct a new megathrust geometry using seismic reflection images and receiver-function results and produce nine tsunami sources for Haida Gwaii, which include buried and trench-breaching ruptures. In the strike direction, the scenarios include long ruptures from midway between Haida Gwaii and Vancouver Island to midway between Haida Gwaii and the southern tip of the Alaskan Panhandle, and shorter rupture scenarios north and south of the main rupture of the 2012 earthquake. For all the tsunami source and paleoseismic scenarios, I also calculate stress drop along the fault. Comparison of the stress drop results with those of real megathrust earthquakes worldwide indicates that these models are mechanically realistic.
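The scenario magnitudes and stress drops discussed above rest on standard seismic-moment bookkeeping, sketched below with placeholder numbers. This is not the 3-D elastic dislocation modeling used in the thesis, and the geometry-dependent factor in the stress-drop estimate is only order-of-magnitude.

```python
import math

# Standard seismic-moment bookkeeping behind rupture scenarios like these.
# All numerical values below are placeholders, not parameters from the thesis.
mu = 3.0e10            # shear modulus, Pa (a common crustal value)
L  = 300e3             # along-strike rupture length, m (assumed)
W  = 80e3              # downdip rupture width, m (assumed)
D  = 10.0              # average coseismic slip, m (assumed)

M0 = mu * L * W * D                      # scalar seismic moment, N*m
Mw = (math.log10(M0) - 9.1) / 1.5        # moment magnitude (Hanks & Kanamori)

# Order-of-magnitude static stress drop for a long, narrow rupture;
# the dimensionless factor (here ~2/pi) depends on the rupture geometry.
delta_sigma = 2.0 * mu * D / (math.pi * W)

print(f"M0 = {M0:.2e} N*m, Mw = {Mw:.1f}, stress drop ~ {delta_sigma/1e6:.1f} MPa")
```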
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Pei-Ling. "Rupture models of the great 1700 Cascadia earthquake based on microfossil paleoseismic observations." Thesis, 2012. http://hdl.handle.net/1828/4167.

Full text
Abstract:
Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great AD 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here, we infer heterogeneous slip for the Cascadia margin in AD 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extents, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high moment release separated by areas of low moment release. For example, in AD 1700 there was very little slip near Alsea Bay, Oregon (~ 44.5°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.
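The inference of heterogeneous slip from coastal subsidence has the generic linear structure d = G s. The sketch below illustrates that structure with a random stand-in for the Green's-function matrix and a non-negative least-squares solve; it is not the 3-D elastic dislocation model or the fitting strategy actually used in the study.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Linear forward model: subsidence d (at coastal sites) = G @ s (slip on fault patches).
# G would come from an elastic dislocation code; here it is a random stand-in.
n_sites, n_patches = 20, 10
G = rng.uniform(0.001, 0.01, size=(n_sites, n_patches))   # m of subsidence per m of slip

s_true = np.zeros(n_patches)
s_true[[2, 3, 7]] = [12.0, 18.0, 10.0]                    # heterogeneous slip: a few high-slip patches
d_obs = G @ s_true + rng.normal(0.0, 0.02, n_sites)       # noisy "paleoseismic" subsidence

# Non-negative least squares keeps the recovered slip physically one-signed.
s_est, residual = nnls(G, d_obs)
print(np.round(s_est, 1))
```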
APA, Harvard, Vancouver, ISO, and other styles
39

(13754247), Jenelle Tarlington. "Softwood timber poles: Proving of strength and stiffness." Thesis, 1998. https://figshare.com/articles/thesis/Softwood_timber_poles_Proving_of_strength_and_stiffness/21048196.

Full text
Abstract:

The method of classification of softwood timber poles within Australia is adopted directly from the United States of America, whereby a pole is classified by its groundline circumference, determined from a given pole tip load and a fibre stress value established from previous testing. Consequently, the basis for classification rests on strength and stiffness values obtained from tests of American timber. This study was conducted to determine the strength and stiffness of Queensland plantation-grown softwood.

The strength and stiffness of timber are expressed by the Modulus of Rupture (MOR) and the Modulus of Elasticity (MOE), respectively.

The samples selected for testing were two lots of 40 specimens of the species Slash Pine (Pinus elliottii). One sample consisted of poles 9.1 m in length, the other 7.6 m in length. The species and lengths of the specimens represent the timber pole products most commonly utilised by industry. The sample size was chosen so as to be economical while still producing results that are a valid indication of the entire population.

Specimens were tested by cantilever loading. Full-size poles were placed horizontally into a rigid clamp. A load was then applied to the tip of the pole until failure occurred. The load at failure and the corresponding deflection were measured, from which the MOR and MOE of each pole were determined.
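The reduction of a cantilever pole test to MOR and MOE follows the textbook beam formulas sketched below for a prismatic circular section; actual pole standards account for taper and groundline dimensions, and all numbers here are assumed, not results from this study.

```python
import math

# Textbook reduction of a cantilever bending test on a round pole to MOR and MOE.
# A prismatic circular section is assumed; taper corrections are ignored.
P_fail = 12.0e3      # tip load at failure, N (assumed)
P_el   = 6.0e3       # load within the elastic range, N (assumed)
delta  = 0.20        # tip deflection under P_el, m (assumed)
L      = 7.0         # lever arm from load point to groundline, m (assumed)
d      = 0.30        # diameter at the groundline, m (assumed)

I = math.pi * d**4 / 64.0                 # second moment of area of a circular section
MOR = (P_fail * L) * (d / 2.0) / I        # bending stress at failure = 32*P*L/(pi*d^3)
MOE = P_el * L**3 / (3.0 * delta * I)     # from the cantilever tip deflection delta = P*L^3/(3*E*I)

print(f"MOR ~ {MOR/1e6:.1f} MPa, MOE ~ {MOE/1e9:.1f} GPa")
```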

APA, Harvard, Vancouver, ISO, and other styles
40

Carty, Dillon Matthew. "An Analysis of Boosted Regression Trees to Predict the Strength Properties of Wood Composites." 2011. http://trace.tennessee.edu/utk_gradthes/954.

Full text
Abstract:
The forest products industry is a significant contributor to the U.S. economy, contributing six percent of the total U.S. manufacturing gross domestic product (GDP) and placing it on par with the U.S. automotive and plastics industries. Sustaining business competitiveness by reducing costs and maintaining product quality will be essential in the long term for this industry. Improved production efficiency and business competitiveness are the primary rationale for this work. A challenge facing this industry is to develop better knowledge of the complex nature of process variables and their relationship with final product quality attributes. Better quantifying the relationships between process variables (e.g., press temperature) and final product quality attributes, and predicting the strength properties of final products, are the goals of this study. Destructive lab tests are taken at one- to two-hour intervals to estimate internal bond (IB) tensile strength and modulus of rupture (MOR) strength properties. Significant amounts of production occur between destructive test samples. In the absence of a real-time model that predicts strength properties, operators may run higher-than-necessary feedstock input targets (e.g., weight, resin, etc.). Improved prediction of strength properties using boosted regression tree (BRT) models may reduce the costs associated with rework (i.e., remanufactured panels due to poor strength properties), reduce feedstock costs (e.g., resin and wood), reduce energy usage, and improve wood utilization from the valuable forest resource. Real-time, temporal process data sets were obtained from a U.S. particleboard manufacturer. In this thesis, BRT models were developed to predict the continuous response variables MOR and IB from a pool of possible continuous predictor variables. BRT model comparisons were done using the root mean squared error for prediction (RMSEP) and the RMSEP relative to the mean of the response variable as a percent (RMSEP%) for the validation data set(s). Overall, for MOR, RMSEP values ranged from 0.99 to 1.443 MPa, and RMSEP% values ranged from 7.9% to 11.6%. Overall, for IB, RMSEP values ranged from 0.074 to 0.108 MPa, and RMSEP% values ranged from 12.7% to 18.6%.
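As an illustration of the kind of model and error metrics described above, the sketch below fits a gradient-boosted regression tree model with scikit-learn (a stand-in for the BRT software actually used in the thesis) to synthetic process data and reports RMSEP and RMSEP%; the predictor variables and their relationship to MOR are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Synthetic stand-in for temporal process data (e.g., press temperature, resin %, mat weight).
X = rng.normal(size=(2000, 3))
mor = 12.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2] + rng.normal(0.0, 1.0, 2000)

X_train, X_valid, y_train, y_valid = train_test_split(X, mor, test_size=0.3, random_state=0)

# A boosted regression tree model (scikit-learn's gradient boosting used as a stand-in).
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3, subsample=0.8)
brt.fit(X_train, y_train)

pred = brt.predict(X_valid)
rmsep = np.sqrt(mean_squared_error(y_valid, pred))      # root mean squared error of prediction
rmsep_pct = 100.0 * rmsep / np.mean(y_valid)            # RMSEP relative to the response mean
print(f"RMSEP = {rmsep:.3f} MPa, RMSEP% = {rmsep_pct:.1f}%")
```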
APA, Harvard, Vancouver, ISO, and other styles
41

Naguit, Muriel. "Towards Earthquake-resilient Buildings: Rupture Process & Exposure/Damage Analysis of the 2013 M7.1 Bohol Philippines Earthquake." Phd thesis, 2017. http://hdl.handle.net/1885/117284.

Full text
Abstract:
The strong ground shaking due to the Mw 7.1 Bohol, Philippines, earthquake left a significant imprint on its built environment. Two key factors define this event: the wide range of seismic intensities inferred to have shaken the island and the extensive building damage. These make the Bohol earthquake an important opportunity to improve knowledge of building fragility and vulnerability. However, this entails a statistical description of building damage and a reliable source model for accurate estimation of earthquake ground motion. To this end, an extensive survey was conducted, leading to a robust description of over 25,000 damaged and undamaged structures. This comprehensive database represents a mix of construction types at various intensity levels, in both urban and rural settings. For the ground motion estimation, the geometry and slip distribution of the finite source models were based on the analysis of SAR data, aftershocks and tele-seismic waveforms. Ground motion fields were simulated and compared using two approaches: stochastic modeling and a suite of ground motion prediction equations. The intensity-converted ground motions were calibrated and associated with the exposure-damage database to derive empirical fragility and vulnerability models for typical building types in Bohol. These newly derived models were used to validate the building fragility and vulnerability models already in use in the Philippines. This post-event assessment emphasizes the importance of assembling an exposure-damage database whenever damaging earthquakes occur. The sensitivity of fragility functions to ground motion inputs is also highlighted. Results indicate that the pattern of damage is best captured by the stochastic finite-fault simulation, although the Zhao et al. (2006) ground motion model registers a comparable range of ground motions. Constraints were placed on seismic building fragility and vulnerability models, which can promote more effective implementation of building regulations and construction practices as well as deliver credible impact forecasts.
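Empirical fragility models of the kind derived in this work are often expressed as a lognormal curve P(damage | IM) = Φ(ln(IM/θ)/β). The sketch below fits such a curve to synthetic binary damage data by maximum likelihood; it is only a schematic of the general technique, not the calibration actually performed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)

# Synthetic exposure-damage data: PGA (g) at each building and a 0/1 damage flag.
pga = rng.uniform(0.05, 1.0, size=3000)
true_theta, true_beta = 0.45, 0.5                       # assumed median capacity and log-std
damaged = (rng.uniform(size=pga.size) < norm.cdf(np.log(pga / true_theta) / true_beta)).astype(int)

def neg_log_lik(params):
    """Negative log-likelihood of a lognormal fragility curve (log-parametrized)."""
    theta, beta = np.exp(params)                        # keep both parameters positive
    p = norm.cdf(np.log(pga / theta) / beta)            # probability of damage given PGA
    p = np.clip(p, 1e-9, 1.0 - 1e-9)                    # avoid log(0)
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=[np.log(0.3), np.log(0.6)], method="Nelder-Mead")
theta_hat, beta_hat = np.exp(fit.x)
print(f"fragility median ~ {theta_hat:.2f} g, dispersion ~ {beta_hat:.2f}")
```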
APA, Harvard, Vancouver, ISO, and other styles
42

Wentzel, Maximilian. "Process optimization of thermal modification of Chilean Eucalyptus nitens plantation wood." Doctoral thesis, 2019. http://hdl.handle.net/11858/00-1735-0000-002E-E5A0-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Caston, Megan. "Tsunamigenic potential of crustal faults in the southern Strait of Georgia and Boundary Bay." Thesis, 2021. http://hdl.handle.net/1828/13351.

Full text
Abstract:
In this thesis, I constrain rupture scenarios of active crustal faults in the southern Strait of Georgia and Boundary Bay in order to assess their tsunamigenic potential. The NW-SE-trending Drayton Harbor, Birch Bay, and Sandy Point faults had been previously identified on the southern side of Boundary Bay from aeromagnetic, LiDAR, and paleoseismic data; all show evidence of abrupt vertical Holocene displacements. South of Boundary Bay, the E-W-trending Skipjack Island fault zone was recently mapped on the basis of multibeam sonar imagery and seismic reflection data, with evidence for Holocene offsets of the seafloor and subsurface sediments. In addition, the Fraser River Delta fault had been hypothesized on the basis of a line of pockmarks and fluid seeps. Since these faults have only been recently mapped and identified as active, there is little information available on their structure, rupture style, and past large earthquakes. This makes it difficult to constrain rupture models to predict how fault slip could displace the seafloor during a large earthquake, for input to tsunami models. I analyzed relocated earthquake hypocentres, earthquake mechanisms, bathymetry, topography, and aeromagnetic, seismic reflection, and magnetotelluric data, to constrain the location, strike, dip, and rupture width of each fault. Correlations between datasets enabled mapping of northwestward extensions of the Sandy Point and Birch Bay faults, as well as delineating the previously unmapped Fraser River Delta fault. These offshore faults appear to be associated with infilled basement valleys in the subsurface, perhaps due to differential glacial erosion of weakened fault zone material. The Drayton Harbor fault could not be definitively mapped across Boundary Bay, so was excluded from the rupture modelling. Rupture styles were constrained using a combination of earthquake mechanisms, stress orientations, other evidence of regional compression, and vertical paleoseismic offsets. Where possible, paleoseismic displacements in past earthquakes were used to constrain the amount of fault slip for scenario earthquakes; empirical relations between fault slip and fault length or area were used to estimate displacements for the Skipjack Island and Fraser River Delta faults. The Birch Bay, Sandy Point, Skipjack Island, and Fraser River Delta faults all pose a significant tsunami risk to communities surrounding the southern Strait of Georgia and Boundary Bay. Considering both the originally mapped and extended lengths, the Birch Bay and Sandy Point faults could rupture in reverse-faulting earthquakes up to Mw 6.7-7.4 and 6.8-7.5, respectively, with seafloor uplift up to 2-2.5 m triggering damaging tsunami waves (up to at least 2.5 m) that could arrive onshore with little to no warning after the shaking begins. Similarly, the Fraser River Delta fault could host reverse or dextral-reverse slip earthquakes up to Mw 7.0-7.6, with seafloor uplift of 0.6-3.5 m. Ruptures on the Skipjack Island fault would likely have a larger strike-slip component; earthquakes of Mw 6.9-7.3 produce modelled seafloor uplift of 0.5-1.9 m. These results suggest that large tsunamigenic earthquakes on crustal faults in the southern Strait of Georgia should be included in future seismic and tsunami hazard assessments on both sides of the international border.
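The use of empirical slip-dimension relations can be illustrated with one widely used regression, the Wells and Coppersmith (1994) all-slip-type magnitude-area relation, combined with the definition of seismic moment. The thesis may rely on different relations, and the fault dimensions below are placeholders rather than mapped values.

```python
import math

# Rough magnitude / average-slip bookkeeping for a crustal fault scenario.
# Wells & Coppersmith (1994), all slip types: Mw = 4.07 + 0.98 * log10(A [km^2]).
mu = 3.0e10                      # shear modulus, Pa (a common crustal value)
length_km, width_km = 40.0, 15.0 # assumed rupture dimensions
area_km2 = length_km * width_km

Mw = 4.07 + 0.98 * math.log10(area_km2)      # magnitude from rupture area
M0 = 10.0 ** (1.5 * Mw + 9.1)                # back to scalar moment, N*m
avg_slip = M0 / (mu * area_km2 * 1.0e6)      # average slip, m (area converted to m^2)

print(f"A = {area_km2:.0f} km^2 -> Mw {Mw:.1f}, average slip ~ {avg_slip:.1f} m")
```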
APA, Harvard, Vancouver, ISO, and other styles