Academic literature on the topic 'Continuum fault models'


Journal articles on the topic "Continuum fault models"

1

So, Byung-Dal, and Fabio A. Capitanio. "Self-consistent stick-slip recurrent behaviour of elastoplastic faults in intraplate environment: a Lagrangian solid mechanics approach." Geophysical Journal International 221, no. 1 (January 3, 2020): 151–62. http://dx.doi.org/10.1093/gji/ggz581.

Abstract:
Our understanding of the seismicity of continental interiors, far from plate margins, relies on the ability to account for behaviours across a broad range of time and spatial scales. Deformation rates around seismic faults range from slip on faults during earthquakes to the long-term viscous deformation of the surrounding lithosphere, thereby presenting a challenge to modelling techniques. The aim of this study was to test a new method to simulate seismic faults using a continuum approach, reconciling them with the deformation of viscoelastoplastic lithospheres over geological timescales. A von Mises yield criterion is adopted as a proxy for the frictional shear strength of a fault. In the elastoplastic fault models a rapid change in strength occurs after plastic yielding, to achieve stress–strain equilibrium, at which point the coseismic slip and slip velocity are calculated from the strain-rate response and the size of the fault. The cumulative step-function shape of the slip and the temporally partitioned slip velocity of the fault demonstrated self-consistent discrete fault motion. The implementation of elastoplastic faults successfully reproduced the conceptual models of seismic recurrence, that is, strictly periodic and time- and slip-predictable. Elastoplastic faults that include slip-velocity strengthening and weakening, with a reduction of the time-step size during the slip stage, generated patterns of coseismic stress changes in surrounding areas similar to those calculated from actual earthquakes. A test of fault interaction captured the migration of stress between two faults under different spatial arrangements, reproducing realistic behaviours across the time and spatial scales of faults in continental interiors.
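The von Mises proxy for frictional fault strength described in this abstract can be sketched numerically. A minimal sketch, assuming an illustrative stress state and yield strength (neither taken from the paper):

```python
import numpy as np

def von_mises_stress(sigma):
    """Equivalent (von Mises) stress of a 3x3 Cauchy stress tensor [Pa]."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

def fault_yields(sigma, sigma_y):
    """Plastic yielding, used here as a proxy for frictional shear failure
    of the fault, occurs once the von Mises stress reaches sigma_y."""
    return von_mises_stress(sigma) >= sigma_y

# Illustrative uniaxial state: 60 MPa axial stress vs. a 50 MPa yield strength.
sigma = np.diag([60e6, 0.0, 0.0])
print(fault_yields(sigma, 50e6))  # for uniaxial loading, von Mises stress equals the axial stress
```

In a continuum fault model, the elements where this check trips would be the ones whose strength drops rapidly to restore stress–strain equilibrium.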
2

Cheng, Feng, Andrew V. Zuza, Peter J. Haproff, Chen Wu, Christina Neudorf, Hong Chang, Xiangzhong Li, and Bing Li. "Accommodation of India–Asia convergence via strike-slip faulting and block rotation in the Qilian Shan fold–thrust belt, northern margin of the Tibetan Plateau." Journal of the Geological Society 178, no. 3 (January 29, 2021): jgs2020–207. http://dx.doi.org/10.1144/jgs2020-207.

Abstract:
Existing models of intracontinental deformation have focused on plate-like rigid-body motion v. viscous-flow-like distributed deformation. To elucidate how plate convergence is accommodated by intracontinental strike-slip faulting and block rotation within a fold–thrust belt, we examine the Cenozoic structural framework of the central Qilian Shan of northeastern Tibet, where the NW-striking, right-slip Elashan and Riyueshan faults terminate at the WNW-striking, left-slip Haiyuan and Kunlun faults. Field- and satellite-based observations of discrete right-slip fault segments, releasing bends, horsetail termination splays and off-fault normal faulting suggest that the right-slip faults accommodate block rotation and distributed west–east crustal stretching between the Haiyuan and Kunlun faults. Luminescence dating of offset terrace risers along the Riyueshan fault yields a Quaternary slip rate of c. 1.1 mm a−1, which is similar to previous estimates. By integrating our results with regional deformation constraints, we propose that the pattern of Cenozoic deformation in northeastern Tibet is compatible with west–east crustal stretching/lateral displacement, non-rigid off-fault deformation and broad clockwise rotation and bookshelf faulting, which together accommodate NE–SW India–Asia convergence. In this model, the faults represent strain localization that approximates continuum deformation during regional clockwise lithospheric flow against the rigid Eurasian continent.
Supplementary material: Luminescence dating procedures and protocols are available at https://doi.org/10.17605/OSF.IO/CR9MN
Thematic collection: This article is part of the Fold-and-thrust belts and associated basins collection available at: https://www.lyellcollection.org/cc/fold-and-thrust-belts
3

Mancini, Simone, Margarita Segou, Maximilian Jonas Werner, and Tom Parsons. "The Predictive Skills of Elastic Coulomb Rate-and-State Aftershock Forecasts during the 2019 Ridgecrest, California, Earthquake Sequence." Bulletin of the Seismological Society of America 110, no. 4 (June 23, 2020): 1736–51. http://dx.doi.org/10.1785/0120200028.

Abstract:
Operational earthquake forecasting protocols commonly use statistical models for their recognized ease of implementation and robustness in describing the short-term spatiotemporal patterns of triggered seismicity. However, recent advances in physics-based aftershock forecasting reveal comparable performance to the standard statistical counterparts, with significantly improved predictive skills when fault and stress-field heterogeneities are considered. Here, we perform a pseudoprospective forecasting experiment during the first month of the 2019 Ridgecrest (California) earthquake sequence. We develop seven Coulomb rate-and-state models that couple static stress-change estimates with continuum mechanics expressed by the rate-and-state friction laws. Our model parameterization supports a gradually increasing complexity; we start from a preliminary implementation with simplified slip distributions and spatially homogeneous receiver faults and reach an enhanced one featuring optimized fault constitutive parameters, finite-fault slip models, secondary triggering effects, and spatially heterogeneous planes informed by pre-existing ruptures. The data-rich environment of southern California allows us to test whether incorporating data collected in near-real time during an unfolding earthquake sequence boosts our predictive power. We assess the absolute and relative performance of the forecasts by means of statistical tests used within the Collaboratory for the Study of Earthquake Predictability and compare their skills against a standard benchmark epidemic-type aftershock sequence (ETAS) model for the short term (24 hr after the two Ridgecrest mainshocks) and the intermediate term (one month). Stress-based forecasts expect heightened rates along the whole near-fault region and increased expected seismicity rates on the central Garlock fault.
Our comparative model evaluation not only supports the view that faulting heterogeneities coupled with secondary triggering effects are the most critical success components behind physics-based forecasts, but also underlines the importance of model updates incorporating near-real-time aftershock data, which reach better performance than standard ETAS. We explore the physical basis behind our results by investigating the localized shutdown of pre-existing normal faults in the Ridgecrest near-source area.
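The stress-to-seismicity coupling named in this abstract can be sketched in two lines of arithmetic: the static Coulomb failure stress change resolved on a receiver fault, and the instantaneous seismicity-rate jump it implies in the rate-and-state framework. The friction coefficient and the Aσ value below are assumed typical values, not the parameters optimized in the study:

```python
import math

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Static Coulomb failure stress change [Pa] on a receiver fault:
    d_tau is the shear-stress change in the slip direction, d_sigma_n
    the normal-stress change (tension positive, i.e. unclamping > 0),
    and mu_eff an assumed effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

def rate_jump(d_cfs, a_sigma=0.04e6):
    """Immediate seismicity-rate ratio R/r after a stress step in the
    rate-and-state framework: R/r = exp(dCFS / (A*sigma))."""
    return math.exp(d_cfs / a_sigma)

d_cfs = coulomb_stress_change(d_tau=0.08e6, d_sigma_n=0.05e6)  # 0.1 MPa step
print(rate_jump(d_cfs))  # with A*sigma = 0.04 MPa, rates jump by exp(2.5), roughly 12x
```

Negative ΔCFS values in the same expression produce rate decreases, which is the mechanism behind the "localized shutdown" of receiver faults mentioned above.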
4

Jung, Jai K., Thomas D. O’Rourke, and Christina Argyrou. "Multi-directional force–displacement response of underground pipe in sand." Canadian Geotechnical Journal 53, no. 11 (November 2016): 1763–81. http://dx.doi.org/10.1139/cgj-2016-0059.

Abstract:
A methodology is presented to evaluate multi-directional force–displacement relationships for soil–pipeline interaction analysis and design. Large-scale tests of soil reaction to pipe lateral and uplift movement in dry and partially saturated sand are used to validate plane strain, finite element (FE) soil, and pipe continuum models. The FE models are then used to characterize force versus displacement performance for lateral, vertical upward, vertical downward, and oblique orientations of pipeline movement in soil. Using the force versus displacement relationships, the analytical results for pipeline response to strike-slip fault rupture are shown to compare favorably with the results of large-scale tests in which strike-slip fault movement was imposed on 250 and 400 mm diameter high-density polyethylene pipelines in partially saturated sand. Analytical results normalized with respect to maximum lateral force are provided on 360° plots to predict maximum pipe loads for any movement direction. The resulting methodology and dimensionless plots are applicable for underground pipelines and conduits at any depth, subjected to relative soil movement in any direction in dry or saturated and partially saturated medium to very dense sands.
5

Armandine Les Landes, Antoine, Théophile Guillon, Mariane Peter-Borie, Arnold Blaisonneau, Xavier Rachez, and Sylvie Gentier. "Locating Geothermal Resources: Insights from 3D Stress and Flow Models at the Upper Rhine Graben Scale." Geofluids 2019 (May 12, 2019): 1–24. http://dx.doi.org/10.1155/2019/8494539.

Abstract:
To be exploited, geothermal resources require heat, fluid, and permeability. These favourable geothermal conditions are strongly linked to the specific geodynamic context and the main physical transport processes, notably stresses and fluid circulations, which impact heat-driving processes. The physical conditions favouring the setup of geothermal resources can be searched for in predictive models, thus giving estimates of the so-called “favourable areas.” Numerical models allow an integrated evaluation of the physical processes with adapted time and space scales and considering 3D effects. Supported by geological, geophysical, and geochemical exploration methods, they constitute a useful tool to shed light on the dynamic context of the geothermal resource setup and may provide answers to the challenging task of geothermal exploration. The Upper Rhine Graben (URG) is a data-rich geothermal system where deep fluid circulations occurring in the regional fault network are the probable origin of local thermal anomalies. Here, we present a current overview of our team’s efforts to integrate the impacts of the key physics, as well as the key factors controlling the geothermal anomalies in a fault-controlled geological setting, into 3D physically consistent models at the regional scale. The study relies on the building of the first 3D numerical flow model (using the discrete-continuum method) and mechanical model (using the distinct element method) at the URG scale. First, the key role of the regional fault network is taken into account using a discrete numerical approach. The geometry building focuses on the conceptualization of the 3D fault-zone network based on structural interpretation and generic geological concepts and is consistent with the geological knowledge. This DFN (discrete fracture network) model is then used to derive two separate models (3D flow and 3D stress) at the URG scale.
Next, based on the main characteristics of the geothermal anomalies and their link with the physics considered, criteria are identified that enable the elaboration of indicators for using the simulation results to identify geothermally favourable areas. Then, considering the strong link between stress, fluid flow, and geothermal resources, a cross-analysis of the results is carried out to delineate favourable areas for geothermal resources. The results are compared with the existing thermal data at the URG scale and with the knowledge gained through numerous previous studies. The good agreement between the delineated favourable areas and the locations of local thermal anomalies (especially the main one close to Soultz-sous-Forêts) demonstrates the key role of the regional fault network, stress, and fluid flow in the setup of geothermal resources. Moreover, these very encouraging results underline the potential of the first 3D flow and 3D stress models at the URG scale to locate geothermal resources and offer new research opportunities.
6

Weismüller, Christopher, Janos L. Urai, Michael Kettermann, Christoph von Hagke, and Klaus Reicherter. "Structure of massively dilatant faults in Iceland: lessons learned from high-resolution unmanned aerial vehicle data." Solid Earth 10, no. 5 (October 24, 2019): 1757–84. http://dx.doi.org/10.5194/se-10-1757-2019.

Abstract:
Normal faults in basalts develop massive dilatancy in the upper few hundred meters below the Earth's surface with corresponding interactions with groundwater and lava flow. These massively dilatant faults (MDFs) are widespread in Iceland and the East African Rift, but the details of their geometry are not well documented, despite their importance for fluid flow in the subsurface, geohazard assessment and geothermal energy. We present a large set of digital elevation models (DEMs) of the surface geometries of MDFs with 5–15 cm resolution, acquired along the Icelandic rift zone using unmanned aerial vehicles (UAVs). Our data present a representative set of outcrops of MDFs in Iceland, formed in basaltic sequences linked to the mid-ocean ridge. UAVs provide a much higher resolution than aerial/satellite imagery and a much better overview than ground-based fieldwork, bridging the gap between outcrop-scale observations and remote sensing. We acquired photosets of overlapping images along about 20 km of MDFs and processed these using photogrammetry to create high-resolution DEMs and orthorectified images. We use this dataset to map the faults and their damage zones to measure length, opening width and vertical offset of the faults and identify surface tilt in the damage zones. Ground truthing of the data was done by field observations. Mapped vertical offsets show typical trends of normal fault growth by segment coalescence. However, opening widths in map view show variations at much higher frequency, caused by segmentation, collapsed relays and tilted blocks. These effects commonly cause a higher-than-expected ratio of vertical offset and opening width for a steep normal fault at depth. Based on field observations and the relationships of opening width and vertical offset, we define three endmember morphologies of MDFs: (i) dilatant faults with opening width and vertical offset, (ii) tilted blocks (TBs) and (iii) opening-mode (mode I) fissures.
Field observation of normal faults without visible opening invariably shows that these have an opening filled with recent sediment. TB-dominated normal faults tend to have the largest ratio of opening width and vertical offset. Fissures have opening widths up to 15 m with throw below a 2 m threshold. Plotting opening width versus vertical offset shows that there is a continuous transition between the endmembers. We conclude that for these endmembers, the ratio between opening width and vertical offset R can be reliably used to predict fault structures at depth. However, fractures associated with MDFs belong to one larger continuum and, consequently, where different endmembers coexist, a clear identification of structures solely via the determination of R is impossible.
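The endmember logic above lends itself to a crude classifier based on the ratio R of opening width to vertical offset. The 2 m fissure-throw threshold comes from the abstract; the R cutoff separating tilted-block faults is an assumption for the sketch, not a calibrated value:

```python
def classify_mdf(opening_width_m, vertical_offset_m, r_tb=2.0):
    """Illustrative endmember labelling of a massively dilatant fault.
    Fissures: opening with throw below ~2 m; tilted-block (TB) faults:
    the largest opening/offset ratios; otherwise a dilatant normal
    fault. The TB cutoff r_tb is an assumed value for this sketch."""
    if vertical_offset_m < 2.0:
        return "opening-mode fissure"
    r = opening_width_m / vertical_offset_m
    return "tilted-block dominated" if r > r_tb else "dilatant normal fault"

print(classify_mdf(8.0, 1.0))   # wide opening, sub-threshold throw
print(classify_mdf(12.0, 4.0))  # R = 3
print(classify_mdf(3.0, 4.0))   # R = 0.75
```

As the abstract cautions, the endmembers form a continuum, so where they coexist a hard threshold on R alone cannot reliably discriminate structures; the sketch is only the decision rule, not that caveat.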
7

Dey, Sandip, and Solomon Tesfamariam. "Probabilistic Seismic Risk Analysis of Buried Pipelines Due to Permanent Ground Deformation for Victoria, BC." Geotechnics 2, no. 3 (August 31, 2022): 731–53. http://dx.doi.org/10.3390/geotechnics2030035.

Abstract:
Buried continuous pipelines are prone to failure due to permanent ground deformation as a result of fault rupture. Since the failure mode is dependent on a number of factors, a probabilistic approach is necessary to correctly compute the seismic risk. In this study, a novel method to estimate regional seismic risk to buried continuous pipelines is presented. The seismic risk assessment method is thereafter illustrated for buried gas pipelines in the City of Victoria, British Columbia. The illustrated example considers seismic hazard from the Leech River Valley Fault Zone (LRVFZ). The risk assessment approach considers uncertainties of earthquake rupture, soil properties at the site concerned, geometric properties of pipes and operating conditions. Major improvements in this method over existing comparable studies include the use of stochastic earthquake source modeling and analytical Okada solutions to generate regional ground deformation, probabilistically. Previous studies used regression equations to define probabilistic ground deformations along a fault. Secondly, in the current study, experimentally evaluated 3D shell and continuum pipe–soil finite element models were used to compute pipeline responses. Earlier investigations used simple soil spring–beam element pipe models to evaluate the pipeline response. Finally, the current approach uses the multi-fidelity Gaussian process surrogate model to ensure efficiency and limit required computational resources. The developed multi-fidelity Gaussian process surrogate model was successfully cross-validated with high coefficients of determination of 0.92 and 0.96. A fragility curve was generated based on failure criteria from ALA strain limits. The seismic risks of pipeline failure due to compressive buckling and tensile rupture at the given site considered were computed to be 1.5 percent and 0.6 percent in 50 years, respectively.
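The "1.5 percent in 50 years" risk figures above are naturally expressed through a Poisson occurrence model. A small helper, assuming stationarity, converts between an annual failure rate and a multi-year probability (the numbers are only the abstract's headline figures, not the study's computation):

```python
import math

def prob_of_failure(annual_rate, years=50.0):
    """Poisson probability of at least one failure within `years`,
    given a stationary annual failure rate."""
    return 1.0 - math.exp(-annual_rate * years)

def annual_rate_from_prob(p, years=50.0):
    """Inverse map: the annual rate implied by probability p over `years`."""
    return -math.log(1.0 - p) / years

rate = annual_rate_from_prob(0.015)  # the abstract's 1.5 % in 50 years
print(rate)                          # roughly 3e-4 failures per year
```

The same conversion applied to the 0.6 % tensile-rupture figure gives an even smaller implied annual rate, which is why such risks are usually reported over a 50-year exposure window.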
8

Olsen-Kettle, Louise, Hans Mühlhaus, and Christian Baillard. "A study of localization limiters and mesh dependency in earthquake rupture." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1910 (January 13, 2010): 119–30. http://dx.doi.org/10.1098/rsta.2009.0190.

Abstract:
No complete physically consistent model of earthquake rupture exists that can fully describe the rich hierarchy of scale dependencies and nonlinearities associated with earthquakes. We study mesh sensitivity in numerical models of earthquake rupture and demonstrate that this mesh sensitivity may provide hidden clues to the underlying physics generating the rich dynamics associated with earthquake rupture. We focus on unstable slip events that occur in earthquakes when rupture is associated with frictional weakening of the fault. Attempts to simulate these phenomena directly by introducing the relevant constitutive behaviour lead to mesh-dependent results, where the deformation localizes in one element, irrespective of size. Interestingly, earthquake models with oversized mesh elements that are ill-posed in the continuum limit display more complex and realistic physics. Until now, the mesh-dependency problem has been regarded as a red herring, but have we overlooked an important clue arising from the mesh sensitivity? We analyse spatial discretization errors introduced into models with oversized meshes to show how the governing equations may change because of these error terms and give rise to more interesting physics.
9

Ahmed, Barzan I., and Mohammed S. Al-Jawad. "Geomechanical modelling and two-way coupling simulation for carbonate gas reservoir." Journal of Petroleum Exploration and Production Technology 10, no. 8 (August 10, 2020): 3619–48. http://dx.doi.org/10.1007/s13202-020-00965-7.

Abstract:
Geomechanical modelling and simulation are introduced to accurately determine the combined effects of hydrocarbon production and changes in rock properties due to geomechanical effects. The reservoir geomechanical model is concerned with stress-related issues and rock failure in compression, shear, and tension induced by reservoir pore-pressure changes due to reservoir depletion. In this paper, a rock mechanical model is constructed in geomechanical mode, and reservoir geomechanics simulations are run for a carbonate gas reservoir. The study begins with assessment of the data, construction of 1D rock mechanical models along the well trajectory, the generation of a 3D mechanical earth model, and running a 4D geomechanical simulation using a two-way coupling simulation method, followed by results analysis. A dual porosity/permeability model is coupled with a 3D geomechanical model, and iterative two-way coupling simulation is performed to understand the changes in effective stress dynamics with the decrease in reservoir pressure due to production, and therefore to identify the changes in dual-continuum media conductivity to fluid flow and field ultimate recovery. The analysis shows an observable effect on reservoir flow behaviour: a 4% decrease in gas ultimate recovery and considerable changes in matrix contribution and fracture properties, with the geomechanical effects on the matrix visibly decreasing the gas production potential, while the effect on the natural-fracture contribution to gas inflow is limited. Generally, this could be due to slip flow of gas at the media walls of micro-extension fractures, and the fracture conductivity is quite sufficient for the volume that the matrix feeds into the fractures.
Also, the geomechanical simulation results show the stability of the existing faults, emphasizing that the loading on the faults is too low to induce the fault slip that would create fracturing and enhanced permeability providing an efficient conduit for reservoir fluid flow in naturally fractured reservoirs.
10

Compastié, Maxime, Antonio López Martínez, Carolina Fernández, Manuel Gil Pérez, Stylianos Tsarsitalidis, George Xylouris, Izidor Mlakar, Michail Alexandros Kourtis, and Valentino Šafran. "PALANTIR: An NFV-Based Security-as-a-Service Approach for Automating Threat Mitigation." Sensors 23, no. 3 (February 2, 2023): 1658. http://dx.doi.org/10.3390/s23031658.

Abstract:
Small and medium enterprises are significantly hampered by cyber-threats as they have inherently limited skills and financial capacities to anticipate, prevent, and handle security incidents. The EU-funded PALANTIR project aims at facilitating the outsourcing of the security supervision to external providers to relieve SMEs/MEs from this burden. However, good practices for the operation of SME/ME assets involve avoiding their exposure to external parties, which requires a tightly defined and timely enforced security policy when resources span across the cloud continuum and need interactions. This paper proposes an innovative architecture extending Network Function Virtualisation to externalise and automate threat mitigation and remediation in cloud, edge, and on-premises environments. Our contributions include an ontology for the decision-making process, a Fault-and-Breach-Management-based remediation policy model, a framework conducting remediation actions, and a set of deployment models adapted to the constraints of cloud, edge, and on-premises environment(s). Finally, we also detail an implementation prototype of the framework serving as evaluation material.

Dissertations / Theses on the topic "Continuum fault models"

1

Zacarias, Alisson Teixeira. "Determinação da variação de rigidez em placas, através da metodologia dos observadores de estados /." Ilha Solteira : [s.n.], 2008. http://hdl.handle.net/11449/94563.

Abstract:
Advisor: Gilberto Pechoto de Melo
Committee: Vicente Lopes Júnior
Committee: Raquel Santini Leandro Rade
Abstract: A major driver of industrial interest in new techniques for fault detection and localization is concern for the safety of industrial systems: machines must be supervised and monitored so that faults are detected and corrected as soon as possible. In practice, certain system parameters can vary during operation, owing to specific characteristics or the natural wear of components. It is also known that, even in well-designed systems, the occurrence of cracks in some components can cause economic losses or lead to dangerous situations. State observers can reconstruct the unmeasured states of a system, provided those states are observable, making it possible to estimate measurements at points that are difficult to access. The state-observer technique consists of developing a model of the system under analysis and comparing the estimated output with the measured output; the difference between the two signals yields a residue that is used for the analysis. In this work, a bank of observers associated with a crack model was set up in order to follow the crack's progress. The results obtained from computational simulations of a cantilever plate discretized with the finite element technique, together with the experimental analyses performed, were quite satisfactory, validating the developed methodology.
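The residue-generation scheme this thesis describes can be sketched as a discrete-time Luenberger observer: predict with the model, correct with the output error, and monitor the residue. The system matrices and gain below are illustrative toy values, not the plate model:

```python
import numpy as np

# Illustrative 2-state plant x[k+1] = A x + B u, y = C x (not the plate model).
A = np.array([[1.0, 0.1], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([[0.6], [0.4]])  # assumed stabilizing observer gain

def observer_step(x_hat, u, y):
    """One observer update: the residue y - C x_hat is the signal a
    bank of observers monitors; a persistent residue flags a fault
    (here, a stiffness change caused by a crack)."""
    residual = y - C @ x_hat
    return A @ x_hat + B @ u + L_gain @ residual, float(residual[0, 0])

# Fault-free run: the model matches the plant, so the residue decays to ~0.
x_true = np.array([[1.0], [0.0]])
x_hat = np.zeros((2, 1))
for _ in range(60):
    y = C @ x_true
    x_true = A @ x_true
    x_hat, res = observer_step(x_hat, np.zeros((1, 1)), y)
print(abs(res))  # near zero for the fault-free plant
```

If the plant's A were perturbed (the crack's stiffness change) while the observer kept the nominal model, the residue would no longer converge to zero, which is exactly the detection signal described above.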
Master's thesis
2

Wong, Kam Lung. "Studies on a continuous Markov model for probabilistic fault analysis in VLSI sequential circuits using computer simulation." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5876.

Abstract:
A continuous-parameter Markov model is proposed in this thesis to detect permanent stuck-at faults in sequential circuits by random testing. Given a sequential circuit with certain stuck-at faults specified, the fault-free and the faulty state tables of the circuit can be readily derived. By simulating these two state tables on a computer, the parameters of the desired Markov model can be obtained. For a specified confidence level, it is easy to derive the model parameters and to estimate the required testing time. A complete mathematical analysis of the model is given that provides some useful insights into the nature of faults in relation to random testing and the associated confidence level.
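For a single absorbing "detected" transition, the confidence-to-testing-time relation described above reduces to an exponential waiting time. A minimal sketch, assuming a hypothetical detection rate (the thesis derives such rates from the fault-free and faulty state tables):

```python
import math

def required_testing_time(detection_rate, confidence=0.99):
    """Testing time T such that a fault whose detection is a Poisson
    event with rate `detection_rate` (per test cycle) escapes a random
    test of length T with probability at most 1 - confidence:
    T = -ln(1 - c) / rate."""
    return -math.log(1.0 - confidence) / detection_rate

# Hypothetical rate of 0.01 detections per cycle at 99 % confidence.
print(required_testing_time(0.01))  # about 460.5 cycles
```

Doubling the required confidence "nines" (99 % to 99.99 %) simply doubles the testing time, one convenient consequence of the exponential model.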
3

Zacarias, Alisson Teixeira [UNESP]. "Determinação da variação de rigidez em placas, através da metodologia dos observadores de estados." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/94563.

Abstract:
A major driver of industrial interest in new techniques for fault detection and localization is concern for the safety of industrial systems: machines must be supervised and monitored so that faults are detected and corrected as soon as possible. In practice, certain system parameters can vary during operation, owing to specific characteristics or the natural wear of components. It is also known that, even in well-designed systems, the occurrence of cracks in some components can cause economic losses or lead to dangerous situations. State observers can reconstruct the unmeasured states of a system, provided those states are observable, making it possible to estimate measurements at points that are difficult to access. The state-observer technique consists of developing a model of the system under analysis and comparing the estimated output with the measured output; the difference between the two signals yields a residue that is used for the analysis. In this work, a bank of observers associated with a crack model was set up in order to follow the crack's progress. The results obtained from computational simulations of a cantilever plate discretized with the finite element technique, together with the experimental analyses performed, were quite satisfactory, validating the developed methodology.
4

Ekanayake, R. M. Thushara Chaminda Bandara. "Fault diagnosis: A distributed model-based approach for safety-critical complex reactive systems with hybrid dynamics." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/135714/1/R%20M%20Thushara%20Chaminda%20Bandara_Ekanayake_Thesis.pdf.

Abstract:
This thesis develops a basis for implementing integrated control and fault diagnosis systems for complex, dynamic, safety-critical systems. A distributed systems architecture and its implementation on a real physical elevator system are discussed, with recommendations for commercial applications. The physical system was modelled as a combined model of finite state machines and bond graphs.
APA, Harvard, Vancouver, ISO, and other styles
5

Hadj, Hassen Faouzi. "Modelisation par un milieu continu du comportement mecanique d'un massif rocheux a fissuration orientee." Paris, ENMP, 1988. http://www.theses.fr/1988ENMP0120.

Full text
Abstract:
In order to optimize the design of structures built in rock masses, the discontinuities that affect them are taken into account, and a model is established that studies the mechanical behaviour of a fissured rock mass by means of fractured test specimens.
APA, Harvard, Vancouver, ISO, and other styles
6

ABDALLAH, HAISCAM. "Construction d'un logiciel de calcul des elements transitoires de chaines de markov a temps continu." Rennes 1, 1989. http://www.theses.fr/1989REN10055.

Full text
Abstract:
Development of a method and construction of a software tool for the quantitative evaluation of dependability measures of computer systems. The proposed method computes the transient quantities precisely and quickly. Bounds on the various errors are established; specifically, they concern the precision problem related to truncation and that of rounding errors. Critical values, beyond which results with one significant decimal digit can no longer be guaranteed, are defined. This definition made it possible to compute the entire transient response in a single run of the developed software.
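Transient analysis of continuous-time Markov chains with a bounded truncation error, as described here, is commonly done by uniformization; the following is a sketch under assumed parameters (the two-state repairable-system generator is made up for illustration, not taken from the thesis).

```python
import numpy as np

def ctmc_transient(Q, p0, t, eps=1e-8):
    """Transient distribution p(t) = p0 expm(Q t) via uniformization.

    The Poisson series is truncated once the accumulated weight reaches
    1 - eps, which bounds the truncation error by eps.
    """
    lam = max(-Q[i, i] for i in range(Q.shape[0])) * 1.05  # uniformization rate
    P = np.eye(Q.shape[0]) + Q / lam                       # embedded DTMC kernel
    w = np.exp(-lam * t)        # Poisson weight for k = 0
    term = p0.copy()
    p = w * term
    total, k = w, 0
    while total < 1.0 - eps:
        k += 1
        term = term @ P
        w *= lam * t / k
        p += w * term
        total += w
    return p

# illustrative two-state model: state 0 = up, 1 = failed;
# failure rate 0.1 /h, repair rate 1.0 /h
Q = np.array([[-0.1, 0.1], [1.0, -1.0]])
p = ctmc_transient(Q, np.array([1.0, 0.0]), t=10.0)
```

At t = 10 the distribution is essentially at steady state, so p[0] approaches the availability 1.0/1.1.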
APA, Harvard, Vancouver, ISO, and other styles
7

Thabet, Rihab El Houda. "Détection de défauts des systèmes non linéaires à incertitudes bornées continus." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0283/document.

Full text
Abstract:
The monitoring of industrial and/or embedded systems is a major concern owing to their increasing complexity and the requirement to respect mission profiles. Detection of anomalies plays a key role in this context. Fundamentally, model-based detection procedures consist in comparing the true operation of the system with a reference established using a fault-free model. However, the monitored systems often feature nonlinear dynamics which are difficult to characterize exactly. The approach considered in this thesis is to enclose their influence through bounded uncertainties. The propagation of these uncertainties allows the evaluation of thresholds aiming at ensuring a good trade-off between sensitivity to faults and robustness with respect to disturbances while maintaining a reasonable computational complexity. To that purpose, an important part of the work addresses the extension of classes of dynamic models with bounded uncertainties so that interval observers can be obtained with the related inclusion and stability proofs. Based on time-varying changes of coordinates, LTI, LPV and LTV dynamics are gradually considered, to finally deal with some classes of nonlinear continuous dynamics with bounded uncertainties. A transformation of such nonlinear models into LPV models with bounded uncertainties has been used. A first study on the nonlinearities involved in longitudinal flight dynamics is presented. A complementary work deals with an explicit characterization of measurement noise variability (random behavior of noise within a measurement) in a bounded-error context. Combining this data-driven approach with a model-driven one using an interval predictor, a promising method for the detection of faults related to the position of aircraft control surfaces is proposed. In this context, special attention has been paid to the detection of runaway and jamming of an elevator.
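The interval-observer idea can be sketched for the simplest case, a scalar stable discrete-time system with known disturbance and noise bounds; all dynamics, bounds, and the injected fault below are illustrative assumptions, not the thesis's models. Lower and upper state bounds are propagated, and a fault is flagged whenever the measurement leaves the predicted interval.

```python
import random

# Hypothetical scalar system: x+ = a*x + b*u + w, |w| <= wbar; y = x + v, |v| <= vbar.
a, b = 0.9, 1.0
wbar, vbar = 0.05, 0.02

random.seed(0)
lo, hi = -0.1, 0.1          # initial enclosure, guaranteed to contain x
x = 0.0
alarms = []
for k in range(60):
    u = 1.0
    x = a * x + b * u + random.uniform(-wbar, wbar)
    fault = 1.0 if k >= 40 else 0.0            # additive sensor fault from k = 40
    y = x + random.uniform(-vbar, vbar) + fault
    # propagate the bounds (a > 0, so the endpoints map monotonically)
    lo, hi = a * lo + b * u - wbar, a * hi + b * u + wbar
    alarm = not (lo - vbar <= y <= hi + vbar)  # measurement-consistency test
    alarms.append(alarm)
    if not alarm:
        # contract the enclosure with the measurement information
        lo, hi = max(lo, y - vbar), min(hi, y + vbar)
```

Because the enclosure is guaranteed to contain the true state, the threshold produces no false alarms by construction; only a fault larger than the combined uncertainty triggers the test, which is the sensitivity/robustness trade-off the abstract describes.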
APA, Harvard, Vancouver, ISO, and other styles
8

Kibey, Sandeep A. "Mesoscale models for stacking faults, deformation twins and martensitic transformations : linking atomistics to continuum. /." 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3290271.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2007.
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7621. Adviser: Huseyin Sehitoglu. Includes bibliographical references (leaves 117-130). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Continuum fault models"

1

United States. National Aeronautics and Space Administration., ed. Fault diagnosis based on continuous simulation models: Final report, NASA/Langley grant NAG-1-618. Williamsburg, Va: College of William & Mary, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Continuum fatigue damage modeling for critical design, control, and fault prognosis. [Washington, DC]: National Aeronautics and Space Administration, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cohn, Stephan, and P. Allan Klock. Operating Room Fires and Electrical Safety. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199366149.003.0018.

Full text
Abstract:
Understanding electrical systems and fire safety protocols in the operating room is fundamental to patient and staff safety. Modern operating rooms are designed to reduce the risk of electrical hazards. Line isolation transformers were developed in the era of explosive anesthetics to reduce the risk of sparks and macro-shock. Isolated electrical supplies are still used in operating rooms because they allow surgery to continue while the line isolation alarm is activated and the source of the fault is investigated and deactivated. Ground fault circuit interrupters may also be used in operating rooms, but if a fault is detected, they will deactivate the electrical circuit, which may be disruptive to surgical or anesthetic care. Micro-shock occurs when a small amount of current is delivered directly to the myocardium via an indwelling catheter or pacing wire. Operating room fires, though relatively rare, can cause devastating patient injury but are largely preventable.
APA, Harvard, Vancouver, ISO, and other styles
4

Farrell, David M., and Niamh Hardiman, eds. The Oxford Handbook of Irish Politics. Oxford University Press, 2021. http://dx.doi.org/10.1093/oxfordhb/9780198823834.001.0001.

Full text
Abstract:
Ireland has enjoyed continuous democratic government for almost a century, an unusual experience among countries that gained their independence in the twentieth century. But the way this works has changed dramatically over time. Ireland’s colonial past has had an enduring influence over political life, enabling stable institutions of democratic accountability, while also shaping economic underdevelopment and persistent emigration. More recently, membership of the EU has brought about far-reaching transformation across almost all aspects of life. But the paradoxes have only intensified. Now one of the most open economies in the world, Ireland has experienced both rapid growth and a severe crash in the wake of the Great Recession. By some measures, Ireland is among the most affluent countries in the world, yet this is not the lived experience for many of its citizens. Ireland is an unequivocally modern state, yet public life continues to be marked by ideas and values in which tradition and modernity are uneasy bedfellows. It is a small state that has ambitions to carry more weight on the world stage. Ireland continues to be deeply connected to Britain through ties of culture and trade, now matters of deep concern post-Brexit. And the old fault lines between North and South, between Ireland and Britain, which had been at the core of one of Europe’s longest and bloodiest civil conflicts, risk being reopened. These key issues are teased out in this book, making it the most comprehensive volume on Irish politics to date.
APA, Harvard, Vancouver, ISO, and other styles
5

Lockwood, Erin. The Politics and Practices of Central Clearing in OTC Derivatives Markets. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190864576.003.0007.

Full text
Abstract:
This chapter focuses on the unintended consequences of the post-crisis mandate that over-the-counter (OTC) derivatives be cleared through centralized clearinghouses in an effort to reduce counterparty and systemic risk. Although central clearing has been widely implemented, it has reproduced many of the same characteristics of financial markets that contributed to the 2008 crisis: concentrated risk, moral hazard, and a reliance on faulty risk models. What accounts for the recalcitrance of the OTC derivatives market to a regulatory change? The chapter argues that focusing on the technologies and practices used to govern derivatives markets helps explain the absence of more radical regulatory policy shifts in derivatives regulation. Although there has been a significant shift in who regulates OTC markets, much less has changed at the level of the specific practices that govern these markets, and the chapter examines the continued reliance on netting, collateralization, and risk modeling within clearinghouses.
APA, Harvard, Vancouver, ISO, and other styles
6

Fitzsimmons, Rebekah, and Casey Alane Wilson, eds. Beyond the Blockbusters. University Press of Mississippi, 2020. http://dx.doi.org/10.14325/mississippi/9781496827135.001.0001.

Full text
Abstract:
While the critical and popular attention afforded to twenty-first century young adult literature has exponentially increased in recent years, the texts selected for discussion in both classrooms and scholarship has remained static and small. Twilight, The Hunger Games, The Fault in Our Stars, and The Hate U Give dominate conversations among scholars and critics—but they are far from the only texts in need of analysis. Beyond the Blockbusters: Themes and Trends in Contemporary Young Adult Fiction offers a necessary remedy to this limited perspective by bringing together a series of essays about the many subgenres, themes, and character types that have been overlooked and under-discussed until now. The collection tackles a diverse range of subjects—modern updates to the marriage plot; fairy tale retellings in dystopian settings; stories of extrajudicial police killings and racial justice—but is united by a commitment to exploring the large-scale generic and theoretical structures at work in each set of texts. As a collection, Beyond the Blockbusters is an exciting glimpse of a field that continues to grow and change even as it explodes with popularity, and would make an excellent addition to the library of any scholar, instructor, or reader of young adult literature.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Continuum fault models"

1

Zhu, Yucai, and Ton Backx. "Identification for Fault Diagnosis; Estimation of Continuous-Time Models." In Identification of Multivariable Industrial Processes, 167–75. London: Springer London, 1993. http://dx.doi.org/10.1007/978-1-4471-2058-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jing, Jinglin Zhou, and Xiaolu Chen. "Soft-Transition Sub-PCA Monitoring of Batch Processes." In Intelligent Control and Learning Systems, 59–77. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8044-1_5.

Full text
Abstract:
Batch or semi-batch processes are used to produce high-value-added products in the biological, food, and semiconductor industries. Batch processes such as fermentation, polymerization, and pharmacy are highly sensitive to abnormal changes in operating conditions. Monitoring of such processes is extremely important in order to achieve higher productivity. However, it is more difficult to develop an exact monitoring model for batch processes than for continuous processes, due to the common natures of batch processes: non-steady, time-varying, finite-duration, and nonlinear behaviors. The lack of an exact monitoring model in most batch processes means that an operator cannot identify faults when they occur. Therefore, effective techniques for monitoring batch processes exactly are necessary in order to prompt the operator to take corrective actions before the situation becomes more dangerous.
APA, Harvard, Vancouver, ISO, and other styles
3

Mohan Khilar, Pabitra. "Genetic Algorithms." In Advances in Secure Computing, Internet Services, and Applications, 239–55. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4940-8.ch012.

Full text
Abstract:
Genetic algorithms are important techniques for solving many NP-complete problems related to distributed computing and its application domains. Genetic-algorithm-based fault diagnosis in distributed computing systems has recently become a feasible methodology for solving diagnosis problems. Distributed embedded systems consisting of sensors, actuators, processors/microcontrollers, and interconnection networks are one class of distributed computing systems that have long been used, starting from small-scale home appliances up to large-scale satellite systems. Some of their applications are in safety-critical systems, where the occurrence of faults can result in catastrophic situations and fault diagnosis is therefore very important. In this chapter, the different types of faults likely to occur in distributed embedded systems are presented, along with a GA-based methodology to solve these problems and a performance analysis of the fault diagnosis algorithm. The diagnosis algorithm presented here is also well suited to general-purpose distributed computing systems, with appropriate modifications to the system and fault models. In fact, this book chapter will enable the reader not only to study various aspects of fault diagnosis techniques but also to gain insight into building robust systems that allow continued normal service despite the occurrence of failures.
APA, Harvard, Vancouver, ISO, and other styles
4

Pan, Jihui, Wenqing Qu, Hao Xue, Lei Zhang, and Liang Wu. "Study on Fault Prognostics and Health Management for UAV." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220569.

Full text
Abstract:
With the improvement of the automation and intelligence of unmanned aerial vehicles, their application scenarios and service scope continue to expand. The UAV system is complex and the task environment changeable, which poses new challenges to safety and reliability. Fault prognostics and health management (PHM) technology can effectively reduce the risk of mission interruption caused by faults and improve the quality of UAV missions throughout the life cycle. Firstly, a framework for UAV PHM technology is proposed based on the basic concepts of UAVs and PHM; then the research status of UAV fault diagnosis and fault prediction technology is analyzed and summarized. Finally, the challenges of UAV fault diagnosis and prediction technology are discussed. In addition, the development trend of UAV PHM technology is summarized from four aspects: failure mechanism basis, condition monitoring technology, fault model construction, and intelligent technology application, aiming to provide a reference for the research and development of the new generation of UAV PHM technology.
APA, Harvard, Vancouver, ISO, and other styles
5

Lumme, Veli. "Principles of Classification." In Diagnostics and Prognostics of Engineering Systems, 55–73. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2095-7.ch003.

Full text
Abstract:
This chapter discusses the main principles of the creation and use of a classifier to predict the interpretation of an unknown data sample. Classification offers the possibility to learn and reuse information gained from previous occurrences of various normal and fault modes. This process is continuous and can be generalized to cover the diagnostics of all objects that are substantially of the same type. The effective use of a classifier includes initial training with known data samples, anomaly detection, retraining, and fault detection. With these elements, an automated, continuously learning machine diagnostics system can be developed. The main objective of such a system is to automate various time-intensive tasks and allow more time for an expert to interpret unknown anomalies. A secondary objective is to utilize the data collected from previous fault modes to predict the re-occurrence of these faults in a substantially similar machine. It is important to understand the behaviour and functioning of a classifier in the development of software solutions for automated diagnostic methods. Several proven methods that can be used, for instance in software development, are disclosed in this chapter.
APA, Harvard, Vancouver, ISO, and other styles
6

Akinci, Tahir Cetin. "Applications of Big Data and AI in Electric Power Systems Engineering." In AI and Big Data’s Potential for Disruptive Innovation, 240–60. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9687-5.ch009.

Full text
Abstract:
The production, transmission, and distribution of energy can only be made stable and continuous through detailed analysis of the data. Energy demand needs to be met by a number of optimization algorithms during the distribution of the generated energy. The pricing of the energy supplied to users, and the shifting of investments according to demand hours, led to the formation of energy exchanges; these usage costs vary for active and reactive power. All of these supply-demand and pricing plans can only be achieved by collecting and analyzing data at each stage. In the study, an electrical power line with real parameters was modeled, fault scenarios were created, and faults were identified by artificial intelligence methods. Both the power flow of electrical power systems and methods of meeting demand were investigated with big data, machine learning, and artificial neural network approaches.
APA, Harvard, Vancouver, ISO, and other styles
7

Akinci, Tahir Cetin. "Applications of Big Data and AI in Electric Power Systems Engineering." In Research Anthology on Artificial Neural Network Applications, 783–803. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-2408-7.ch036.

Full text
Abstract:
The production, transmission, and distribution of energy can only be made stable and continuous through detailed analysis of the data. Energy demand needs to be met by a number of optimization algorithms during the distribution of the generated energy. The pricing of the energy supplied to users, and the shifting of investments according to demand hours, led to the formation of energy exchanges; these usage costs vary for active and reactive power. All of these supply-demand and pricing plans can only be achieved by collecting and analyzing data at each stage. In the study, an electrical power line with real parameters was modeled, fault scenarios were created, and faults were identified by artificial intelligence methods. Both the power flow of electrical power systems and methods of meeting demand were investigated with big data, machine learning, and artificial neural network approaches.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Ying, Tuo Wang, Dongqiang Shi, Yizhu Tao, and Li Feng. "A System with Minimal Redundancy for Intelligent Prediction Management." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220515.

Full text
Abstract:
This paper proposes a predictive redundancy management method for intelligent prediction management that can effectively improve the real-time online reliability of agents. The method replaces the multi-redundant post-processing scheme of traditional systems with single-redundant real-time pre-switching. Through a large number of experiments, the feasibility of the system is verified; it can be built as a redundant fault-tolerant system with good generalization that can be implemented in the field. Three kinds of redundant modules that are about to fail are considered: a module whose health continues to decline, a module whose health drops rapidly at a certain time, and a module that remains in a sub-healthy state. After the health assessment, the early warning model is applied and observed to see whether it issues a health warning, which verifies the effectiveness of the early warning model.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, P., and S. X. Ding. "A Model-Free Fault Detection Approach of Continuous-Time Systems from Time Domain Data." In Fault Detection, Supervision and Safety of Technical Processes 2006, 546–51. Elsevier, 2007. http://dx.doi.org/10.1016/b978-008044485-7/50092-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Fuxing, Luxi Li, and You Peng. "Research on Digital Twin and Collaborative Cloud and Edge Computing Applied in Operations and Maintenance in Wind Turbines of Wind Power Farm." In Advances in Transdisciplinary Engineering. IOS Press, 2021. http://dx.doi.org/10.3233/atde210263.

Full text
Abstract:
To address the increasingly prominent problems of wind turbine maintenance, a wind farm equipment operation and maintenance framework built with edge-cloud collaboration technology is proposed, in which a digital twin is used for fault prediction and diagnosis. The framework consists of a data source layer, an edge computing node layer, and a public or private cloud. The data source layer handles the acquisition and transmission of wind turbine operation and maintenance data; the edge computing node layer is responsible for on-site data computing, storage, and transmission to the cloud computing layer, as well as receiving cloud computing results and driving and controlling devices. The cloud computing layer completes big data computation and storage for the wind farm and, based on real-time data records, performs continuous simulation and optimization, corrects the failure prediction model, maintains the expert database and its prediction software, and supports edge node interaction and shared intelligence. The research explains how the wind turbine digital twin is used for fault prediction and diagnosis modelling, condition assessment, feature analysis and diagnosis, and life prediction, combined with a probabilistic digital twin model to derive the maintenance plan and decision-making method.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Continuum fault models"

1

Gangsar, Purushottam, and Rajiv Tiwari. "Analysis of Time, Frequency and Wavelet Based Features of Vibration and Current Signals for Fault Diagnosis of Induction Motors Using SVM." In ASME 2017 Gas Turbine India Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gtindia2017-4774.

Full text
Abstract:
This paper presents a comparative analysis of time-, frequency- and time-frequency-domain features of vibration and current signals for identifying various faults in induction motors (IMs) using a support vector machine (SVM). Four mechanical faults (bearing fault, unbalanced rotor, bowed rotor and misaligned rotor) and three electrical faults (broken rotor bars, stator winding fault with two severity levels and phase unbalance with two severity levels) are considered in the present study. The proposed fault diagnosis consists of three steps. In the first step, the vibration in three orthogonal directions and the current in three phases are acquired from healthy and faulty motors using a machine fault simulator (MFS). In the second step, useful statistical features are extracted from the time, frequency and time-frequency domains (the latter via the continuous wavelet transform (CWT)) of the signals. For effective fault diagnosis, the SVM parameters are optimally selected using the grid-search method with 5-fold cross-validation, and the effective fault features are selected using the wrapper model. Finally, the fault diagnosis of the IM is performed using the optimal SVM parameters and effective features as inputs to the SVM. The classification performance of the methodologies developed in the three domains is compared for various IM operating conditions. The test results showed that the developed methodology could isolate ten IM fault conditions successfully based on features from all three domains at all operating conditions; however, the time-frequency features gave the best results.
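The kind of time-domain statistical feature extraction described above can be sketched as follows; the exact feature list, the synthetic "healthy" and "faulty" signals, and the impact amplitude are illustrative assumptions, not the paper's data.

```python
import numpy as np

def time_domain_features(sig):
    """Statistical features of the sort used for IM fault diagnosis."""
    sig = np.asarray(sig, dtype=float)
    rms = np.sqrt(np.mean(sig ** 2))
    mean, std = sig.mean(), sig.std()
    kurtosis = np.mean((sig - mean) ** 4) / std ** 4   # heavy tails -> impacts
    skewness = np.mean((sig - mean) ** 3) / std ** 3
    crest = np.max(np.abs(sig)) / rms                  # spikiness of the signal
    shape = rms / np.mean(np.abs(sig))
    return np.array([rms, std, kurtosis, skewness, crest, shape])

# synthetic example: a healthy-like sine vs. the same sine with periodic impacts
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 3.0 * (np.sin(2 * np.pi * 5 * t) > 0.995)  # sparse impacts
f_h = time_domain_features(healthy)
f_f = time_domain_features(faulty)
```

The resulting feature vectors, one per signal segment, are what would be fed to the SVM (e.g. with grid-searched kernel parameters and cross-validation, as in the paper); the impacts show up as elevated kurtosis and crest factor.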
APA, Harvard, Vancouver, ISO, and other styles
2

Mohammadi, Rasul, Shahin Hashtrudi-Zad, and Khashayar Khorasani. "Hybrid Fault Diagnosis: Application to a Gas Turbine Engine." In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-60075.

Full text
Abstract:
This paper presents a hybrid framework for fault diagnosis of complex systems that are modeled by hybrid automata. A bank of residual generators is constructed based on the continuous models of the system. Each residual generator is modeled by a discrete-event system (DES). Next, the DES models of the residual generators and the DES model of the hybrid plant are combined to build an “extended DES” model. A hybrid diagnoser is constructed based on the extended DES model. The hybrid diagnoser effectively combines the readings of discrete sensors and the information supplied by the residual generators (which is based on continuous sensors) to determine the health status of the hybrid plant. The hybrid diagnosis approach is employed to investigate faults in the fuel supply system and the nozzle actuator of a single-spool turbojet engine with an afterburner.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Kai, and J. Tang. "Fuzzy Classification of Gear Fault Using Principal Component Analysis-Based Fuzzy Neural Network." In 2020 International Symposium on Flexible Automation. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/isfa2020-9632.

Full text
Abstract:
Abstract Condition assessment of machinery components such as gears is important to maintain their normal operation and thus benefits their life-cycle management. Data-driven approaches have been a promising way to perform such gear condition monitoring and fault diagnosis. In practical situations, gears exhibit a variety of fault types, some with continuous fault severities. The vibration data collected are often too limited to reflect all possible fault types. Therefore, there is a practical need to train on data with a few discrete fault severities and then infer fault severities in the general scenario. To achieve this, we develop a fuzzy neural network (FNN) model to classify the continuous severities of gear faults based on experimental measurements. Principal component analysis (PCA) is integrated with the FNN model to capture the main features of the time-series vibration signals with dimensionality reduction for the sake of computational efficiency. Systematic case studies are carried out to validate the effectiveness of the proposed methodology.
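The PCA dimensionality-reduction step can be sketched with an SVD, shown here on a made-up feature matrix (the sizes, the two latent factors, and the noise level are assumptions for illustration only).

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T             # reduced feature vectors
    explained = (S ** 2) / np.sum(S ** 2)         # variance ratio per component
    return scores, explained[:n_components]

# synthetic "vibration feature" matrix: 200 samples, 10 correlated channels
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                # 2 true underlying factors
mix = rng.normal(size=(2, 10))
X = latent @ mix + 0.05 * rng.normal(size=(200, 10))
Z, ratio = pca_reduce(X, 2)
```

Here two components capture nearly all of the variance, so Z (200 x 2) could stand in for the raw signals as input to a downstream classifier such as the paper's FNN.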
APA, Harvard, Vancouver, ISO, and other styles
4

Munir, Muhammad Ibrahim, Sajid Hussain, Ali Al-Alili, Reem Al Ameri, and Ehab El-Sadaany. "Fault Detection and Classification in Smart Grids Using Wavelet Analysis." In ASME 2020 14th International Conference on Energy Sustainability. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/es2020-1641.

Full text
Abstract:
Abstract One of the core features of the smart grid deemed essential for smooth grid operation is the detection and diagnosis of system failures. For a utility transmission grid system, these failures could manifest in the form of short circuit faults and open circuit faults. Due to the advent of the digital age, the traditional grid has also undergone a massive transition to digital equipment and modern sensors which are capable of generating large volumes of data. The challenge is to preprocess this data such that it can be utilized for the detection of transients and grid failures. This paper presents the incorporation of artificial intelligence techniques such as Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) to detect and comprehensively classify the most common fault transients within a reasonable range of accuracy. For gauging the effectiveness of the proposed scheme, a thorough evaluation study is conducted on a modified IEEE-39 bus system. Bus voltage and line current measurements are taken for a range of fault scenarios which result in high-frequency transient signals. These signals are analyzed using continuous wavelet transform (CWT). The measured signals are afterward preprocessed using Discrete Wavelet Transform (DWT) employing Daubechies four (Db4) mother wavelet in order to decompose the high-frequency components of the faulty signals. DWT results in a range of high and low-frequency detail and approximate coefficients, from which a range of statistical features are extracted and used as inputs for training and testing the classification algorithms. The results demonstrate that the trained models can be successfully employed to detect and classify faults on the transmission system with acceptable accuracy.
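The single-level DWT decomposition described above can be sketched with the classic four-tap Daubechies filter; note that which filter "Db4" denotes varies by toolbox (four taps vs. four vanishing moments), so the four-tap version and the periodic boundary handling here are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

# four-tap Daubechies low-pass filter; high-pass via the quadrature mirror relation
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])   # g[k] = (-1)^k h[3-k]

def dwt_level(x):
    """One DWT level with periodic extension: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = (np.arange(n)[:, None] + np.arange(4)[None, :]) % n
    windows = x[idx]              # each row: x[k], x[k+1], x[k+2], x[k+3]
    a = (windows @ h)[::2]        # low-pass filter, then downsample by 2
    d = (windows @ g)[::2]        # high-pass filter, then downsample by 2
    return a, d

x = np.sin(np.linspace(0.0, 4 * np.pi, 64, endpoint=False))  # smooth test signal
a, d = dwt_level(x)
```

Because the transform is orthogonal, the energy of x splits exactly between the two bands; a smooth signal lands almost entirely in the approximation band, and it is the detail-band statistics that spike during a fault transient, which is what the extracted features capture.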
APA, Harvard, Vancouver, ISO, and other styles
5

Etemaddar, Mahmoud, Elaheh Vahidian, and Otto Skjåstad. "Fatigue Damage to the Spar-Type Offshore Floating Wind Turbine Under Blade Pitch Controller Faults." In ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/omae2014-23235.

Full text
Abstract:
The safety and reliability margin of offshore floating wind turbines need to be higher than that of onshore wind turbines due to larger environmental loads and higher operational and maintenance costs for offshore wind turbines compared to onshore wind turbines. However rotor cyclic loads coupled with 6 DOFs motions of the substructure, amplifies the fatigue damage in offshore floating wind turbines. In general a lower fatigue design factor is used for offshore wind turbines compared to that of the stationary oil and gas platforms. This is because the consequence of a failure in offshore wind turbines in general is lower than that of the offshore oil and gas platforms. In offshore floating wind turbines a sub-system fault in the electrical system and blade pitch angle controller also induces additional fatigue loading on the wind turbine structure. In this paper effect of selected controller system faults on the fatigue damage of an offshore floating wind turbine is investigated, in a case which fault is not detected by a fault detection system due to a failure in the fault detection system or operator decided to continue operation under fault condition. Two fault cases in the blade pitch angle controller of the NREL 5MW offshore floating wind turbine are modeled and simulated. These faults include: bias error in the blade pitch angle rotary encoder and valve blockage or line disconnection in the blade pitch angle actuator. The short-term fatigue damage due to these faults on the composite blade root, steel low-speed shaft, tower bottom and hub are calculated and compared with the fatigue damage under normal operational conditions considering same environmental conditions for both cases. This comparison shows that how risky is to work under the fault conditions which could be useful for wind turbine operators. 
The servo-hydro-aeroelastic code HAWC2 is used to simulate the time-domain responses of the spar-type offshore floating wind turbine under normal and faulty operational conditions. The rain-flow cycle counting method is used to count the load cycles under normal operational and fault conditions. The short-term fatigue damage to the composite blade root and the steel structures is calculated for a 6-hour reference period. A bi-linear Goodman diagram and a linear SN curve are used to estimate the fatigue damage to the composite blade root and the steel structures, respectively. Moreover, the fatigue damage for different mean wind speeds, sea states and fault amplitudes is calculated to identify the region of operating wind speeds with the highest risk of damage.
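The damage-accumulation step this abstract describes (counted load cycles combined with a linear SN curve) is conventionally done with Miner's rule; the minimal sketch below is a generic illustration, and the SN-curve parameters and rain-flow bins are assumptions, not values from the paper:

```python
import math

def sn_cycles_to_failure(stress_range, log_a=12.0, m=3.0):
    """Cycles to failure from a linear SN curve: log10(N) = log_a - m*log10(S).
    log_a and m are illustrative placeholders, not the paper's values."""
    return 10.0 ** (log_a - m * math.log10(stress_range))

def miner_damage(cycle_counts):
    """Miner's rule: D = sum(n_i / N_i) over counted (stress range, cycles)
    bins, e.g. the bins produced by rain-flow counting."""
    return sum(n / sn_cycles_to_failure(s) for s, n in cycle_counts)

# e.g. two rain-flow bins: (stress range in MPa, number of cycles)
damage = miner_damage([(50.0, 1000), (100.0, 200)])
```

Failure is predicted when the accumulated damage reaches 1; the fatigue design factor mentioned in the abstract effectively tightens this allowable-damage criterion.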
APA, Harvard, Vancouver, ISO, and other styles
6

Thebian, Lama, Salah Sadek, Shadi Najjar, and Mounir Mabsout. "Finite Element Analysis of Offshore Pipelines Overlying Active Reverse Fault Rupture." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61496.

Full text
Abstract:
The paper investigates the behavior of buried offshore pipelines overlying active reverse faults using the finite element (FE) tool Abaqus. In the FE analyses, the pipeline is modeled with 3D shell elements and the soil with continuum elements. Equivalent boundary conditions are imposed at both pipeline ends to account for the elastic response in the far field away from the fault. Nonlinear materials, nonlinear interactions, and nonlinear geometries are adopted. The Mohr-Coulomb constitutive model with strain-softening is used to model the soil behavior, and true stress-strain properties are incorporated to model the response of the pipeline steel material. The effects of soil properties, pipeline diameter, diameter-to-thickness ratio, and fault displacement are investigated. The results focus on the pipeline deformations, the axial strains, and the buckling behavior of the pipeline with increasing vertical bedrock displacement. Critical compressive strains are calculated and compared with the DNV code provisions.
APA, Harvard, Vancouver, ISO, and other styles
7

Yan, Xuhua, Rosemary Norman, and Mohammed A. Elgendy. "Investigations Into Tidal Current Turbine System Faults and Fault Tolerant Control Strategies." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-18221.

Full text
Abstract:
In recent years, there has been growing interest in tidal current energy, as it is a potential source of green electricity generation and the most predictable form of ocean renewable energy. Due to the harsh marine environment, a Tidal Current Turbine (TCT) system has to be designed to be robust and to work reliably with high availability, to minimize the need for intervention. Thus, fault tolerant control strategies are needed to enable the system to continue operating under some fault conditions; this will reduce the power generation cost and also increase the system robustness. This paper introduces some of the different fault conditions that may occur in TCT systems, such as sensor faults, especially tidal current speed sensor faults. Potential solutions for these faults are then introduced. The paper then presents a standalone TCT generation system model with perturb and observe (P&O) control; this control aims to solve the tidal current speed sensor fault problem, ensuring that the system operates near the maximum power point (MPP) without the tidal current speed sensor. The control system is simulated using MATLAB/Simulink for a TCT utilizing a permanent magnet synchronous generator (PMSG) and a boost converter.
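The perturb and observe (P&O) scheme this abstract relies on can be sketched in a few lines; the power curve, step size, and duty-cycle control variable below are illustrative assumptions, not details of the authors' MATLAB/Simulink model:

```python
def perturb_and_observe(power, prev_power, duty, prev_duty, step=0.01):
    """One P&O iteration: keep perturbing the converter duty cycle in the
    same direction if output power rose, otherwise reverse direction."""
    if power > prev_power:
        direction = 1 if duty >= prev_duty else -1
    else:
        direction = -1 if duty >= prev_duty else 1
    return min(max(duty + direction * step, 0.0), 1.0)

def track_mpp(power_of_duty, duty0=0.1, iterations=200, step=0.01):
    """Run P&O against a power-vs-duty curve and return the final duty cycle."""
    prev_duty, duty = duty0, duty0 + step
    prev_power = power_of_duty(prev_duty)
    for _ in range(iterations):
        power = power_of_duty(duty)
        new_duty = perturb_and_observe(power, prev_power, duty, prev_duty, step)
        prev_duty, prev_power, duty = duty, power, new_duty
    return duty
```

Against a concave power curve peaking at duty = 0.5, the loop settles into the characteristic small oscillation around the maximum power point, using only power measurements and no tidal current speed sensor.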
APA, Harvard, Vancouver, ISO, and other styles
8

Yu, LiJie, Dan Cleary, Mark Osborn, and Vrinda Rajiv. "Information Fusion Strategy for Aircraft Engine Health Management." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-27174.

Full text
Abstract:
Modern aircraft engines are equipped with sophisticated sensing instruments to enable proactive condition monitoring and effective health management capability. Development of intelligent systems that efficiently process sensor and operational data, both onboard and off-board, to provide maintenance personnel with timely decision support is the key to minimizing flight service disruption and reducing engine ownership cost. The goal of this research is to develop a practical approach and strategy to leverage the various available information sources and modeling techniques to streamline the engine health management process and maximize system accuracy and efficiency. This paper demonstrates a flexible fusion architecture that encapsulates the key elements of the engine monitoring and diagnostic process: a sensor trend analysis module for anomaly detection, a feature selection and fault isolation module for root cause identification, a decision module for diagnostic model fusion and action determination, and finally a feedback module for knowledge validation and continuous learning. At the core of this engine health management system is a diagnostic fusion model designed around a common fault hierarchy, which captures both a priori probabilities and the interactions among the various engine faults isolated by different classification models. The fusion model resolves conflicting assessments from individual diagnostic models and provides a more accurate and comprehensive engine state estimate.
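One common way to realize the kind of fusion this abstract describes, combining a priori fault probabilities with the outputs of several classifiers, is a naive-Bayes combination; the sketch below is a generic illustration under an independence assumption, with hypothetical fault names, and is not the authors' fusion model:

```python
def fuse_diagnoses(priors, likelihoods):
    """Naive-Bayes fusion: combine a priori fault probabilities with the
    evidence scores of several classifiers, assuming model independence.
    priors: {fault: P(fault)}; likelihoods: list of {fault: P(evidence|fault)}."""
    posterior = {}
    for fault, prior in priors.items():
        p = prior
        for model in likelihoods:
            p *= model.get(fault, 1e-9)  # unseen fault gets a small floor
        posterior[fault] = p
    total = sum(posterior.values())
    return {fault: p / total for fault, p in posterior.items()}

# two classifiers disagree; the prior and the stronger evidence tip the fusion
posterior = fuse_diagnoses(
    {"bearing_wear": 0.7, "seal_leak": 0.3},
    [{"bearing_wear": 0.8, "seal_leak": 0.4},
     {"bearing_wear": 0.6, "seal_leak": 0.9}],
)
```

Normalizing over a shared fault list is what lets conflicting model assessments be resolved into a single posterior state estimate.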
APA, Harvard, Vancouver, ISO, and other styles
9

Odina, Lanre, and Roger Tan. "Seismic Fault Displacement of Buried Pipelines Using Continuum Finite Element Methods." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79739.

Full text
Abstract:
In deep waters, pipelines are usually installed exposed on the seabed, as burial is generally not required to ensure on-bottom stability. These exposed pipelines are nevertheless susceptible to seismic geohazards such as slope instability at scarp crossings, soil liquefaction, and fault movements, which may result in failure events, although larger diameter pipelines are generally known to have good tolerance to ground deformation, provided the seismic magnitudes are not too onerous. Regardless of the pipeline size, these seismic geohazard issues are usually addressed during the design stage by routing the pipeline to avoid such hazardous conditions, where possible. However, extreme environmental conditions such as hurricanes or tropical cyclones, typically experienced in the Gulf of Mexico and Asia-Pacific regions, can also cause exposed pipelines to be susceptible to large displacements and damage. Secondary stabilisation in the form of rock dump is sometimes employed to reduce the hydrodynamic loads from high turbidity currents acting on the pipeline. However, rock dumping on (or burying) the displaced pipeline over a fault line could again pose a threat to its integrity following a seismic faulting event. The assessment of a buried pipeline subjected to seismic faulting is traditionally carried out using analytical methods. Due to the limitations of these techniques for the large-deformation soil movement associated with fault displacement, non-linear finite element (FE) methods are widely used to assess the pipeline integrity. The FE analysis typically idealises the pipeline using discrete structural beam-type elements and the pipeline-soil interaction as discrete non-linear springs, based on the concept of subgrade reactions proposed by Winkler.
Recent research from offshore pipeline design activities in the arctic environment for ice gouge events has, however, suggested that the use of the discrete Winkler element model leads to over-conservative results in comparison to the coupled continuum model. The principal reason for the conservatism is the poor modeling of realistic surrounding soil behaviour for large deformation events. This paper discusses the application of continuum FE methods to model the fully coupled interaction between the seabed and the buried pipeline subject to ground movements at active seismic faults. Using the continuum approach, a more realistic mechanical response of the pipeline is established, which can be further utilised to confirm that calculated strains are within allowable limits.
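The Winkler idealization that the continuum approach is contrasted with can be sketched as a beam resting on uncoupled elastic springs; the finite-difference solver below uses illustrative parameters, linear springs, and simply supported ends, and is not the authors' FE model:

```python
import numpy as np

def winkler_beam_deflection(EI=1.0, k=100.0, q=1.0, length=10.0, n=201):
    """Deflection of a uniformly loaded beam on a Winkler foundation,
    EI*w'''' + k*w = q, with simply supported ends, via central differences."""
    h = length / (n - 1)
    A = np.zeros((n, n))
    b = np.full(n, q)
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) * EI / h**4
    for i in range(2, n - 2):                # interior beam equation
        A[i, i - 2:i + 3] = stencil
        A[i, i] += k                          # Winkler spring reaction
    A[0, 0] = 1.0; b[0] = 0.0                 # w = 0 at the left support
    A[n - 1, n - 1] = 1.0; b[n - 1] = 0.0     # w = 0 at the right support
    A[1, 0:3] = [1.0, -2.0, 1.0]; b[1] = 0.0  # w'' = 0 (zero end moment)
    A[n - 2, n - 3:n] = [1.0, -2.0, 1.0]; b[n - 2] = 0.0
    return np.linalg.solve(A, b)
```

Away from the supports the deflection approaches the long-beam limit q/k, a quick sanity check on the spring idealization; the continuum models discussed in the paper replace these uncoupled springs with a full soil mesh precisely because the springs cannot capture large-deformation soil behaviour.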
APA, Harvard, Vancouver, ISO, and other styles
10

Syed, Bilal Saeed, Jagannath Mukherjee, Tadeo Ditia, Abdulla Seliem, Abdulla Saad Al Kobaisi, Ben Andrews, Alejandro Jaramillo, and Phil Norlund. "Interpreting Subtle Faults and Multiple Horizons Layers Using Machine Learning and Data Driven Approach." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211707-ms.

Full text
Abstract:
The Lower Cretaceous Thamama Group is one of the proven and productive hydrocarbon-bearing intervals within the Abu Dhabi area. The geological setting of the Thamama Group is complex due to the nature of its structural evolution in a strike-slip regime with high-angle faults and subtle vertical displacement. Additionally, the stratigraphy of the Thamama Group consists of multiple stacked carbonate reservoirs separated by various anhydrite layers. Some of the carbonate layers cannot be interpreted properly due to the data resolution and the quality of the seismic reflectors. Because of these geological complexities, using the existing 3D seismic data to interpret faults and horizons within the time frame for critical decision making can be extremely challenging. Vertical displacement of a fault often cannot be easily recognized from a seismic section, and auto-tracking horizon interpretation does not produce good results due to the lack of reflector continuity. In the specific area of this study, the main geological challenge was to interpret the N45W subtle faults developed in the central region of the seismic volume. To address this challenge, we first applied an existing machine-learning model proposed by Jiang and Norlund (2020) to pre-train a multi-channel convolutional neural network on a set of synthetic seismic volumes (over 200 different cases) that resemble multiple geological scenarios with structural characteristics similar to those observed in the study area. We utilize a new point-based method that leverages a network analysis technique to automatically extract fault surfaces from fault imaging volumes. For horizon interpretation we use a new Assisted Horizon Interpretation workflow we are developing. This approach is good for identifying and tracking larger, more continuous surfaces throughout a seismic volume. It results in many output horizons that fill the entire seismic cube; we refer to this as "Dense Horizon" extraction.
This portion of the project uses a deterministic approach for identifying and tracking patches throughout the entire seismic volume, and then uses several automated and manual tools to join those patches to generate complete surfaces. The results achieved in this study helped to improve the quality of fault and horizon generation in a complex geological area, with improved efficiency and accuracy.
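The patch-joining step described above is naturally expressed with a union-find (disjoint-set) structure; the sketch below is a generic illustration, with patch ids and links as hypothetical inputs rather than the authors' actual data structures:

```python
def merge_patches(n_patches, links):
    """Union-find (disjoint-set) merging of horizon patches: any two patches
    joined by an automated or manual link end up with the same surface label."""
    parent = list(range(n_patches))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving keeps trees shallow
            i = parent[i]
        return i

    for a, b in links:
        parent[find(a)] = find(b)
    return [find(i) for i in range(n_patches)]

# patches 0-1 and 3-4 are joined; patch 2 remains its own surface
labels = merge_patches(5, [(0, 1), (3, 4)])
```

The same connected-components idea underlies network-analysis extraction of fault surfaces from fault imaging volumes: linked segments collapse into one surface id.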
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Continuum fault models"

1

Wozniakowska, P., D. W. Eaton, C. Deblonde, A. Mort, and O. H. Ardakani. Identification of regional structural corridors in the Montney play using trend surface analysis combined with geophysical imaging, British Columbia and Alberta. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328850.

Full text
Abstract:
The Western Canada Sedimentary Basin (WCSB) is a mature oil and gas basin with an extraordinary endowment of publicly accessible data. It contains structural elements of varying age, expressed as folding, faulting, and fracturing, which provide a record of tectonic activity during basin evolution. Knowledge of the structural architecture of the basin is crucial to understand its tectonic evolution; it also provides essential input for a range of geoscientific studies, including hydrogeology, geomechanics, and seismic risk analysis. This study focuses on an area defined by the subsurface extent of the Triassic Montney Formation, a region of the WCSB straddling the border between Alberta and British Columbia, and covering an area of approximately 130,000 km2. In terms of regional structural elements, this area is roughly bisected by the east-west trending Dawson Creek Graben Complex (DCGC), which initially formed in the Late Carboniferous, and is bordered to the southwest by the Late Cretaceous - Paleocene Rocky Mountain thrust and fold belt (TFB). The structural geology of this region has been extensively studied, but structural elements compiled from previous studies exhibit inconsistencies arising from distinct subregions of investigation in previous studies, differences in the interpreted locations of faults, and inconsistent terminology. Moreover, in cases where faults are mapped based on unpublished proprietary data, many existing interpretations suffer from a lack of reproducibility. In this study, publicly accessible data - formation tops derived from well logs, LITHOPROBE seismic profiles and regional potential-field grids, are used to delineate regional structural elements. Where seismic profiles cross key structural features, these features are generally expressed as multi-stranded or en echelon faults and structurally-linked folds, rather than discrete faults. 
Furthermore, even in areas of relatively tight well control, individual fault structures cannot be discerned in a robust manner, because the spatial sampling is insufficient to resolve fault strands. We have therefore adopted a structural-corridor approach, where structural corridors are defined as laterally continuous trends, identified using geological trend surface analysis supported by geophysical data, that contain co-genetic faults and folds. Such structural trends have been documented in laboratory models of basement-involved faults, and some types of structural corridors have been described as flower structures. The distinction between discrete faults and structural corridors is particularly important for induced seismicity risk analysis, as the hazard posed by a single large structure differs from the hazard presented by a corridor of smaller pre-existing faults. We have implemented a workflow that uses trend surface analysis based on formation tops, with extensive quality control, combined with validation using available geophysical data. Seven formations are considered, from the Late Cretaceous Basal Fish Scale Zone (BFSZ) to the Wabamun Group. This approach helped to resolve the problem of the limited spatial extent of available seismic data and provided broader spatial coverage, enabling the investigation of structural trends throughout the entirety of the Montney play. In total, we identified 34 major structural corridors and a number of smaller-scale structures, for which a GIS shapefile is included as a digital supplement to facilitate the use of these features in other studies. Our study also outlines two buried regional foreland lobes of the Rocky Mountain TFB, both north and south of the DCGC.
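The trend surface analysis underpinning the corridor mapping can be sketched as a polynomial least-squares fit to formation-top elevations, with residuals highlighting candidate structures; the example below is a generic illustration with synthetic data, not the authors' workflow:

```python
import numpy as np

def trend_surface_residuals(x, y, z, degree=1):
    """Fit a polynomial trend surface z ~ f(x, y) by least squares and return
    the residuals; laterally aligned residual anomalies flag candidate
    structural corridors."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - A @ coeffs

# formation tops sampled on a grid; a purely planar trend is removed exactly
x = np.repeat(np.linspace(0.0, 1.0, 10), 10)
y = np.tile(np.linspace(0.0, 1.0, 10), 10)
residuals = trend_surface_residuals(x, y, 2.0 + 3.0 * x - y)
```

On real formation-top data the residuals are nonzero, and it is their spatial organization, validated against seismic profiles and potential-field grids, that delineates corridors rather than discrete faults.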
APA, Harvard, Vancouver, ISO, and other styles
2

Hayward, N., and S. Paradis. Geophysical reassessment of the role of ancient lineaments on the development of the western margin of Laurentia and its sediment-hosted Zn-Pb deposits, Yukon and Northwest Territories. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330038.

Full text
Abstract:
The role of crustal lineaments in the development of the western margin of Laurentia, Selwyn basin and associated sediment-hosted Zn-Pb deposits (clastic-dominated, Mississippi-Valley-type) in Yukon and NWT, are reassessed through a new 3-D inversion strategy applied to new compilations of gravity and magnetic data. Regionally continuous, broadly NE-trending crustal lineaments including the Liard line, Fort Norman structure, and Leith Ridge fault, were interpreted as having had long-standing influence on craton, margin, and sedimentary basin development. However, multiple tectonic overprints including terrane accretion, thrust faulting, and plutonism obscure the region's history. The Liard line, related to a transfer fault that bounds the Macdonald Platform promontory, is refined from the integration of the new geophysical models with published geological data. The geophysical models support the continuity of the Fort Norman structure below the Selwyn basin, but the presence of Leith Ridge fault is not supported in this area. The ENE-trending Mackenzie River lineament, traced from the Misty Creek Embayment to Great Bear Lake, is interpreted to mark the southern edge of a cratonic promontory. The North American craton is bounded by a NW-trending lineament interpreted as a crustal manifestation of lithospheric thinning of the Laurentian margin, as echoed by a change in the depth of the lithosphere-asthenosphere boundary. The structure is straddled by Mississippi Valley-type Zn-Pb occurrences, following their palinspastic restoration, and also defines the eastern limit of mid-Late Cretaceous granitic intrusions. Another NW-trending lineament, interpreted to be associated with a shallowing of lower crustal rocks, is coincident with clastic-dominated Zn-Pb occurrences.
APA, Harvard, Vancouver, ISO, and other styles
3

Rousseau, Henri-Paul. Gutenberg, L’université et le défi numérique. CIRANO, December 2022. http://dx.doi.org/10.54932/wodt6646.

Full text
Abstract:
Introduction. Over the past two millennia there have been several ways of preserving, transmitting and even creating knowledge: the oral tradition, the handwritten manuscript, the printed text and the digitized text. The oral tradition and the manuscript dominated for more than 1,400 years, until the appearance of the printed book in 1451, the result of Gutenberg's mechanical invention. It would take a little more than 550 years before the invention of the electronic medium in turn displaced the printed book, reaching an unprecedented scale thanks to the contemporary digital revolution, the product of the interweaving of computing, robotics and data science. The first universities, born in the West in the Middle Ages, developed this oral tradition of knowledge while multiplying the use of the manuscript, thereby creating true communities of masters and students; the advent of printing allowed universities to multiply, and the spoken and written word continued to play a decisive role in the creation and transmission of knowledge even as the "medium" evolved from manuscript to print and then to digital. Over all those years, the university model was refined and perfected along what was, on the whole, a fairly linear trajectory, broadening its role from education to research and innovation and multiplying the disciplines offered and the clienteles served. The university of each university town became a flourishing institution, indispensable to the town's international standing, to the point that its contribution is often measured by the size of its student body, the footprint of its campuses and the scale of its specialized libraries; it is, however, the renown of its researchers that seals the reputation of each university over this long trajectory, during which academic freedom was able to become established.
"Academic freedoms borrowed much from ecclesiastical liberties": students and masters, whether or not they were men of the Church, were treated as clerics subject only to ecclesiastical justice, which was reputed to be more equitable. But they also largely escaped local ecclesiastical justice, being answerable only to their own institution (the professors and the rector, the elected head of the university) or to the pope and his delegates. Academic freedoms thus marked the emergence of a law of their own, which gave masters and students a place apart in society. This law was the same throughout the West for all who belonged to those supranational institutions that the first universities, by their very essence, were. At the end of the Middle Ages, the rise of nation states forced academic freedoms to fit into this new political framework, as mere practices derogating from common law and always subject to revision. A venerable vestige of ancient independence and a privilege granted by the prince, they thenceforth had an ambiguous status. The digital revolution would come to undermine that status. Indeed, the digital revolution disrupts the university's long linear trajectory by stripping it of its quasi-monopoly on the preservation and sharing of knowledge, because it makes access to information, knowledge and data easier and, on the whole, less costly. Digital technology is as revolutionary as print was, and its influence on the university will be just as considerable, for this revolution radically affects every sector of the economy by accelerating the robotization and digitization of the processes by which goods and services are created, manufactured and distributed.
These innovations use radio-frequency identification (RFID), which makes it possible to store and retrieve data about objects remotely, and the Internet of Things, which allows objects to be connected automatically to communication networks. They intersect with virtual reality technologies, intelligent algorithms and artificial intelligence, and literally flood institutions and organizations with data that must then be analyzed, managed and protected. The digital world has been born, and with it a whole series of radically new skills has emerged that the students, teachers and researchers of our universities must quickly master in order to navigate this New World, to work in it and to help make it more humane and more equitable. Indeed, every sector of commercial, economic, cultural or social activity already clearly demands digital and technological knowledge and skills from all participants in the labour market. In this new industrial logic of the digital world, the winners are already well identified. They are the famous GAFAM (Google, Apple, Facebook, Amazon and Microsoft), followed closely by the NATU (Netflix, Airbnb, Tesla and Uber) and by the Chinese digital giants, the BATX (Baidu, Alibaba, Tencent and Xiaomi). These giants are fueled by the research, innovations and mobile applications (apps) created by the partners in their ecosystems, which bring together, on various corporate campuses, many of the minds at the heart of this digital revolution. The university thus finds its traditional capacity to attract, retain and promote the architects of tomorrow's world called into question. Its ability to train critical minds and to contribute to the transmission of universal values is likewise shaken by this tsunami of change.
It must be acknowledged, however, that the faculties of medicine, engineering and natural sciences in the United States, which developed close, extensive and sustained contacts with hospitals, large companies and public administration as early as the end of the 19th century, have been better placed than many others to recruit and retain talented people. They have contributed enormously to the advancement of scientific knowledge and of education in the applied sciences. The extraordinary concentration of scientific Nobel Prizes in the United States is very convincing in this regard. The contemporary digital revolution is also occurring at the very moment when great upheavals are striking the planet: the climate emergency, aging populations, "deglobalization", population displacements, wars, pandemics, and the crises of inequality, ethics and democracy. These upheavals challenge academics, which is why their community must adopt a raison d'être and thereby renew its mission in order to better respond to these civilizational issues. This community must not only equip itself with a vision and with modes of operation adapted to the new realities of digital technologies, but must also take these great upheavals into account. All of this obliges it to integrate into ecosystems where knowledge is shared and where new skills must be rapidly acquired. The purpose of this text is to better define the scale of the challenge that the digital world poses to the university community and to propose a few ideas to feed academics' reflection as they adapt to the digital world. My deepest conviction is that the digital revolution will have impacts on our societies and our civilization as great as those brought about by the invention of printing and its industrialization in the 15th century.
That is why the first section of this document is devoted to a historical review of Gutenberg's printing revolution, while the second section illustrates how the characteristics of the digital revolution support this deep conviction. A third section provides more detail on the challenge of adaptation that the digital world poses to universities, while the fourth section sketches the contours of the paradigm shift that this adaptation will impose. The fifth section presents a dream scenario that conveys the scale of the change management awaiting academics. The conclusion returns to a few key concepts and principles to guide the path toward action. The university can no longer be "above and alone"; it must be "at the centre and with" ecosystems of multiple partnerships, in a hybrid physical/virtual model. In this way it can preserve its historic leadership as a sentinel of the knowledge of a complex world, continue to establish the authenticity of facts, and uphold the necessary rigour of science and objectivity.
APA, Harvard, Vancouver, ISO, and other styles
