Academic literature on the topic 'Friday 13th failure modelling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Friday 13th failure modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Friday 13th failure modelling"

1

Abdul-Halim, Nadiya, and Kenneth R. Davey. "A Friday 13th risk assessment of failure of ultraviolet irradiation for potable water in turbulent flow." Food Control 50 (April 2015): 770–77. http://dx.doi.org/10.1016/j.foodcont.2014.10.036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Davey, Kenneth R., Saravanan Chandrakash, and Brian K. O’Neill. "A Friday 13th failure assessment of clean-in-place removal of whey protein deposits from metal surfaces with auto-set cleaning times." Chemical Engineering Science 126 (April 2015): 106–15. http://dx.doi.org/10.1016/j.ces.2014.12.013.

3

Jacobs, Jeffrey P. "Introduction: December 2015 HeartWeek Issue of Cardiology in the Young – Highlights of HeartWeek 2015: Challenges and Dilemmas of Pediatric Cardiac Care including Heart Failure in Children and Congenital Abnormalities of the Coronary Arteries." Cardiology in the Young 25, no. 8 (December 2015): 1441–55. http://dx.doi.org/10.1017/s1047951115002310.

Abstract:
This December Issue of Cardiology in the Young represents the 13th annual publication in Cardiology in the Young generated from the two meetings that composed “HeartWeek in Florida”. “HeartWeek in Florida”, the joint collaborative project sponsored by the Cardiac Centre at the Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, together with Johns Hopkins All Children’s Heart Institute of Saint Petersburg, Florida, averages over 1000 attendees every year and is now recognised as one of the major planks of continuing medical and nursing education for those working in the fields of diagnosis and treatment of cardiac disease in the foetus, neonate, infant, child, and adult. “HeartWeek in Florida” combines the International Symposium on Congenital Heart Disease, organised by All Children’s Hospital and Johns Hopkins Medicine, and entering its 16th year, with the Annual Postgraduate Course in Paediatric Cardiovascular Disease, organised by The Children’s Hospital of Philadelphia entering its 19th year. This December 2015 Issue of Cardiology in the Young features highlights of the two meetings that compose HeartWeek. Johns Hopkins All Children’s Heart Institute’s 15th Annual International Symposium on Congenital Heart Disease was held at the Renaissance Vinoy Resort & Golf Club, Saint Petersburg, Florida, from Friday, 6 February, 2015, to Monday, 9 February, 2015. This Symposium was co-sponsored by The American Association for Thoracic Surgery and its special focus was “Congenital Abnormalities of the Coronary Arteries”.
The Children’s Hospital of Philadelphia’s annual meeting – Cardiology 2015, the 18th Annual Update on Paediatric and Congenital Cardiovascular Disease: “Challenges and Dilemmas” – was held at the Hyatt Regency Scottsdale Resort and Spa at Gainey Ranch, Scottsdale, Arizona, from Wednesday, 11 February, 2015, to Sunday, 15 February, 2015. We would like to acknowledge the tremendous contributions made to paediatric and congenital cardiac care by Juan Valentín Comas, MD, PhD (13 May, 1960 to 16 June, 2015) and Donald Nixon Ross, FRCS (4 October, 1922 to 7 July, 2014); and therefore, we dedicate this December 2015 HeartWeek Issue of Cardiology in the Young to them.

Dissertations / Theses on the topic "Friday 13th failure modelling"

1

Abdul-Halim, Nadiya. "Quantitative Fr 13 Failure Modelling of UV Irradiation for Potable Water Production – Demonstrated with Escherichia coli." Thesis, 2017. http://hdl.handle.net/2440/119334.

Abstract:
Steady-state ultraviolet (UV) irradiation for potable water production is becoming an important global alternative to traditional disinfection by chlorination. Failure of UV to reduce the number of viable contaminant pathogens however can lead to enduring health legacies (with or without fatalities). To better understand vulnerability of UV operations to failure, the probabilistic Fr 13 risk framework of Davey and co-workers1 is applied for the first time in this thesis. Fr 13 is predicated on underlying chemical engineering unit-operations. It is based on the hypothesis that naturally occurring, chance (stochastic) fluctuations about the value of ‘set’ process parameters can unexpectedly combine and accumulate in one direction and leverage significant change across a binary ‘failure– not failure’ boundary. Process failures can result from the accumulation of these fluctuations within an apparent steady-state process itself. That is to say, even with good design and operation of plant, there can be unexpected (surprise and sudden) occasional failures without ‘human error’ or ‘faulty fittings’. Importantly, the impact of these naturally occurring random fluctuations is not accounted for explicitly in traditional chemical engineering. Here, the Fr 13 risk framework is applied for the first time to quantitatively assess operations of logically increasing complexity, namely, a laminar flow-through UV reactor, with turbulent flow in a concentric annular-reactor, both with and without suspended solids present (Davey, Abdul-Halim and Lewis, 2012; Davey and Abdul-Halim, 2013; Abdul-Halim and Davey, 2015; 2016), and; a two-step ‘global’ risk model of combined rapid-sand-filtration and UV irradiation (SF-UV) (Abdul-Halim and Davey, 2017). The work is illustrated with extensive independent data for the survival of viable Escherichia coli - a pathogenic species of faecal bacteria widely used as an indicator for health risk. 
A logical and step-wise approach was implemented as a research strategy. UV reactor unit-operations models are first synthesized and developed. A failure factor is defined in terms of the design reduction and actual reduction in viable E. coli contaminants. UV reactor operation is simulated using a refined Monte Carlo (with Latin Hypercube) sampling of UV lamp intensity (I), suspended solids concentrations [conc] and water flow (Q). A preliminary Fr 13 failure simulation of a single UV reactor unit-operation (one-step), developed for both simplified laminar flow and turbulent flow models, showed vulnerability to failure with unwanted survival of E. coli of, respectively, 0.4 % and 16 %, averaged over the long term, of all apparently successful steady-state continuous operations. A practical tolerance, as a design margin of safety, of 10 % was assumed. Results from applied ‘second-tier’ studies to assess re-design to improve UV operation reliability and safety and to reduce vulnerability to Fr 13 failure showed that any increased costs to improve control and reduce fluctuations in raw feed-water flow, together with reductions in UV lamp fluence, would be readily justified. The Fr 13 analysis was shown to be an advance on alternate risk assessments because it produced all possible practical UV outcomes, including failures. A more developed and practically realistic model for UV irradiation for potable water production was then synthesized to investigate the impact of the presence of suspended solids (SS) (median particle size 23 μm) as UV shielding and UV absorbing agents, on overall UV efficacy. Simulations showed that some 32.1 % and 43.7 %, respectively, of apparently successful operations could unexpectedly fail over the long term due to the combined impact of random fluctuations in feed-water flow (Q), lamp intensity (I0) and shielding and absorption of UV by SS [conc].
This translated to four (4) failures each calendar month (the comparison rate without suspended solids was two (2) failures per month). Results highlighted that the efficacy of UV irradiation decreased with the presence of SS to 1-log10 reduction, compared with a 4.35-log10 reduction without solids present in the raw feed-water. An unexpected outcome was that UV failure is highly significantly dependent on naturally occurring fluctuations in the raw feed-water flow, and not on fluctuations in the concentration of solids in the feed-water. It was found that the initial presence of solids significantly reduced the practically achievable reductions in viable bacterial contaminants in the annular reactor, but that fluctuations in concentration of solids in the feed-water did not meaningfully impact overall vulnerability of UV efficacy. This finding pointed to a pre-treatment that would be necessary to remove suspended solids prior to the UV reactor, and; the necessity to improve control in feed-water flow to reduce fluctuations. The original synthesis was extended therefore for the first time to include a rapid sand-filter (SF) for pre-treatment of the raw feed-water flow to the UV reactor, and; a Fr 13 risk assessment on both the SF, and sequential, integrated rapid sand-filtration and UV reactor (SF-UV). For the global two-step SF-UV results showed vulnerability to failure of some 40.4 % in overall operations over the long term with a safety margin (tolerance) of 10 %. Pre-treatment with SF removed SS with a mean of 1-log10 reduction (90 %). Subsequently, an overall removal of viable E. coli from the integrated SF-UV reactor was a 3-log10 reduction (99.9 %). This is because the efficacy of UV light to penetrate and inactivate viable E. coli, and other pathogens, is not inhibited by SS in the UV reactor. This showed that the physical removal of E. 
coli was accomplished by a properly functioning SF and subsequently disinfection was done by UV irradiation to inactivate viable E. coli in the water. Because the Regulatory standard for potable water is a 4-log10 reduction, it was concluded that flocculation and sedimentation prior to SF was needed to exploit these findings. Flocculation is a mixing process to increase particle size from submicroscopic microfloc to visible suspended particles prior to sedimentation and SF. This research will aid understanding of factors that contribute to UV failure and increase confidence in UV operations. It is original, and not incremental, work. Findings will be of immediate interest to risk analysts, water processors and designers of UV reactors for potable water production.
Thesis (Ph.D.) -- University of Adelaide, School of Chemical Engineering & Advanced Materials, 2017
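The Fr 13 mechanics the abstract describes (stochastic fluctuations about 'set' process parameters, Monte Carlo sampling, and a failure factor that flags runs crossing the failure boundary) can be illustrated with a toy simulation. Everything below is our own simplification, not the thesis model: the dose law, the set points, the fluctuation magnitudes, and the normalisation are illustrative assumptions; only the 10 % tolerance and the 4-log10 design target echo values mentioned in the abstract.

```python
import random

# Toy Fr 13-style Monte Carlo sketch -- NOT the thesis model. Assumptions
# (ours): delivered UV dose scales with lamp intensity I and inversely with
# feed-water flow Q; the log10 reduction in viable E. coli is proportional
# to dose; fluctuations are Gaussian with illustrative standard deviations.

def simulate_fr13(n=100_000, tol=0.10, seed=42):
    rng = random.Random(seed)
    design_log_red = 4.0                   # design log10 reduction at the set points
    failures = 0
    for _ in range(n):
        # naturally occurring stochastic fluctuations about the 'set' values
        intensity = rng.gauss(1.0, 0.02)   # normalised lamp intensity I
        flow = rng.gauss(1.0, 0.05)        # normalised feed-water flow Q
        achieved = design_log_red * intensity / flow
        # failure factor: p > 0 flags a run whose achieved reduction falls
        # short of the design reduction by more than the practical tolerance
        p = design_log_red * (1.0 - tol) - achieved
        if p > 0:
            failures += 1
    return failures / n

print(f"fraction of apparently steady-state runs that fail: {simulate_fr13():.2%}")
```

The single-signed failure factor (all p > 0 are failures) mirrors the convention used across the Davey group's Fr 13 papers: the design value, the tolerance, and the stochastic outcome collapse into one sign test per simulated operation.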
2

Chandrakash, Saravanan. "A new risk analysis of clean-in-place (CIP) milk processing." Thesis, 2012. http://hdl.handle.net/2440/76140.

Abstract:
The food and pharmaceutical industry are generally a nation’s largest manufacturing sector – and importantly one of the most stable. Clean-In-Place (CIP)² is a ubiquitous process in milk processing as thorough cleaning of wet surfaces of equipment is an essential part of daily operations. Faulty cleaning can have serious consequences as milk acts as an excellent substrate in which unwanted micro-organisms can grow and multiply rapidly. Davey & Cerf (2003) introduced the notion of Friday 13th Syndrome³ i.e. the unexpected failure of a well-operated process plant by novel application of Uncertainty Failure Modelling (Davey, 2010; 2011). They showed that failure cannot always be put down to human error or faulty fittings but could be as a result of stochastic changes inside the system itself. In this study a novel CIP failure model based on the methodology of Davey and co-workers is developed using the published models of Bird & Fryer (1991); Bird (1992) and Xin (2003); Xin, Chen & Ozkan (2004) for the first time. The aim was to gain insight into conditions that may lead to unexpected failure of an otherwise well-operated CIP plant. CIP failure is defined as failure to remove proteinaceous deposits on wet surfaces in the auto-set cleaning time. The simplified two-stage model of Bird & Fryer (1991) and Bird (1992) was initially investigated. This model requires input of the thickness of the deposit (δ = 0.00015 m) and the temperature and Re of the cleaning solution (1.0-wt% NaOH). The deposit is considered as two layers: an upper layer of swelled deposit which can be removed (xδ) by the shear from the circulating cleaning solution and a lower layer (yδ) that is not yet removable. The output parameters of particular interest are the rate of deposit removal (R) and total cleaning time (t[subscript]T) needed to remove the deposit. The more elaborate three-stage model of Xin (2003) and Xin, Chen & Ozkan (2004) is based on a polymer dissolution process. 
This model requires input values of temperature of the cleaning solution (T), critical mass of the deposit (m[subscript]c) and cleaning rate (R[subscript]m). The output parameters of particular interest are the rate of removal during swelling and uniform stage (R[subscript]SU), the rate of removal during decay stage (R[subscript]D) and the total cleaning time needed to remove the deposit (t[subscript]T). The two CIP models are appropriately formatted and simulations used to validate them as a unit-operation. A risk factor (p) together with a practical process tolerance is defined in terms of the auto-set CIP time to remove a specified deposit and the actual cleaning time as affected by stochastic changes within the system (t[subscript]T'). This is computationally convenient as it can be articulated so that all values p > 0 highlight an unwanted outcome i.e. a CIP failure. Simulations for the continuous CIP unit-operation are carried out using Microsoft Excel™ spreadsheet with an add-in @Risk™ (pronounced ‘at risk’) version 5.7 (Palisade Corporation) with some 100,000⁴ iterations from Monte Carlo sampling of input parameters. A refined Latin Hypercube sampling is used because ‘pure’ Monte Carlo samplings can both over- and under-sample from various parts of a distribution. Values of the input parameters took one of two forms. The first was the traditional Single Value Assessment (SVA) as defined by Davey (2011) in which a single, ‘best guess’ or mean value of the parameter is used. The output therefore is a single value. The alternate form was a Monte Carlo Assessment (MCA) (Davey, 2011) in which the ‘best guess’ values take the form of a probability distribution around the mean value. Many thousands of randomly sampled values for each input parameter are obtained using Monte Carlo sampling. Generally, in QRA the input parameters take the form of a distribution of values.
The output therefore is a distribution of values with each assigned a probability of actually occurring. The values of all inputs are carefully chosen for a realistic simulation of CIP. Results reveal that a continuous CIP unit-operation is actually a mix of successful cleaning operations along with unsuccessful ones, and that these can tip unexpectedly. For example for the unit-operations model of Bird & Fryer (1991) and Bird (1992) failure to remove a proteinaceous milk deposit (δ = 0.00015 m) can occur unexpectedly in 1.0% of all operations when a tolerance of 6% is allowed on the specified auto-set cleaning time (t[subscript]T = 914 s) with a cleaning solution temperature of 60 °C. Using Xin, Chen & Ozkan (2004) model as the underlying unit-operation some 1.9% of operations at a nominal mid-range cleaning solution temperature of 75 °C could fail with a tolerance of 2% on the auto-set CIP time (t[subscript]T = 448 s). Extensive analyses of comparisons of the effect of structure of the two CIP unit-operations models on predictions at similar operating conditions i.e. 2% tolerance on the auto-set clean time (~ 656 s) and 1%-sd in the nominal mean temperature of the NaOH cleaning solution at 65 °C, highlighted that the underlying vulnerability to failure of the simplified model of Bird & Fryer (1991) and Bird (1992) was 1.8 times that of the more elaborate model of Xin (2003) and Xin, Chen & Ozkan (2004). The failure analysis presented in this thesis represents a significant advance over traditional analysis in that all possible practical scenarios that could exist operationally are computed and rigorous quantitative evidence is produced to show that a continuous CIP plant is actually a mix of failed cleaning operations together with successful ones. This insight is not available from traditional methods (with or without sensitivity analysis). Better design and operating decisions can therefore be made because the engineer has a picture of all possible outcomes. 
The quantitative approach and insight presented here can be used to test re-designs to reduce cleaning failure through changes to the plant including improved temperature and auto-set time control methods.
² See Appendix A for a definition of some important terms used in this research.
³ Unexpected (unanticipated) failure in plant or product of a well-operated, well-regulated unit-operation.
⁴ Experience with the models highlighted that stable output values would be obtained with 100,000 iterations (or CIP ‘scenarios’).
Thesis (M.Eng.Sc.) -- University of Adelaide, School of Chemical Engineering, 2012
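The risk-factor construction in this abstract (p defined so that all p > 0 flag a CIP failure, with a tolerance on the auto-set cleaning time) can be sketched as follows. Function and parameter names are ours; the 3 % fluctuation in actual cleaning time is an illustrative assumption, and plain Monte Carlo sampling stands in for the thesis's refined Latin Hypercube sampling. Only the auto-set time of 914 s and the 6 % tolerance are taken from the abstract.

```python
import random

# Hedged sketch of the risk-factor idea: a CIP run fails when the actual
# cleaning time t_T', perturbed by stochastic changes within the plant,
# exceeds the auto-set time t_T plus a practical tolerance.

def cip_risk_factor(t_actual, t_autoset, tol):
    """p > 0 flags a CIP failure (deposit not removed in the auto-set time)."""
    return t_actual - t_autoset * (1.0 + tol)

def fraction_failing(n=100_000, t_autoset=914.0, sd_frac=0.03, tol=0.06, seed=7):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        # assumed Gaussian fluctuation of the actual cleaning time (ours);
        # the thesis samples the underlying model inputs instead
        t_actual = rng.gauss(t_autoset, sd_frac * t_autoset)
        if cip_risk_factor(t_actual, t_autoset, tol) > 0:
            fails += 1
    return fails / n

print(f"{fraction_failing():.2%} of simulated CIP operations fail")
```

The convenience the abstract notes is visible here: whatever the underlying unit-operation model, failure reduces to a single sign test on p per iteration, so success and failure fractions fall directly out of the sampled distribution.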
3

Chu, James Yick Gay. "Synthesis and experimental validation of a new probabilistic strategy to minimize heat transfers used in conditioning of dry air in buildings with fluctuating ambient and room occupancy." Thesis, 2017. http://hdl.handle.net/2440/114256.

Abstract:
Steady-state unit-operations are globally used in chemical engineering. Advantages include ease of control and a uniform product quality. Nonetheless there will be naturally occurring, random (stochastic) fluctuations about any steady-state ‘set’ value of a process parameter. Traditional chemical engineering does not explicitly take account of these. This is because, generally, fluctuation in one parameter appears to be off-set by change in another – with the process outcome remaining apparently steady. However Davey and co-workers (e.g. Davey et al., 2015; Davey, 2015 a; Zou and Davey, 2016; Abdul-Halim and Davey, 2016; Chandrakash and Davey, 2017 a) have shown these naturally occurring fluctuations can accumulate and combine unexpectedly to leverage significant impact and thereby make apparently well-running processes vulnerable to sudden and surprise failure. They developed a probabilistic and quantitative risk framework they titled Fr 13 (Friday 13th) to underscore the nature of these events. Significantly, the framework can be used in ‘second-tier’ studies for re-design to reduce vulnerability to failure. Here, this framework is applied for the first time to show how naturally occurring fluctuations in peak ambient temperature (T₀) and occupancy (room traffic flows) (Lᴛ) can impact heat transfers for conditioning of room air. The conditioning of air in large buildings, including hotels and hospitals, is globally important (Anon., 2012 a). The overarching aim is to quantitatively ‘use’ these fluctuations to develop a strategy for minimum energy. A justification is that methods that permit quantitative determination of reliable strategies for conditioning of air can lead to better energy use, with potential savings, together with reductions in greenhouse gases (GHG). Oddly many buildings do not appear to have a quantitative strategy to minimize conditioning heat transfers. Wide-spread default practice is to simply use an on-off strategy i.e. 
conditioning-on when the room is occupied and conditioning-off, when un-occupied. One alternative is an on-only strategy i.e. leave the conditioner run continuously. A logical and stepwise combined theoretical-and-experimental, approach was used as a research strategy. A search of the literature showed that work had generally focused on discrete, deterministic aspects and not on mathematically rigorous developments to minimise overall whole-of-building conditioning heat transfers. A preliminary steady-state convective model was therefore synthesized for conditioning air in a (hotel) room (4.5 x 5.0 x 2.5, m) in dry, S-E Australia during summer (20 ≤ T₀ ≤ 40, °C) to an auto-set room bulk temperature of 22 °C for the first time. This was solved using traditional, deterministic methods to show the alternative on-only strategy would use less electrical energy than that of the default on-off for Lᴛ > 36 % (Chu et al., 2016). Findings underscored the importance of the thermal capacitance of a building. The model was again solved using the probabilistic Fr 13 framework in which distributions to mimic fluctuations in T₀ and Lᴛ were (reasonably) assumed and a new energy risk factor (p) was synthesized such that all p > 0 characterized a failure in applied energy strategy (Chu and Davey, 2015). Predictions showed on-only would use less energy on 86.6 % of summer days. Practically, this meant that a continuous on-only strategy would be expected to fail in only 12 of the 90 days of summer, averaged over the long term. It was concluded the Fr 13 framework was an advance over the traditional, deterministic method because all conditioning scenarios that can practically exist are simulated. It was acknowledged however that: 1) a more realistic model was needed to account for radiative heat transfers, and; 2) to improve predictive accuracy, local distributions for T₀ and Lᴛ were needed. 
To address these: 1) the model was extended mathematically to account for radiative transfers from ambient to the room-interior, and; 2) distributions were carefully-defined based on extensive historical data for S-E Australia from, respectively, Bureau of Meteorology (BoM) (Essendon Airport) and Clarion Suites Gateway Hotel (CSGH) (Melbourne) – a large (85 x 2-room suites) commercial hotel (latitude -37.819708, longitude 144.959936) – for T₀ and Lᴛ for 541 summer days (Dec. 2009 to Feb. 2015) (Chu and Davey, 2017 a). Predictions showed that radiative heat transfers were significant and highlighted that for Lᴛ ≥ 70 %, that is, all commercially viable occupancies, the on-only conditioning strategy would be expected to use less energy. Because findings predicted meaningful savings with the on-only strategy, ‘proof-of-concept’ experiments were carried out for the first time in a controlled-trial in-situ in CSGH over 10 (2 x 5 contiguous) days of summer with 24.2 ≤ T₀ ≤ 40.5, °C and 13.3 ≤ Lᴛ ≤ 100, %. Independent invoices (Origin Energy Ltd, or Simply Energy Ltd, Australia) (at 30 min intervals from nationally registered ‘smart’ power meters) for geometrically identical control and treated suites showed a mean saving of 18.9 % (AUD $2.23 per suite per day) with the on-only strategy, with a concomitant 20.7 % reduction (12.2 kg CO₂-e) in GHG. It was concluded that because findings supported model predictions, and because robust experimental SOPs had been established and agreed by CSGH, a large-scale validation test of energy strategies should be undertaken in the hotel. Commercial-scale testing over 77 contiguous days of summer (Jan. to Mar., 2016) was carried out in two, dimensionally-identical 2-room suites, with the same fit-out and (S-E) aspect, together with identical air-conditioner (8.1 kW) and nationally registered meters to automatically transmit contiguous (24-7) electrical use (at 30 min intervals) (n = 3,696) for the first time. 
Each suite (10.164 x 9.675, m floor plan) was auto-set to a bulk air temperature of 22 °C (Chu and Davey, 2017 b). In the treated suite the air-conditioner was operated on-only, whilst in the control it was left to wide-spread industry practice of on-off. The suites had (standard) single-glazed pane windows with heat-attenuating (fabric) internal curtains. Peak ambient ranged from 17.8 ≤ T₀ ≤ 39.1, °C. There were 32 days with recorded rainfall. The overall occupancy Lᴛ of both suites was almost identical at 69.7 and 71.2, % respectively for the treated and control suite. Importantly, this coincided with a typical business period for the CSGH hotel. Based on independent electrical invoices, results showed the treated suite used less energy on 47 days (61 %) of the experimental period, and significantly, GHG was reduced by 12 %. An actual reduction in electrical energy costs of AUD $0.75 per day (9 %) averaged over the period was demonstrated for the treated suite. It was concluded therefore that experimental findings directly confirmed the strategy hypothesis that continuous on-only conditioning will use less energy. Although the hypothesis appeared generalizable, and adaptable to a range of room geometries, it was acknowledged that a drawback was that extrapolation of results could not be reliably done because actual energy used would be impacted by seasons. The in-situ commercial-scale experimental study was therefore extended to encompass four consecutive seasons. The research aim was to provide sufficient experimental evidence (n = 13,008) to reliably test the generalizability of the on-only hypothesis (Chu and Davey, 2017 c). Ambient peak ranged from 9.8 ≤ T₀ ≤ 40.5, °C, with rainfall on 169 days (62 %). Overall, Lᴛ was almost identical at 71.9 and 71.7, % respectively, for the treated and control suite. Results based on independent electrical energy invoices showed the on-only strategy used less energy on 147 days (54 %) than the on-off. 
An overall mean energy saving of 2.68 kWh per suite per day (9.2 %) (i.e. AUD $0.58 or 8.0 %) with a concomitant reduction in indirect GHG of 3.16 kg CO₂-e was demonstrated. Extrapolated for the 85 x 2-room suites of the hotel, this amounted to a real saving of AUD $18,006 per annum - plus credit certificates that could be used to increase savings. Overall, it was concluded therefore the on-only conditioning hypothesis is generalizable to all seasons, and that there appears no barrier to adaptation to a range of room geometries. Highly significantly, the methodology could be readily applied to existing buildings without capital outlays or increases in maintenance. A total of five (5) summative research presentations of results and findings were made to the General Manager and support staff of CSGH over the period to July 2017 inclusive (see Appendix I) that maintained active industry-engagement for the study. To apply these new findings, the synthesis of a computational algorithm in the form of a novel App (Anon., 2012 b; Davey, 2015 b) was carried out for the first time (Chu and Davey, 2017 d). The aim was to demonstrate an App that could be used practically to minimize energy in conditioning of dry air in buildings that must maintain an auto-set temperature despite the impact of fluctuations in T₀ and Lᴛ. The App was synthesized from the extensive experimental commercial-scale data and was applied to compute energy for both strategies from independently forecast T₀ and Lᴛ. Practical performance of the App was shown to be dependent on the accuracy of locally forecast T₀ and Lᴛ. Overall results predicted a saving of 2.62 kWh per 2-room suite per day ($47,870 per annum for CSGH) where accuracy of forecast T₀ is 77 % and Lᴛ is 99 %, averaged over the long term. A concomitant benefit was a predicted reduction in greenhouse emissions of 3.1 kg CO₂-e per day. The App appears generalizable – and importantly it is not limited by any underlying heat-model.
Its predictive accuracy can be refined with accumulation of experimental data for a range of geo-locations and building-types to make it globally applicable. It was concluded that the App is a useful new tool to minimize energy transfers in conditioning of room dry air in large buildings – and could be readily developed commercially⁶. Importantly, it can be applied without capital outlays or additional maintenance cost and at both design and analysis stages. This research is original and not incremental work. Results of this research will be of immediate benefit to risk analysts, heat-design engineers, and owners and operators of large buildings.
Thesis (Ph.D.) -- University of Adelaide, School of Chemical Engineering, 2018
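The core trade-off the thesis tests experimentally (on-off saves energy while a suite is empty but pays a pull-down penalty against the building's thermal capacitance each restart, so on-only wins at high occupancy) can be shown with a toy calculation. All numbers below are our own illustrative assumptions, chosen only so the crossover lands near 70 % occupancy in the spirit of the thesis finding; this is not the thesis's convective-radiative model.

```python
# Toy daily-energy comparison of the two conditioning strategies.
# hold_kwh_per_h and pulldown_kwh are invented illustrative values.

def daily_energy(strategy, occupancy, hold_kwh_per_h=0.6, pulldown_kwh=4.5, hours=24):
    """Energy (kWh) to hold a suite at the auto-set temperature for one day."""
    if strategy == "on_only":
        # conditioner runs continuously: steady hold load all day
        return hold_kwh_per_h * hours
    # on-off: hold load only while occupied, plus a pull-down penalty to
    # recover the auto-set temperature against the building's thermal mass
    return hold_kwh_per_h * hours * occupancy + pulldown_kwh

for occ in (0.3, 0.7, 1.0):
    on_only = daily_energy("on_only", occ)
    on_off = daily_energy("on_off", occ)
    better = "on-only" if on_only < on_off else "on-off"
    print(f"occupancy {occ:.0%}: on-only {on_only:.2f} kWh, "
          f"on-off {on_off:.2f} kWh -> {better}")
```

With these assumed coefficients the crossover occupancy is where the avoided hold load while empty equals the pull-down penalty; above it, on-only uses less energy, which is the shape of the thesis's Lᴛ ≥ 70 % result.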

Book chapters on the topic "Friday 13th failure modelling"

1

Bartoli, Gianni, Michele Betti, Saverio Giordano, and Maurizio Orlando. "In-Situ Static and Dynamic Testing and Numerical Modelling of the Dome of the Siena Cathedral (Italy)." In Handbook of Research on Seismic Assessment and Rehabilitation of Historic Structures, 85–114. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8286-3.ch004.

Abstract:
The chapter reports on the in-situ experimental campaign and the numerical modelling that were performed to assess the static and dynamic behaviour of the Cupola of the Siena Cathedral in Italy: an irregular polygonal masonry structure built in the 13th century and composed of two domes. The research was motivated by the failure of some of the stone-trusses which connect the two masonry domes and consists of: a) single and double flat-jack tests in the internal dome, b) dynamic vibration tests on the Cupola under environmental (wind) and artificial (vibrodyne) loads and c) dynamic vibration tests on the double colonnade located below the Cupola (hammer impact tests). Results of tests were employed to identify a numerical model of the Cupola, which allowed to simulate its structural behaviour and to account for the failure of the stone-trusses between the two domes. The numerical model was later extended to the whole Cathedral. Through the discussion of an emblematic case study, the chapter shows a careful application of non-destructive testing (NDT) and numerical modelling in the field of assessment (and rehabilitation) of heritage buildings.

Conference papers on the topic "Friday 13th failure modelling"

1

Zhang, Xinfang, Allan Okodi, Leichuan Tan, Juliana Leung, and Samer Adeeb. "Failure Pressure Prediction of Cracks in Corrosion Defects Using XFEM." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9312.

Abstract:
Abstract Coating and cathodic protection degradation can result in the generation of several types of flaws in pipelines. With the increasing number of aging pipelines, such defects can constitute serious concerns for pipeline integrity. When flaws are detected in pipelines, it is extremely important to have an accurate assessment of the associated failure pressure, which would inform the appropriate remediation decision of repairing or replacing the defected pipelines in a timely manner. Cracks-in-corrosion (CIC) represent a class of defect, for which there are no agreed upon method of assessment, with no existing analytical or numerical models to predict their failure pressures. This paper aims to create a set of validated numerical finite element analysis models that are suitable for accurately predicting the failure pressure of 3D cracks-in-corrosion defects using the eXtended Finite Element Method (XFEM) technique. The XFEM for this study was performed using the commercially available software package, ABAQUS Version 6.19. Five burst tests of API 5L X60 specimens with different defect depths (varying from 52% to 66%) that are available in the literature were used to calibrate the XFEM damage parameters (the maximum principal strain and the fracture energy). These parameters were varied until a reasonable match between the numerical results and the experimental measurements was achieved. Symmetry was used to reduce the computation time. A longitudinally oriented CIC defect was placed at the exterior of the pipe. The profile of the corroded area was assumed to be semi-elliptical. The pressure was monotonically increased in the XFEM model until the crack or damage reached the inner surface of the pipe. The results showed that the extended finite element predictions were in good agreement with the experimental data, with an average error of 5.87%, which was less conservative than the reported finite element method predictions with an average error of 17.4%. 
Six more CIC models with the same pipe dimensions but different crack depths were constructed in order to investigate the relationship between crack depth and failure pressure. The failure pressure was found to decrease with increasing crack depth; once the crack depth exceeded 75% of the total defect depth, the CIC defect could be treated as a crack-only defect, since the failure pressure of the CIC model approaches that of the crack-only model for ratios of crack depth to total defect depth of 0.75 and 1. The versatility of several existing analytical methods (RSTRENG, LPC and CorLAS) in predicting the failure pressure was also discussed. For corrosion-only defects, the LPC method predicted the failure pressure closest to that obtained using XFEM (3.5% difference); the CorLAS method provided accurate results for crack-only defects (7% difference). The extended finite element method was found to be very effective in predicting the failure pressure. In addition, compared to the traditional finite element method (FEM), which requires extremely fine meshes and is impractical for modelling a moving crack, the XFEM is computationally efficient while providing accurate predictions.
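The 75% threshold reported in this abstract implies a simple rule for choosing an assessment model. A minimal sketch of that rule, assuming the depth ratio is the only deciding input (the function name and interface are illustrative, not from the paper):

```python
def classify_cic_defect(crack_depth: float, total_defect_depth: float) -> str:
    """Pick an assessment model for a crack-in-corrosion (CIC) defect.

    Reflects the trend reported in the abstract: once the crack accounts for
    75% or more of the total defect depth, the CIC failure pressure approaches
    that of a crack-only defect, so a crack-only model can be used.
    """
    if not 0.0 < crack_depth <= total_defect_depth:
        raise ValueError("crack depth must be positive and no deeper than the total defect")
    ratio = crack_depth / total_defect_depth
    return "crack-only" if ratio >= 0.75 else "crack-in-corrosion"
```

For example, a 4 mm crack within a 5 mm total defect (ratio 0.8) would be screened as crack-only under this rule.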
APA, Harvard, Vancouver, ISO, and other styles
2

Wood, Chris, Fernando Merotto, Brian Kerrigan, Ramon Loback, and Pedro Gea. "Getting to Know Your Bends to Support SCC Management." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9578.

Full text
Abstract:
Nova Transportadora do Sudeste (NTS) own and operate a gas transmission system in Brazil constructed in 1996. One of the confirmed primary integrity threats to this system is axial stress corrosion cracking. The pipelines vary in diameter, weld type, manufacturer and age. One of the pipelines failed in 2015 due to an axial stress corrosion crack. Since the failure, NTS have executed an intense inspection campaign to detect and size axial cracking within their network. The 2015 failure occurred on a field bend, and the inspection and subsequent dig campaigns have confirmed that cracking (both axial and circumferential) within field bends is the primary integrity threat. Brazil has challenging terrain, and approximately 40% of joints within the network were subject to cold field bending. The influence of the pipeline geometry in these areas has resulted in localised elevated stresses where the axial stress corrosion cracking colonies initiate and grow. To date, no cracking (axial or circumferential) has been verified within the straight pipe joints. NTS initially took a conservative baseline assessment approach using API 579 Part 9, due to the limited information regarding the pipe material and the complex stress state. In addition to the hoop stress from internal pressure, the baseline assessment also considered weld residual stress and bending stress due to ovalization to determine immediate and future integrity. An intensive dig campaign is underway following a crack detection in-line inspection campaign using electromagnetic acoustic transducer technology. A large number of deep cracks were reported by the in-line inspection system; these were verified to be deep and were repaired with a Type B sleeve. However, at one site an entire joint was removed for further analysis, to investigate the crack morphology, confirm material properties and refine the predictive failure pressure modelling. 
This paper outlines how NTS have combined a burst test, mechanical testing, FEA modelling, fractography and metallographic examination to further understand the feature morphology and stresses within these areas, and how they have been able to reduce conservatism in their baseline assessment with confidence and adopt a plastic collapse approach to accurately predict failure pressure.
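A plastic collapse approach, in its simplest textbook form, estimates burst pressure from a flow stress rather than from fracture mechanics. The sketch below uses the common thin-wall idealisation P = 2·σ_flow·t/D with the flow stress taken as the mean of yield and tensile strength; this is only one conventional choice, and NTS's calibrated formulation in the paper is necessarily more refined:

```python
def plastic_collapse_pressure(wall_thickness_mm: float,
                              outside_diameter_mm: float,
                              yield_mpa: float,
                              tensile_mpa: float) -> float:
    """Rough plastic-collapse (flow-stress) burst estimate for defect-free,
    thin-walled pipe: P = 2 * sigma_flow * t / D, in MPa.

    sigma_flow is taken here as the average of yield and tensile strength,
    a common simplification; real assessments calibrate it against tests.
    """
    flow_stress = 0.5 * (yield_mpa + tensile_mpa)
    return 2.0 * flow_stress * wall_thickness_mm / outside_diameter_mm
```

For a hypothetical 406.4 mm (16-inch) OD, 9.5 mm wall pipe with 415/520 MPa yield/tensile strengths, this gives roughly 21.9 MPa, illustrating the order of magnitude only.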
APA, Harvard, Vancouver, ISO, and other styles
3

Tiku, Sanjay, Arnav Rana, Binoy John, and Aaron Dinovitzer. "Dent Screening Criteria Based on Dent Restraint, Pipe Geometry and Operating Pressure." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9703.

Full text
Abstract:
A safety advisory (2010-01), issued by the National Energy Board (NEB) in June 2010, referenced two incidents that resulted from fatigue crack failures within shallow dents [1]. The dents in both instances were less than 6% of the OD in depth. Currently, there is no consensus on how shallow dents, or shallow dents with stress concentrators as called by the ILI tool, are assessed and acted upon. BMT Canada Ltd. (BMT) was contracted by the Canadian Energy Pipeline Association (CEPA) to develop a definition for shallow dents and two levels of screening methods for the integrity assessment of shallow restrained dents and unrestrained dents. These two levels, known as the CEPA Level 0 and CEPA Level 0.5 dent integrity assessment techniques, may be applied without finite element modelling or detailed calculations. The BMT dent assessment finite element (FE) modelling method was used to develop an extensive database of dents for different pipe geometries (OD/t), indenter shapes, pipe grades, and indentation depths. The results of the FE modelling were used to develop trends for the stress magnification factors (KM) across the range of pipes and dents modelled. These trends are the basis for the Level 0 and Level 0.5 dent screening and assessment approaches, which can be used for both unrestrained dents and shallow restrained dents. The results show that for low OD/t pipe geometry and/or low spectrum severity indicator (SSI) [2], dent fatigue may not pose an integrity threat. These dent screening approaches have been adopted in API Recommended Practice 1183, Dent Assessment and Management, which is currently under development.
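The role of a stress magnification factor KM in a dent fatigue screen can be illustrated in a few lines: KM scales the nominal cyclic hoop stress, and the magnified stress range is fed into an S-N relation. The sketch below uses a generic Basquin-type curve N = C·S^(−m) with placeholder constants; the paper's FE-derived KM trends and calibrated fatigue curves are not reproduced here, so this shows the mechanics of the screen only:

```python
def hoop_stress_mpa(pressure_mpa: float, od_mm: float, wt_mm: float) -> float:
    """Barlow hoop stress for thin-walled pipe: S = P * D / (2 * t)."""
    return pressure_mpa * od_mm / (2.0 * wt_mm)

def dent_fatigue_cycles(km: float, delta_p_mpa: float, od_mm: float, wt_mm: float,
                        c: float = 1.0e12, m: float = 3.0) -> float:
    """Very rough dent fatigue-life screen: magnify the cyclic hoop stress by a
    stress magnification factor KM, then evaluate a Basquin-type S-N relation
    N = C * S**(-m). The constants c and m are illustrative placeholders, not
    calibrated values from the paper or from API RP 1183.
    """
    stress_range = km * hoop_stress_mpa(delta_p_mpa, od_mm, wt_mm)
    return c * stress_range ** (-m)
```

The point of the structure is visible immediately: doubling KM cuts the estimated life by a factor of 2^m, which is why sharp, restrained features with high KM dominate the screen.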
APA, Harvard, Vancouver, ISO, and other styles
4

Daugevičius, Mykolas, Juozas Valivonis, and Tomas Skuturna. "The numerical analysis of the long-term behaviour of the reinforced concrete beams strengthened with carbon fiber reinforced polymer: Deflection." In The 13th international scientific conference “Modern Building Materials, Structures and Techniques”. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/mbmst.2019.009.

Full text
Abstract:
The numerical analysis of reinforced concrete beams strengthened with CFRP is presented. Beams previously tested experimentally under long-term loading were selected for the numerical simulation. The numerical modelling evaluates the beam's behaviour at various work stages: before the long-term loading period, under the long-term load, after the external load is removed, and up to failure. The work stages of all modelled beams are described in detail. To analyse the behaviour of the beams at the different work stages, numerical modelling using phase analysis is performed, with different finite element groups evaluated in each phase. The external load is increased, maintained and then reduced, and the finite elements of the CFRP layer are activated at a certain work stage to evaluate the strengthening effect. To assess the accuracy of the numerical analysis, each beam is modelled with finite elements of various sizes. The paper presents the numerical modelling process and the predicted deflections, which are compared with the deflections from the experimental study. The modelling of the behaviour of the strengthened beams has shown that the character of the long-term deflection differs from that obtained in the experiment: the increment of the numerically predicted deflection decreases gradually over the long-term period, whereas the experimental long-term deflection increment shows a sharp increase and decrease at the start of the long-term period. This contradiction shows that the experimental long-term deflections are greater. However, over time, the numerical model deflections may reach and exceed the experimental deflections due to their steady increase. A smaller finite element size leads to a higher cracking moment and a higher moment at which yielding of the tensioned reinforcement occurs. 
However, the cracking moment obtained by the numerical modelling is much higher than the experimental one, whereas the moment at which the yield strength of the tensile reinforcement is reached is smaller than the experimental one.
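The growth of sustained-load deflection that this abstract discusses is often first approximated with the effective-modulus idea, where the deflection under sustained load scales roughly with (1 + φ), φ being the concrete creep coefficient. A minimal sketch under that textbook assumption (it is not the phased FE analysis the paper performs):

```python
def long_term_deflection(instantaneous_deflection_mm: float,
                         creep_coefficient: float) -> float:
    """First-order long-term deflection estimate via the effective-modulus idea:
    delta_long ≈ delta_inst * (1 + phi), with phi the creep coefficient.
    A textbook simplification for sustained loading on concrete members.
    """
    if creep_coefficient < 0:
        raise ValueError("creep coefficient must be non-negative")
    return instantaneous_deflection_mm * (1.0 + creep_coefficient)
```

With a typical creep coefficient of 2.0, a 10 mm instantaneous deflection grows to about 30 mm, which conveys why long-term effects dominate serviceability checks for strengthened beams.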
APA, Harvard, Vancouver, ISO, and other styles
5

Holliday, Chris, Andy Young, Terri Funk, and Carrie Murray. "The North Saskatchewan River Valley Landslide: Slope and Pipeline Condition Monitoring." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9532.

Full text
Abstract:
Following a loss of containment incident in July 2016 on a 16-inch diameter pipeline on the south slope of the North Saskatchewan River in Saskatchewan, Canada, Husky completed extensive studies to understand and learn from the failure. The cause of the incident was ground movement resulting from a landslide complex on the slope, involving two deep-seated compound basal shear slides as well as a near-surface translational slide in heavily overconsolidated marine clays of the Upper Cretaceous Lea Park Formation. One aspect of the studies has been structural analysis of the pipeline response to the loading imposed by the ground movement, to minimize the potential for a similar occurrence in the future and to determine the integrity of the pipeline at the time of the assessment. Given the scale and complexity of the landslide, slope stabilization measures were not practical to implement, so repeat ILI using caliper and inertial measurement unit (IMU) technology, in addition to a robust monitoring program, was implemented. Real-time monitoring of ground movements, pipe strain and precipitation levels provided a monitoring and early-warning system, combined with documented risk thresholds that identified when to proactively shut in the pipeline. The methodology and findings of the slope monitoring and structural analysis undertaken to examine the robustness of the pipeline against future landslide movement are presented herein. The work involved modelling the pipeline history on the slope, including loads that had accumulated in the original pipeline sections, based on historical ILI results and slope monitoring. The pipeline orientation was parallel with the ground movement in the landslide complex, so the development of axial strain in the pipeline was the dominant load component, which is particularly damaging in the compression zone. 
The work provided recommendations and a technical basis for continued safe operation of the pipeline, with consideration of continuing ground movement, and assisted the operator with decisions on the long-term strategy for the pipeline.
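The early-warning scheme described above reduces, at its core, to comparing monitored quantities against documented risk thresholds. A minimal sketch of such a check, assuming strain and ground-movement limits are the monitored triggers (the actual thresholds and monitored channels in Husky's program are not given here, so all names and values are illustrative):

```python
def shut_in_required(pipe_strain: float,
                     ground_movement_mm: float,
                     strain_limit: float,
                     movement_limit_mm: float) -> bool:
    """Threshold-based early-warning check in the spirit of the documented risk
    thresholds described in the abstract: recommend proactively shutting in the
    pipeline when any monitored quantity reaches its operator-defined limit.
    """
    return pipe_strain >= strain_limit or ground_movement_mm >= movement_limit_mm
```

In practice such a check would run against each telemetry sample, with the limits set from the structural analysis rather than hard-coded.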
APA, Harvard, Vancouver, ISO, and other styles
6

Dinovitzer, Aaron, Sanjay Tiku, and Mark Piazza. "Dent Assessment and Management: API Recommended Practice 1183." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9724.

Full text
Abstract:
Pipeline dents can develop from the pipe resting on rock, third-party machinery strikes, or rock strikes during backfilling, amongst other causes. The long-term integrity of a dented pipeline segment is a complex function of a variety of parameters, including pipe geometry, indenter shape, dent depth, indenter support, secondary features, and the pipeline operating pressure history at and following indentation. In order to estimate the safe remaining operating life of a dented pipeline, all of these factors must be considered, and guidelines for this assessment have not been available. US DOT regulations (49 CFR 192 and 195) include dent repair and remediation criteria broadly based upon dent depth, dent location (top or bottom side), pressure cycling (liquid or gas), and dent interaction with secondary features (welds, corrosion, cracks). These criteria are simple to use; however, they may not direct maintenance to higher-risk dent features and may be overly conservative or, in some cases, unconservative. Full-scale testing, finite element modelling and engineering model development research by PRCI, USDOT, CEPA and others has been completed to evaluate the integrity of pipeline dents. The results have demonstrated trends and limits in dent behavior and life that can improve on the existing codified and traditional treatment of dents. With these research results, a guideline for dent management can be developed to support operators in developing and implementing their pipeline integrity management programs. This paper provides an overview of the newly developed API recommended practice for the assessment and management of dents (RP 1183). The RP considers dent formation strain, failure pressure and fatigue limit states, including the effects of coincident features (i.e. welds, corrosion, cracks and gouges). The paper focuses on how pipeline operators can derive value from this step change in integrity management for dents. 
The paper describes the basis for the dent screening and integrity assessment tools included in the RP. The RP provides well-founded techniques for engineering assessment that may be used to determine the significance of dent features, whether remedial actions are required, and when these actions should be taken.
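The depth-based criteria that this abstract argues are too coarse can themselves be stated in a few lines, which is part of their appeal. A sketch of such a screen, using the 6%-of-OD shallow-dent threshold referenced elsewhere in this listing as the default (the real codified criteria also consider location, pressure cycling and coincident features, which this deliberately omits):

```python
def dent_depth_percent(dent_depth_mm: float, outside_diameter_mm: float) -> float:
    """Dent depth expressed as a percentage of pipe outside diameter (OD)."""
    return 100.0 * dent_depth_mm / outside_diameter_mm

def exceeds_depth_criterion(dent_depth_mm: float,
                            outside_diameter_mm: float,
                            limit_percent: float = 6.0) -> bool:
    """Depth-only dent screen in the spirit of the codified criteria the paper
    critiques. The 6% default mirrors the shallow-dent threshold mentioned in
    the NEB advisory entry above; depth alone may miss high-risk features.
    """
    return dent_depth_percent(dent_depth_mm, outside_diameter_mm) > limit_percent
```

A 30 mm dent on 406.4 mm OD pipe (about 7.4% of OD) trips this screen, while a 20 mm dent (about 4.9%) does not, even though the shallower dent could still be the higher-risk feature once restraint and stress concentrators are considered.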
APA, Harvard, Vancouver, ISO, and other styles
7

Lall, Pradeep, Nakul Kothari, and Jessica Glover. "Mechanical Shock Reliability Analysis and Multiphysics Modeling of MEMS Accelerometers in Harsh Environments." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48457.

Full text
Abstract:
MEMS accelerometers have found applications in harsh environments with pressures and temperatures above ambient conditions, high-g shock, and vibration. The complex structure of these MEMS devices has made it difficult to understand the failure modes and failure mechanisms of present-day MEMS accelerometers. Little work has been done on investigating the high-g reliability of MEMS accelerometers through repeated high-g drops and on quantifying the resulting failure modes, and there is little literature addressing the multiphysics finite element modelling of MEMS accelerometers subjected to high-g shocks. In defense applications, where these devices are integrated with several other compactly assembled subsystems, a lack of knowledge of the physics of failure of the MEMS sensor in harsh-environment operation can be detrimental to the success of the system as a whole. Being able to model the inside of an accelerometer enables the user to better understand changes in parameters such as the time delay induced in the response over successive drops, changes in pulse width that indicate failure, and reductions in sensed g levels. Some researchers have subjected various accelerometers to repeated drops at their maximum sensing g (not high-g) level and used optical microscopy to detect damaged sensing elements [Beliveau, 1999]. A few researchers have modelled the internal structure of the MEMS device, along with the device packaging, under the stresses of operation [Fang 2004, Ghisi 2008, Xiong 2008]. In this paper, a multiphysics model of the capacitive and moving elements of the accelerometer has been developed to model the change in capacitance with respect to stroke and to understand the correlation with g levels, in addition to the transient dynamic response of the accelerometer under high-g shock; this has not been much explored in the past. 
The accelerometer studied in the paper is the ADXL193, subjected to repeated 3000 g drops in each of 3 axes as per Method 2002.4 of MIL-STD-883, without preconditioning. A characteristic graph of capacitance versus accelerometer stroke has been obtained from a series of electrostatic simulations and is then used to relate g levels, capacitance, stroke deflection and voltage change using electromechanical transducer elements. The drift in the performance characteristics of the accelerometer has been measured versus the number of shock events. In addition, an attempt has been made to investigate the failure mode of the accelerometer.
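The capacitance-versus-stroke relationship at the heart of this abstract is usually introduced through the idealised differential parallel-plate pair: the proof mass moving by a stroke x narrows one gap and widens the other, so the difference C1 − C2 varies (nearly linearly for small strokes) with displacement. A sketch under that first-order assumption (the ADXL193's actual comb geometry is not public here, so the area and gap values are placeholders):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def differential_capacitance(area_m2: float, gap_m: float, stroke_m: float) -> float:
    """Idealised differential parallel-plate model of a capacitive MEMS
    accelerometer cell: C1 = eps0*A/(g - x), C2 = eps0*A/(g + x), and the
    sensed quantity is C1 - C2, which is zero at rest and grows with stroke.
    """
    if abs(stroke_m) >= gap_m:
        raise ValueError("stroke must be smaller than the nominal gap")
    c1 = EPS0 * area_m2 / (gap_m - stroke_m)
    c2 = EPS0 * area_m2 / (gap_m + stroke_m)
    return c1 - c2
```

Sweeping the stroke through this function reproduces the kind of capacitance-versus-stroke characteristic the paper extracts from its electrostatic simulations, and makes clear why a drift in rest capacitance after repeated shocks signals mechanical damage.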
APA, Harvard, Vancouver, ISO, and other styles
