
Doctoral dissertations on the topic "Risk and Reliability Analysis"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the 50 best doctoral dissertations on the topic "Risk and Reliability Analysis".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Moura, Jorge Nilo de. "Reliability assessment and risk analysis of submarine blowout preventers". Thesis, Heriot-Watt University, 2000. http://hdl.handle.net/10399/1240.

2

Felder, Frank Andrew. "Probabilistic risk analysis of restructured electric power systems : implications for reliability analysis and policies". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8257.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Technology, Management, and Policy Program, 2001.
Includes bibliographical references (p. 193-209).
Modern society requires reliable and safe operation of its infrastructure. Policymakers believe that, in many industries, competitive markets and regulatory incentives will result in system performance superior to that under command-and-control regulation. Analytical techniques to evaluate the reliability and safety of complex engineering systems, however, do not explicitly account for responses to market and regulatory incentives. In addition, determining which combination of market and regulatory incentives to use is difficult because policy analysts' understanding of complex systems often depends on uncertain data and limited models that reflect incomplete knowledge. This thesis confronts the problem of evaluating the reliability of a complex engineering system that responds to the behavior of decentralized economic agents. Using the example of restructured and partially deregulated electric power systems, it argues that existing engineering-based reliability tools are insufficient to evaluate the reliability of restructured power systems. This research finds that electricity spot markets are not perfectly reliable, that is, they do not always result in sufficient supply to meet demand. General conclusions regarding the reliability of restructured power systems that some economic analysts suggest should be the basis of reliability policies are either verified or demonstrated to be true only when applied to extremely simple and unrealistic models. New generation unit and transmission component availability models are proposed that incorporate dependent failure modes and capture the behavior of economic agents, neither of which is considered with current adequacy techniques.
(cont.) This thesis proposes the use of a probabilistic risk analysis framework as the foundation for bulk power-system-reliability policy to replace existing policy, which is an ad hoc mixture of deterministic criteria and risk-based requirements. This thesis recommends distinguishing between controlled, involuntary load curtailments and uncontrolled, involuntary load curtailments in power system reliability modeling. The Institute of Electrical and Electronics Engineers (IEEE) Reliability Test System is used to illustrate the possible impact that dependent failure modes and the behavior of economic agents have on the reliability of bulk power systems.
by Frank A. Felder.
Ph.D.
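
As a minimal illustration of the adequacy-style calculation this abstract refers to, the sketch below computes a loss-of-load probability for a toy system of independent two-state generating units; relaxing exactly this independence assumption is what dependent-failure models of the kind proposed here address. All unit data are invented:

```python
from itertools import product

# Hypothetical generating units: (capacity in MW, forced outage rate).
# Illustrative numbers only -- not data from the thesis.
units = [(200, 0.05), (150, 0.08), (100, 0.10), (100, 0.10)]
demand = 400  # MW

# Enumerate all up/down states; assumes independent unit outages,
# which is the assumption dependent-failure models relax.
lolp = 0.0
for state in product([0, 1], repeat=len(units)):  # 1 = unit available
    prob, capacity = 1.0, 0.0
    for up, (cap, outage_rate) in zip(state, units):
        prob *= (1 - outage_rate) if up else outage_rate
        capacity += cap if up else 0.0
    if capacity < demand:
        lolp += prob

print(f"LOLP at {demand} MW demand: {lolp:.4f}")
```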
3

Beser, Mehmet Resat. "A Study On The Reliability-based Safety Analysis Of Concrete Gravity Dams". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605786/index.pdf.

Abstract:
Dams are large hydraulic structures constructed to meet various project demands. Their roles in both the environment and the economy of a country are so important that their design and construction should be carried out for negligibly small risk. Conventional design approaches are deterministic, which ignore variations of the governing variables. To offset this limitation, high safety factors are considered that increase the cost of the structure. Reliability-based design approaches are probabilistic in nature since possible sources of uncertainties associated with the variables are identified using statistical information, which are incorporated into the reliability models. Risk analysis with the integration of risk management and risk assessment is a growing trend in dam safety. A computer program, named CADAM, which is based on probabilistic treatment of random loading and resistance terms using the Monte Carlo simulation technique, can be used for the safety analysis of gravity dams. A case study is conducted to illustrate the use of this program.
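
CADAM itself is not reproduced here, but the probabilistic treatment it implements, Monte Carlo sampling of random loading and resistance, can be sketched in a few lines; the distributions and parameters below are illustrative assumptions, not CADAM's models:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # Monte Carlo trials

# Illustrative distributions: sliding resistance R (lognormal)
# and hydrostatic load effect S (Gumbel), both in MN.
R = rng.lognormal(mean=np.log(12.0), sigma=0.15, size=n)
S = rng.gumbel(loc=8.0, scale=1.0, size=n)

pf = np.mean(R <= S)             # estimated failure probability
se = np.sqrt(pf * (1 - pf) / n)  # Monte Carlo standard error
print(f"P(sliding failure) ~= {pf:.4f} +/- {1.96 * se:.4f} (95% CI)")
```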
4

Trayhorn, Benjamin. "Power plant system reliability analysis : applications to insurance risk selection and pricing". Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7906.

Abstract:
Within the speciality engineering insurance field, the use of engineering opinion is the main component of risk analysis for underwriting decision making. The use of risk analysis tools to quantify the risk associated with perils such as mechanical breakdown is limited. A reliability model for the analysis of mechanical breakdown risk in the power generation sector, PowerRAT, has been developed and its performance evaluated against historic claim data. It has proven to closely forecast actual losses over a portfolio of power plants and to differentiate between power plant types: conventional steam, and simple and combined cycle gas turbine plants. Differentiation based on the factors of equipment type and policy terms has been demonstrated. A review of existing survey report methodology has shown highly variable report quality, with significant information missing on which to make underwriting decisions. Best-practice survey report contents have been proposed in order to provide a consistent level of information for comparison with other risks. The development cycle of PowerRAT has led to a proposed framework for the development of future risk assessment tools for insurance. This is built on four main areas: risk identification, data analysis, calculation methodology and insurance factors.
5

Rahman, Anisur. "Modelling and analysis of reliability and costs for lifetime warranty and service contract policies". Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16460/1/Anisur_Rahman_Thesis.pdf.

Abstract:
Reliability of products is becoming increasingly important due to rapid technological development and tough competition in the product market. One effective way to ensure the reliability of a sold product/asset is to consider after-sales services linked to a warranty or service contract. One of the major decision variables in designing a warranty is the warranty period. A longer warranty term signals better reliability and provides greater customer/user peace of mind. The warranty period offered by the manufacturer/dealer has been progressively increasing since the beginning of the 20th century. Currently, a large number of products are being sold with long-term warranties in the form of extended warranties, warranties for used products, long-term service contracts, and lifetime warranties. Lifetime warranties and service contracts are becoming more and more popular, as these types of warranties assure the consumer of long, reliable service and protect consumers against poor quality and the potentially high cost of failure occurring during the long, uncertain life of the product. The study of lifetime warranties and service contracts is important to both manufacturers and consumers. Offering lifetime warranties and long-term service contracts incurs costs to the manufacturer/service provider over the useful life of the product/contract period. This cost needs to be factored into the price/premium; otherwise the manufacturer/dealer will incur a loss instead of a profit. On the other hand, the buyer/user needs to model the cost of maintaining the product over its useful life and decide whether these policies/service contracts are worth purchasing. The analysis of warranty policies and cost models associated with short-term or fixed-term policies has received a lot of attention. A significant amount of academic research has been conducted on modelling policies and costs for extended warranties and warranties for used products. In contrast, lifetime warranty policies and longer-term service contracts have not been studied as extensively. There are complexities in developing failure and cost models for these policies due to the uncertainties of useful life, usage pattern, maintenance actions and rectification costs over a longer period. This thesis defines a product's lifetime based on current practices. Since there is no accepted definition of the lifetime or useful life of a product in the existing academic literature, different manufacturers/dealers use different life measures as conditions for the period of coverage, and it is often difficult to tell whose life measures are applicable to the period of coverage (The Magnuson-Moss Warranty Act, 1975). The definition of lifetime or useful life given in this thesis provides transparency regarding the useful life of products to both manufacturers/service providers and customers. Following the formulation of an acceptable definition of lifetime, a taxonomy of lifetime warranty policies is developed, which includes eight different one-dimensional and two-dimensional lifetime warranty policies grouped into three major categories: A. free rectification lifetime warranty policies (FRLTW), B. cost sharing lifetime warranty policies (CSLTW), and C. trade-in policies (TLTW). Mathematical models for predicting failures and expected costs for different one-dimensional lifetime warranty policies are developed at the system level and analysed by capturing the uncertainties of the lifetime coverage period and of rectification costs over the lifetime.
Failures and costs are modelled using stochastic techniques and illustrated by numerical examples for estimating costs to manufacturers and buyers. Various rectification policies are proposed and analysed over the lifetime. The manufacturer's and buyer's risk attitudes towards a lifetime warranty price are modelled based on the assumptions of time-dependent failure intensity, constant repair costs and concave utility functions, through the use of the manufacturer's utility function for profit and the buyer's utility function for cost. The sensitivity of the optimal warranty prices is analysed with numerical examples with respect to factors such as the buyer's and the manufacturer/dealer's risk preferences, the buyer's anticipated and the manufacturer's estimated product failure intensity, the buyer's loyalty to the original manufacturer/dealer in repairing failed products, and the buyer's repair costs for unwarranted products. Three new service contract policies, and cost models for those policies, are developed considering both corrective maintenance and planned preventive maintenance as the servicing strategies during the contract period. Finally, a case study is presented for estimating the costs of outsourcing rail maintenance through service contracts. Rail failure/break data were collected from Swedish rail and analysed for predicting failures. Models developed in this research can be used for managerial decisions on purchasing lifetime warranty policies and long-term service contracts or outsourcing maintenance. This thesis concludes with a brief summary of the contributions that it makes to this field and suggestions and recommendations for future research on lifetime warranties and service contracts.
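
One standard building block for failure/cost models of this kind is a power-law non-homogeneous Poisson process under minimal repair, where the expected number of rectifications over a coverage window is the integrated intensity. This is a generic sketch with invented parameters, not the thesis's specific models:

```python
# Free-rectification warranty cost under minimal repair.
# Power-law NHPP intensity lambda(t) = (beta/eta) * (t/eta)**(beta-1),
# so E[N(0, W)] = (W/eta)**beta. All parameters below are illustrative.

def expected_warranty_cost(W, beta, eta, cost_per_repair):
    expected_failures = (W / eta) ** beta  # mean number of repairs in [0, W]
    return expected_failures * cost_per_repair

# Hypothetical product: characteristic life eta = 8 years, beta = 1.5
# (wear-out), average rectification cost 120 per failure.
for W in (5, 10, 20):  # candidate coverage horizons in years
    c = expected_warranty_cost(W, beta=1.5, eta=8.0, cost_per_repair=120.0)
    print(f"W = {W:2d} years: expected cost = {c:7.1f}")
```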
6

Rahman, Anisur. "Modelling and analysis of reliability and costs for lifetime warranty and service contract policies". Queensland University of Technology, 2007. http://eprints.qut.edu.au/16460/.

7

Kevorkian, Christopher George. "UAS Risk Analysis using Bayesian Belief Networks: An Application to the VirginiaTech ESPAARO". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73047.

Abstract:
Small Unmanned Aerial Vehicles (SUAVs) are rapidly being adopted in the National Airspace System (NAS) but experience a much higher failure rate than traditional aircraft. These SUAVs are quickly becoming complex enough to warrant investigating alternative methods of failure analysis. This thesis proposes a method of extending Fault Tree Analysis (FTA) to a Bayesian Belief Network (BBN) model. FTA is demonstrated to be a special case of BBN, and BBN allows for more complex interactions between nodes than FTA does. A model can be investigated to determine the components to which failure is most sensitive and to allow for redundancies or mitigations against those failures. The introduced method is then applied to the Virginia Tech ESPAARO SUAV.
Master of Science
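
The claim that FTA is a special case of BBN can be checked numerically: an OR gate is a Bayesian node with a deterministic conditional probability table, and relaxing that table to intermediate values gives the extra expressiveness described above. A sketch with invented root-event probabilities (the node names are hypothetical, not from the ESPAARO model):

```python
from itertools import product

# Root-node failure probabilities (hypothetical):
p = {"gps_loss": 0.02, "imu_fault": 0.01}

# Deterministic CPT reproducing an OR gate: P(nav_fail | parents).
# Replacing these 0/1 entries with intermediate values is what turns
# the fault tree into a genuinely richer Bayesian belief network.
cpt_or = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0}

p_top = 0.0
for gps, imu in product([0, 1], repeat=2):
    prior = ((p["gps_loss"] if gps else 1 - p["gps_loss"])
             * (p["imu_fault"] if imu else 1 - p["imu_fault"]))
    p_top += prior * cpt_or[(gps, imu)]

# OR-gate closed form for independent events, as used in FTA:
fta = 1 - (1 - p["gps_loss"]) * (1 - p["imu_fault"])
print(f"BBN: {p_top:.6f}  FTA: {fta:.6f}")  # identical by construction
```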
8

Syrri, Angeliki Lydia Antonia. "Reliability and risk analysis of post fault capacity services in smart distribution networks". Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/reliability-and-risk-analysis-of-post-fault-capacity-services-in-smart-distribution-networks(b1a93b49-d307-4561-800d-0a9944a7a577).html.

Abstract:
Recent technological developments are bringing about substantial changes that are converting traditional distribution networks into "smart" distribution networks. In particular, it is possible to observe seamless integration of Information and Communication Technologies (ICTs), including the widespread installation of automatic equipment, smart meters, etc. The increased automation facilitates active network management, interaction between market actors and demand side participation. If we also consider the increasing penetration of distributed generation, renewables and various emerging technologies such as storage and dynamic rating, it can be argued that the capacity of distribution networks should not depend only on conventional assets. In this context, taking into account uncertain load growth and ageing infrastructure, which trigger network investments, the above-mentioned advancements could alter and be used to improve the network design philosophy adopted so far. Hitherto, in fact, networks have been planned according to deterministic and conservative standards, being typically underutilised, in order for capacity to be available during emergencies. This practice could be replaced by a corrective philosophy, where existing infrastructure could be fully unlocked for normal conditions and distributed energy resources could be used for post-fault capacity services. Nonetheless, to thoroughly evaluate the contribution of the resources and also to properly model emergency conditions, a probabilistic analysis should be carried out, which captures the stochasticity of some technologies, the randomness of faults and, thus, the risk profile of smart distribution networks. The research work in this thesis proposes a variety of post-fault capacity services to increase distribution network utilisation but also to provide reliability support during emergency conditions. In particular, a demand response (DR) scheme is proposed where DR customers are optimally disconnected by the operator during contingencies depending on their cost of interruption. Additionally, time-limited thermal ratings have been used to increase network utilisation and support higher loading levels. Besides that, a collaborative operation of wind farms and electrical energy storage is proposed and evaluated, and their capacity contribution is calculated through the effective load carrying capability. Furthermore, the microgrid concept is examined, where multi-generation technologies collaborate to provide capacity services to internal customers but also to the remaining network. Finally, a distributed software infrastructure is examined which could be effectively used to support services in smart grids. The underlying framework for the reliability analysis is based on sequential Monte Carlo simulations, capturing inter-temporal constraints of the resources (payback effects, dynamic rating, DR profile, storage remaining available capacity) and the stochasticity of electrical and ICT equipment. The comprehensive distribution network reliability analysis includes network reconfiguration, the restoration process, and AC power flow calculations, supporting a full risk analysis and building the risk profile of the arising smart distribution networks. Real case studies from an ongoing project in the North West of England demonstrate the concepts and tools developed and provide noteworthy conclusions to network planners, including informing the design of DR contracts.
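
At its smallest scale, the sequential Monte Carlo framework named above amounts to sampling alternating up/down times for each component and accumulating interruption statistics over many simulated years. A sketch with invented component data:

```python
import numpy as np

rng = np.random.default_rng(7)
HOURS_PER_YEAR = 8760.0
failure_rate = 0.5   # failures per year (hypothetical feeder component)
repair_hours = 4.0   # mean time to repair, hours (hypothetical)

def simulate_year_downtime():
    """Sequentially sample up/down cycles over one calendar year."""
    t, down = 0.0, 0.0
    while t < HOURS_PER_YEAR:
        t += rng.exponential(HOURS_PER_YEAR / failure_rate)  # time to failure
        if t >= HOURS_PER_YEAR:
            break
        r = rng.exponential(repair_hours)                    # repair duration
        down += min(r, HOURS_PER_YEAR - t)
        t += r
    return down

downtime = np.array([simulate_year_downtime() for _ in range(20_000)])
print(f"Mean interruption duration: {downtime.mean():.2f} h/year "
      f"(analytic ~ {failure_rate * repair_hours:.2f})")
```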
9

Vannini, Alessandro. "Human Reliability Analysis for Dynamic Risk Assessment: a case of ammonia production plant". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
Human and organizational factors play a key role in the prevention and mitigation of major accidents. Quantitative Risk Analysis (QRA) considers only technical factors as accident causes. This analysis can therefore be integrated with Human Reliability Analysis (HRA) techniques, but their application is still largely limited to the nuclear sector. Moreover, the static nature of QRA has laid the groundwork for dynamic risk assessment. In the present work, a generic ammonia production plant was considered, for which a database of accidents and near misses was created; it showed that human factors are the second most frequent cause of accidents. The catastrophic rupture of an ammonia storage tank that occurred in Lithuania in 1989 was then taken as a representative case study, and a bow-tie analysis was performed to identify the causes of the rupture and the safety barriers involved. Subsequently, three methods for the analysis of human and organizational factors were applied to the case study. The REWI (Early Warning Indicator) method, based on the concept of resilience, establishes a set of indicators whose periodic monitoring can contribute to proactive risk management. The Petro-HRA method is an innovative Human Reliability Analysis technique developed for the petrochemical industry; it provides a systematic way to evaluate human and organizational factors through a detailed procedure. Finally, the TECnical Operational and Organizational factors (TEC2O) method was applied for dynamic risk assessment. This method considers technical, human and organizational factors, combining the advantages of HRA methods with the dynamic and resilience-oriented features of the REWI methodology. Its results yield a more complete and realistic risk assessment and make it possible to identify the characteristics of each method considered.
10

Shirley, Rachel B. "Science Based Human Reliability Analysis: Using Digital Nuclear Power Plant Simulators for Human Reliability Research". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu149428353178302.

11

Dwire, Heather B. "RISK BASED ANALYSIS AND DESIGN OF STIFFENED PLATES". Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1208453129.

12

Mazumder, Ram Krishna. "Risk-Based Asset Management Framework for Water Distribution Systems". Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1594169243438607.

13

O'Connor, Andrew N. "A general cause based methodology for analysis of dependent failures in system risk and reliability assessments". Thesis, University of Maryland, College Park, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587283.

Abstract:

Traditional parametric Common Cause Failure (CCF) models quantify the soft dependencies between component failures through the use of empirical ratio relationships. Furthermore, CCF modeling has been essentially restricted to identical components in redundant formations. While this has been advantageous in allowing the prediction of system reliability with little or no data, it has been prohibitive in other applications such as modeling the characteristics of a system design or including the characteristics of failure when assessing the risk significance of a failure or degraded-performance event (known as an event assessment).

This dissertation extends the traditional definition of CCF to model soft dependencies between like and non-like components. It does this through the explicit modeling of soft dependencies between systems (coupling factors), such as sharing a maintenance team or sharing a manufacturer. By modeling the soft dependencies explicitly, these relationships can be individually quantified based on the specific design of the system, allowing for more accurate event assessment given knowledge of the failure cause.

Since the most data-informed model in use is the Alpha Factor Model (AFM), it has been used as the baseline for the proposed solutions. This dissertation analyzes the US Nuclear Regulatory Commission's Common Cause Failure Database event data to determine the suitability of the data and failure taxonomy for use in the proposed cause-based models. Recognizing that CCF events are characterized by the full or partial presence of a "root cause" and a "coupling factor", a refined failure taxonomy is proposed which provides a direct link between the failure cause category and the coupling factors.

This dissertation proposes two CCF models: (a) the Partial Alpha Factor Model (PAFM), which accounts for the relevant coupling factors based on system design and provides event assessment with knowledge of the failure cause, and (b) the General Dependency Model (GDM), which uses a Bayesian network to model the soft dependencies between components. This is done through the introduction of three parameters for each failure cause that relate to component fragility, failure cause rate, and failure cause propagation probability.
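
For context on the baseline model, the alpha factor model converts estimated fractions of failure events involving exactly k of m components into basic-event probabilities. The sketch below uses one commonly cited (non-staggered-testing) form of the relation with invented alpha values, not data from the dissertation:

```python
from math import comb

def alpha_factor_probs(alphas, q_total):
    """Basic-event probabilities Q_k for a common cause group of size m.

    Uses the non-staggered-testing alpha factor relation
        Q_k = k / C(m-1, k-1) * (alpha_k / alpha_t) * q_total,
    with alpha_t = sum(k * alpha_k). Illustrative sketch only.
    """
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * (a / alpha_t) * q_total
            for k, a in enumerate(alphas, start=1)]

# Hypothetical 3-component group: 95% of failure events are independent,
# 4% involve two components, 1% involve all three; total rate Q_t = 1e-3.
for k, q in enumerate(alpha_factor_probs([0.95, 0.04, 0.01], 1e-3), start=1):
    print(f"Q_{k} = {q:.3e}")
```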

14

He, Longxue [Verfasser], and Michael [Akademischer Betreuer] Beer. "Advanced Bayesian networks for reliability and risk analysis in geotechnical engineering / Longxue He ; Betreuer: Michael Beer". Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://nbn-resolving.de/urn:nbn:de:101:1-2020031901080232795085.

15

He, Longxue [Verfasser], and Michael [Akademischer Betreuer] Beer. "Advanced Bayesian networks for reliability and risk analysis in geotechnical engineering / Longxue He ; Betreuer: Michael Beer". Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1206685883/34.

16

Wang, Ruoqi. "Reliability-based fatigue assessment of existing steel bridges". Licentiate thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281997.

Abstract:
Fatigue is among the most critical forms of deterioration damage that occurs to steel bridges. It causes a decline of the safety level of bridges over time. Therefore, the performance of steel bridges, which may be seriously affected by fatigue, should be assessed and predicted. There are several levels of uncertainty involved in the crack initiation and propagation process; therefore probabilistic methods can provide a better estimation of fatigue lives than deterministic methods. When there are recurring similar details which may be correlated with each other and can be regarded as a system, there are distinct advantages to analysing them from a system reliability perspective. It allows the engineer to identify the importance of an individual detail or the interaction between details with respect to the overall performance of the system. The main aim of this licentiate thesis is to evaluate probabilistic methods for reliability assessment of steel bridges, from both a single detail level and a system level. For single details, an efficient simulation technique is desired. The widely applied Monte Carlo simulation method provides accurate estimation but is very time-consuming. The subset simulation method is investigated as an alternative and it shows great feasibility in dealing with a multi-dimensional limit state function and nonlinear crack propagation. For larger systems, the spatial correlation between details is considered. An equicorrelation-based modelling approach has been proposed as a supplement to common simulation techniques to estimate the system reliability analytically and significantly reduce the simulation time. With correlation considered, the information from one accessible detail could be used to predict the status of the system. While reliability analysis aims for a specific safety level, risk analysis aims to find the most optimal solution. With consequences considered, a risk-based decision support framework is formulated for the selected system, which is presented as a decision tree. It reveals that decisions based on reliability assessment can differ from those based on risk analysis, since they have different objective criteria.
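
The equicorrelation idea admits a compact illustration: equicorrelated standard normal safety margins share a single common factor, so series-system failure probability can be estimated by conditioning on that factor. A minimal sketch with a hypothetical detail reliability index and correlation, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 3.0        # reliability index of each fatigue detail (hypothetical)
rho = 0.4         # equicorrelation between details (hypothetical)
n_details, n_sim = 10, 200_000

# Equicorrelated margins via a common factor: Z_i = sqrt(rho)*U + sqrt(1-rho)*V_i.
U = rng.standard_normal((n_sim, 1))
V = rng.standard_normal((n_sim, n_details))
Z = np.sqrt(rho) * U + np.sqrt(1 - rho) * V

# Series system: the weakest detail governs; failure when any Z_i < -beta.
pf_system = np.mean((Z < -beta).any(axis=1))
pf_detail = np.mean(Z[:, 0] < -beta)
print(f"P_f detail ~= {pf_detail:.2e}, P_f series system ~= {pf_system:.2e}")
```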
17

Wallnerström, Carl Johan. "On Risk Management of Electrical Distribution Systems and the Impact of Regulations". Licentiate thesis, KTH, Electromagnetic Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4717.

Abstract:

The Swedish electricity market was de-regulated in 1996, followed by new laws and a new regulation applied to the natural monopolies of electrical distribution systems (EDS). These circumstances have motivated distribution system operators (DSOs) to introduce more comprehensive analysis methods. The laws, the regulation and additional incentives have been investigated within this work, and the results from this study can be valuable when developing risk methods or other quantitative methods applied to EDS. This tendency is not unique to Sweden; results from a comparative study of customer outage compensation laws in Sweden and the UK are, for example, included.

As a part of investigating these incentives, studies of the Swedish regulation of customer network tariffs have been performed, which provide valuable lessons for developing regulation models in different countries. The Swedish regulatory model, referred to as the Network Performance Assessment Model (NPAM), was created for one of the first de-regulated electricity markets in the world and has a unique and novel approach. For the first time, an overall presentation of the NPAM is given, including a description of the underlying theory, as a part of this work. However, the model has met with difficulties and its future usage is uncertain. Furthermore, the robustness of the NPAM has been evaluated in two studies, with the main conclusion that the NPAM is sensitive toward small variations in input data. Results from these studies are explained theoretically by investigating algorithms of the NPAM.

A pre-study of a project on developing international test systems is presented; this ongoing project aims to be a useful input when developing risk methods. An application study is included that systematically describes the overall risk management process at a DSO, including an evaluation and ideas for future developments. The main objective is to support DSOs in the development of risk management and to give academic reference material that utilizes industry experience. An idea for a risk management classification has been concluded from this application study. The study provides an input to the final objective of a quantitative risk method.

18

Zeng, Diqi. "Cyclone risk assessment of large-scale distributed infrastructure systems". Thesis, University of Sydney, 2021. https://hdl.handle.net/2123/24514.

Abstract:
Coastal communities are vulnerable to tropical cyclones. Community resilience assessment for hazard mitigation planning demands a whole-of-community approach to risk assessment under tropical cyclones. Community risk assessment is complicated since it must capture the spatial correlation among individual facilities due to similar demands placed by a cyclone event and similar infrastructure capacities due to common engineering practices. However, the impact of such spatial correlation has seldom been considered in cyclone risk assessment. This study develops advanced stochastic models and methodology to evaluate the collective risk of large-scale distributed infrastructure systems under a scenario tropical cyclone, considering the spatial correlations of wind demands and structural capacities modelled by fragility functions. Wind-dependent correlation of fragility functions is derived from the correlation of structural resistances using joint fragility analysis. A general probabilistic framework is proposed to evaluate the damage of infrastructure systems based on joint fragility functions, where the stochastic dependence between the fragility functions of individual facilities is approximated by a Gaussian copula. A stochastic model is developed to model the spatially correlated wind speeds from a tropical cyclone, when wind speed statistics based on three cyclone wind field models of different complexity are examined. The impact of wind speed uncertainty and spatial correlation on risk assessment is investigated by evaluating the cyclone loss of an electric power system, when three loss metrics are examined including damage ratio, power outage ratio and outage cost to electricity customers. Since the risk assessment of a large-scale infrastructure system is computationally challenging, an interpolation technique based on random field discretization is developed, which can simulate spatially correlated damage to infrastructure components in a scalable manner.
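
The Gaussian-copula approximation described above can be sketched directly: sample correlated standard normals, push them through the normal CDF, and compare the resulting uniforms with each facility's fragility value at its local wind speed. All fragilities and the correlation below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_sim = 100_000

# Hypothetical per-facility failure probabilities at the scenario's
# local wind speeds (fragility functions already evaluated).
p_fail = np.array([0.10, 0.15, 0.08, 0.20])

# Equicorrelated Gaussian copula, correlation 0.5 (hypothetical).
rho, m = 0.5, len(p_fail)
cov = rho * np.ones((m, m)) + (1 - rho) * np.eye(m)
z = rng.multivariate_normal(np.zeros(m), cov, size=n_sim)
u = norm.cdf(z)            # correlated uniforms via the copula
damaged = u < p_fail       # facility i fails when u_i < p_i

print("Marginal rates:", damaged.mean(axis=0).round(3))
print("P(all four fail):", damaged.all(axis=1).mean(),
      "vs independence:", np.prod(p_fail))
```

The comparison on the last line shows why the spatial correlation matters: joint failures become far more likely than the independence assumption would suggest.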
19

Jenelius, Erik. "Large-Scale Road Network Vulnerability Analysis". Doctoral thesis, KTH, Transport och lokaliseringsanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24952.

Abstract:
Disruptions in the transport system can have severe impacts for affected individuals, businesses and the society as a whole. In this research, vulnerability is seen as the risk of unplanned system disruptions, with a focus on large, rare events. Vulnerability analysis aims to provide decision support regarding preventive and restorative actions, ideally as an integrated part of the planning process. The thesis specifically develops the methodology for vulnerability analysis of road networks and considers the effects of suddenly increased travel times and cancelled trips following road link closures. The major part consists of model-based studies of different aspects of vulnerability, in particular the dichotomy of system efficiency and user equity, applied to the Swedish road network. We introduce the concepts of link importance as the overall impact of closing a particular link, and regional exposure as the impact for individuals in a particular region of, e.g., a worst-case or an average-case scenario (Paper I). By construction, a link is important if the normal flow across it is high and/or the alternatives to this link are considerably worse, while a traveller is exposed if a link closure along her normal route is likely and/or the best alternative is considerably worse. Using regression analysis we show that these relationships can be generalized to municipalities and counties, so that geographical variations in vulnerability can be explained by variations in network density and travel patterns (Paper II). The relationship between overall impacts and user disparities is also analyzed for single link closures and is found to be negative, i.e., the most important links also have the most equal distribution of impacts among individuals (Paper III). In addition to links' roles for transport efficiency, the thesis considers their importance as rerouting alternatives when other links are disrupted (Paper IV). Such redundancy-important roads, often found to be running in parallel to highways with heavy traffic, may be warranted a higher standard than their typical use would suggest. We also study the vulnerability of the road network under area-covering disruptions, representing for example flooding, heavy snowfall or forest fires (Paper V). In contrast to single link failures, the impacts of this kind of event are largely determined by the population concentration, more precisely the travel demand within, in and out of the disrupted area itself, while the density of the road network is of small influence. Finally, the thesis approaches the issue of how to value the delays that are incurred by network disruptions and, using an activity-based modelling approach, we illustrate that these delay costs may be considerably higher than the ordinary value of time, in particular during the first few days after the event when travel conditions are uncertain (Paper VI).
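
The link importance measure of Paper I lends itself to a toy computation: close each link in turn and measure the increase in total travel cost over an origin-destination demand set, with a penalty for cancelled trips. A sketch using networkx on an invented network (all weights and demands are hypothetical):

```python
import networkx as nx

# Toy road network: edge weights are travel times in minutes (invented).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 10), ("B", "C", 12), ("A", "C", 30),
                           ("C", "D", 8), ("B", "D", 25)])
demands = [("A", "D", 100), ("A", "C", 50)]  # (origin, destination, trips)

def total_cost(graph, unsatisfied_penalty=120):
    cost = 0.0
    for o, d, trips in demands:
        try:
            cost += trips * nx.shortest_path_length(graph, o, d, weight="weight")
        except nx.NetworkXNoPath:   # cancelled trips receive a penalty cost
            cost += trips * unsatisfied_penalty
    return cost

base = total_cost(G)
for u, v in list(G.edges):
    H = G.copy()
    H.remove_edge(u, v)  # simulate the link closure
    print(f"closing {u}-{v}: importance = {total_cost(H) - base:.0f} trip-min")
```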
20

RAMOS, Marilia Abílio. "A methodology for human reliability analysis of oil refinery and petrochemical operations: the HERO (Human Error in Refinery Operations) HRA methodology". Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24864.

Abstract:
ANP (Agência Nacional do Petróleo)
Petrobras
The oil industry has grown in recent decades in terms of quantity of facilities and process complexity. However, human and material losses still occur due to major accidents at the facility. The analysis of these accidents reveals that many involve human failures that, if prevented, could avoid such accidents. These failures, in turn, can be identified, modeled and quantified through Human Reliability Analysis (HRA), which forms a basis for prioritization and development of safeguards for preventing or reducing the frequency of accidents. The most advanced and reliable HRA methods have been developed and applied in nuclear power plant operations, while the petroleum industry has usually applied Quantitative Risk Analysis (QRA) focusing on process safety in terms of technical aspects of the operation and equipment. This thesis demonstrates that the use of HRA in oil refining and petrochemical operations allows the identification and analysis of factors that can influence the behavior of operators as well as the potential human errors that can contribute to the occurrence of an accident. Existing HRA methodologies, however, were mainly developed for the nuclear industry. Thus, they may not reflect the specificities of refining and petrochemical plants regarding the interaction of the operators with the plant, the failure modes of the operators and the factors that influence their actions. Thus, this thesis presents an HRA methodology developed specifically for use in this industry, HERO - Human Error in Refinery Operations HRA Methodology. The Phoenix HRA methodology was used as a basis, which has three layers i) a crew response tree (CRT), which models the interaction between the crew and the plant; ii) a human response model, modeled through fault trees, that identifies the possible crew failures modes (CFMs); and (iii) "contextual factors" known as performance influencing factors (PIFs), modeled through Bayesian networks. In addition to building on such a structure, HERO's development relied on interviews with HRA specialists, visitations to a refinery and its control room, and analysis of past oil refineries accidents - four accidents were analyzed in detail. The methodology developed maintains the three-layer structure and has a guideline flowchart for the construction of the CRT, in order to model the team-plant interactions in oil refining and petrochemical operations; it also features CFMs and PIFs developed specifically for this industry, with definitions that make them easily relatable by an analyst. Finally, the methodology was applied to three potential accidental scenarios of refinery operations. In one of these scenarios, it was combined with a QRA to illustrate how an HRA can be applied to a traditional QRA and to demonstrate the influence of PIFs and of human error probability on the final risk. The use of this methodology for HRA of refineries and petrochemical plants operations can enhance this industry safety and allow for solid riskbased decisions.
21

Zhu, Weiqi, i ycqq929@gmail com. "An Investigation into Reliability Based Methods to Include Risk of Failure in Life Cycle Cost Analysis of Reinforced Concrete Bridge Rehabilitation". RMIT University. Civil, Environmental and Chemical Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080822.140447.

Abstract:
Reliability-based life cycle cost analysis is becoming an important consideration for decision-making in relation to bridge design, maintenance and rehabilitation. An optimal solution should ensure reliability during the service life while minimizing the life cycle cost. Risk of failure is an important component of whole-of-life cycle cost for both new and existing structures. The research work presented here aimed to develop a methodology for evaluating the risk of failure of reinforced concrete bridges to assist in decision making on rehabilitation. The methodology proposed here combines fault tree analysis and probabilistic time-dependent reliability analysis to achieve qualitative and quantitative assessment of the risk of failure. Various uncertainties are considered, including the degradation of resistance due to the initiation of a particular distress mechanism, increasing load effects, changes in resistance as a result of rehabilitation, environmental variables, material properties and model errors. It was shown that the proposed methodology provides users with two alternative approaches for qualitative or quantitative assessment of the risk of failure, depending on the availability of detailed data. This work will assist the managers of bridge infrastructure in making decisions in relation to the optimization of rehabilitation options for ageing bridges.
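
The time-dependent reliability component can be sketched as a Monte Carlo estimate of failure probability with a degrading resistance; probabilities of this kind would feed the fault tree as basic events. The distributions and degradation rate below are invented, not the thesis's models:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

def pf_at_year(t):
    """P(load effect exceeds degraded resistance) at year t (sketch)."""
    R0 = rng.normal(100.0, 10.0, n)   # initial resistance (hypothetical units)
    degradation = 1.0 - 0.006 * t     # linear resistance loss (hypothetical)
    S = rng.gumbel(55.0, 6.0, n)      # annual maximum load effect
    return np.mean(R0 * degradation <= S)

for t in (0, 25, 50, 75):
    print(f"year {t:2d}: P_f ~= {pf_at_year(t):.4f}")
```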
22

Braik, Abdullah Mousa Darwish. "RELIABILITY AND COST ANALYSIS OF POWER DISTRIBUTION SYSTEMS SUBJECTED TO TORNADO HAZARD". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1543584694806575.

23

Beauchamp, Nicolas. "Methods for estimating reliability of water treatment processes : an application to conventional and membrane technologies". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2434.

Abstract:
Water supply systems aim, among other objectives, to protect public health by reducing the concentration of, and potentially eliminating, microorganisms pathogenic to human beings. Yet, because water supply systems are engineered systems facing variable conditions, such as raw water quality or treatment process performance, the quality of the drinking water produced also exhibits variability. The reliability of a treatment system is defined in this context as the probability of producing drinking water that complies with existing microbial quality standards. This thesis examines the concept of reliability for two physicochemical treatment technologies, conventional rapid granular filtration and ultrafiltration, used to remove the protozoan pathogen Cryptosporidium parvum from drinking water. First, fault tree analysis is used as a method of identifying technical hazards related to the operation of these two technologies and to propose ways of minimizing the probability of failure of the systems. This method is used to compile operators’ knowledge into a single logical diagram and allows the identification of important processes which require efficient monitoring and maintenance practices. Second, an existing quantitative microbial risk assessment model is extended to be used in a reliability analysis. The extended model is used to quantify the reliability of the ultrafiltration system, for which performance is based on full-scale operational data, and to compare it with the reliability of rapid granular filtration systems, for which performance is based on previously published data. This method allows for a sound comparison of the reliability of the two technologies. Several issues remain to be addressed regarding the approaches used to quantify the different input variables of the model. The approaches proposed herein can be applied to other water treatment technologies, to aid in prioritizing interventions to improve system reliability at the operational level, and to determine the data needs for further refinements of the estimates of important variables.
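
The reliability definition used here, the probability of producing water that complies with a microbial target, can be sketched with a standard exponential dose-response model and a variable treatment performance. All numbers below are invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Hypothetical inputs: raw-water Cryptosporidium concentration (oocysts/L),
# variable log10 removal by treatment, 1 L/day consumption.
raw = rng.lognormal(mean=np.log(2.0), sigma=0.8, size=n)
log_removal = rng.normal(5.0, 0.5, size=n)    # treatment performance
dose = raw * 10.0 ** (-log_removal)           # daily ingested dose

r = 0.004                                     # dose-response parameter (assumed)
p_inf_daily = 1.0 - np.exp(-r * dose)         # exponential dose-response model
annual_risk = 1.0 - (1.0 - p_inf_daily) ** 365

target = 1e-4                                 # annual infection-risk benchmark
print(f"Reliability = P(annual risk <= {target}): "
      f"{np.mean(annual_risk <= target):.3f}")
```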
24

Gonçalves, Arnaldo. "Um estudo da implementação da FMEA (failure mode and effects analysis) sob a otica de gerenciamento de projetos". [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264207.

Advisor: Olivio Novaski
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Abstract: The continuous search for improvements in products, processes, systems and services, stressed by fast-growing competition, has led organizations to experiment with methodologies which can improve performance figures. Aspects related to costs, timing, quality, flexibility and reliability are strategic in assuring a differential to survive in the business with higher competitiveness. These demands oblige organizations to consider more integration among areas, transcending the technical character towards a more holistic approach. The FMEA methodology, by providing a link between a quality management system's input and output processes, is considered mandatory and of high relevance by many quality management systems. FMEA implementation is quite complex, as it involves effective interaction among distinct elements to assure that customers' needs are fulfilled through the product or service characteristics. The aims of this work were: (i) in the first instance, to study the state of the art of the FMEA technique, by checking the strategic interfaces with other tools that assure its effectiveness under a quality management system, and (ii) to apply and evaluate the contribution of Project Management methodology to the implementation of an FMEA, focusing on it as a project. A case study was made in an automotive parts industry, defining and monitoring the FMEA efficiency, characterized by productive and on-time sessions, as well as its efficacy, represented by its influence on quality costs, product conformance and customer satisfaction. The positive and significant results obtained since the application of the new project management processes encourage the use of this approach in FMEA implementation to boost its effectiveness.
Master's Degree
Manufacturing Engineering
Master in Mechanical Engineering
Style APA, Harvard, Vancouver, ISO itp.
25

Ng, Anthony Kwok-Lung. "Risk Assessment of Transformer Fire Protection in a Typical New Zealand High-Rise Building". Thesis, University of Canterbury. Civil Engineering, 2007. http://hdl.handle.net/10092/1223.

Pełny tekst źródła
Streszczenie:
Prescriptively, the requirement for fire safety protection systems in distribution substations is not provided in the compliance document for fire safety of the New Zealand Building Code. The New Zealand Fire Service (NZFS) therefore proposed a list of fire safety protection requirements for distribution substations in a letter dated 10th July 2002. A review by Nyman [1] considered the fire safety requirements proposed by the NZFS and discussed the issues with a number of fire engineers over the last three years. Nyman was concerned that one of the requirements, the four-hour fire separation between the distribution substation and the interior spaces of the building, may not be necessary when considering the risk exposure to the building occupants in different situations, such as the involvement of sprinkler systems and the use of transformers with a lower fire hazard. Fire resistance rating (FRR) typically means the time duration for which a passive fire protection system, such as fire barriers, fire walls and other fire-rated building elements, can maintain its integrity, insulation and stability in a standard fire endurance test. Based on the literature review and discussions with industry experts, it was found that failure of a passive fire protection system in a real fire could occur earlier than the time indicated by the fire resistance rating derived from the standard test, depending on the characteristics of the actual fire (heat release rate, fire load density and fire location) and of the fire compartment (geometry, ventilation conditions, openings, building services and equipment). Hence, a higher level of fire safety, such as four-hour fire-rated construction and the use of a sprinkler system, may significantly reduce the fire risk to the health and safety of occupants in the building, but can never eliminate it. This report presents a fire engineering Quantitative Risk Assessment (QRA) of a transformer fire initiating in a distribution substation inside a high-rise residential and commercial mixed-use building. It compares the fire safety protection requirements for distribution substations from the NZFS with other relevant documents worldwide: the regulatory standards in New Zealand, Australia and the United States of America, as well as non-regulatory guidelines from other stakeholders, such as electrical engineering organisations, insurance companies and electricity providers. The report also examines the characteristics of historical data for transformer fires in distribution substations in both New Zealand and United States buildings. The reliability of active fire safety protection systems, such as smoke detection systems and sprinkler systems, is also reviewed. Based on the data analysis, a fire risk estimate is determined using an Event Tree Analysis (ETA) for a total of 14 scenarios with different fire safety designs and transformer types for a distribution substation in a high-rise residential and commercial mixed-use building. In Scenarios 1 to 10, different combinations of fire safety systems are evaluated with the same type of transformer, a flammable-liquid (mineral oil) insulated transformer. In Scenarios 11 to 14, two particular fire safety designs are selected as a baseline for the analysis of transformer types.
Two types of transformer with a lower fire hazard are used to replace the flammable-liquid (mineral oil) insulated transformer in the distribution substation: less-flammable-liquid (silicone oil) insulated transformers and dry-type (dry air) transformers. The fire risk estimate is computed with the software package @Risk 4.5. The results of the event tree analysis are used in a cost-benefit analysis. The cost-benefit ratios are measured from the reduction in fire risk exposure to the building occupants relative to the investment cost of each alternative over its respective base case. The outcomes of the assessment show that the proposed four-hour fire separation between the distribution substations and the interior spaces of the building, when no sprinkler systems are provided, is not the most cost-effective alternative for the life safety of occupants: the cost-benefit ratio of this scenario ranks fifth. The most cost-effective alternative is the scenario with 30-minute fire separation and a sprinkler system installed. In addition, replacing a flammable-liquid insulated transformer with a less-flammable-liquid insulated transformer or a dry-type transformer is generally an economical alternative. From the QRA, it is concluded that three-hour fire separation is appropriate for distribution substations containing a flammable-liquid insulated transformer and associated equipment in non-sprinklered buildings. The fire rating of the separating construction can be reduced to 30 minutes FRR if a sprinkler system is installed. This conclusion also agrees with the requirements of the National Fire Protection Association (NFPA).
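The event-tree arithmetic behind such a QRA can be sketched in a few lines: the probability of each path is the product of its branch probabilities, and the risk estimate weights each outcome by its consequence. The barrier names, probabilities and consequence model below are invented for illustration; they are not the thesis's data.

```python
import itertools

# Hypothetical event tree for a substation transformer fire; all
# numbers are illustrative only.
P_IGNITION = 0.05  # annual probability of a transformer fire
branches = {       # probability that each safety barrier works
    "detection_works": 0.95,
    "sprinkler_works": 0.90,
    "separation_holds": 0.80,
}

def consequence(det, spr, sep):
    """Expected fatalities per fire for one branch combination."""
    base = 0.001
    if not det: base *= 10
    if not spr: base *= 20
    if not sep: base *= 50
    return base

annual_risk = 0.0
for outcome in itertools.product([True, False], repeat=len(branches)):
    p_path = P_IGNITION
    for works, p in zip(outcome, branches.values()):
        p_path *= p if works else (1.0 - p)
    annual_risk += p_path * consequence(*outcome)

print(f"expected annual risk: {annual_risk:.2e} fatalities/year")
```

A cost-benefit ratio for a design alternative then compares the reduction in this risk estimate against the annualised investment cost of the alternative, scenario by scenario.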
Style APA, Harvard, Vancouver, ISO itp.
26

Setréus, Johan. "On Reliability Methods Quantifying Risks to Transfer Capability in Electric Power Transmission Systems". Licentiate thesis, KTH, Electromagnetic Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10258.

Pełny tekst źródła
Streszczenie:

In the operation, planning and design of the transmission system, it is of great concern to quantify the security margin to unwanted conditions. The deterministic N-1 criterion has traditionally provided this security margin to reduce the consequences of severe conditions such as widespread blackouts. However, a deterministic criterion does not include the likelihood of different outage events. Moreover, experience from blackouts, e.g. Sweden-Denmark in September 2003, shows that the initiating outages are not always captured by the N-1 criterion. The question addressed in this thesis is how this system security margin can be quantified with probabilistic methods. A quantitative measure provides one valuable input to the decision-making process of selecting, for example, system expansion alternatives and maintenance actions in the planning and design phases. It is also beneficial for operators in the control room to assess the security margin of existing and future network conditions.

This thesis presents a method that assesses each component's contribution to the risk of insufficient transfer capability in the transmission system, indicating each component's importance to the system security margin. It provides a systematic analysis and ranking of outage events by their risk of overloading critical transfer sections (CTS) in the system. The severity of each critical event is quantified in a risk index based on the likelihood of the event and its consequence for the section's transmission capacity. This enables the risk of a frequent outage event with small CTS consequences to be compared with that of a rare event with large consequences.

The developed approach has been applied to the well-known Roy Billinton Test System (RBTS). The results show that the ranking of the components is highly dependent on the substation modelling and on the studied system load level.

By restricting the evaluation to the risks to the transfer capability in a few CTSs, the method provides a quantitative ranking of the potential risks to the system security margin at different load levels. Consequently, the developed reliability-based approach provides information which could improve on the deterministic criterion for transmission system planning.
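In its simplest form, the risk index described above weights each outage event's frequency by its transfer-capacity consequence, so a frequent small event and a rare large event can be compared on one scale. The events and figures below are hypothetical, not results from the RBTS study.

```python
# Hypothetical outage events threatening one critical transfer section
# (CTS); frequencies and consequences are invented for illustration.
events = [
    # (event,                 outages/year, CTS capacity lost, MW)
    ("line L1 outage",        0.50,          50.0),
    ("transformer T2 outage", 0.02,         400.0),
    ("busbar B3 fault",       0.01,         800.0),
]

# risk index = likelihood x consequence; rank events by it
for name, freq, mw in sorted(events, key=lambda e: e[1] * e[2],
                             reverse=True):
    print(f"{name:24s} risk index = {freq * mw:6.1f} MW/yr")
```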

Style APA, Harvard, Vancouver, ISO itp.
27

Valenzuela-Beltrán, Federico, Sonia Ruiz, Alfredo Reyes-Salazar i J. Gaxiola-Camacho. "On the Seismic Design of Structures with Tilting Located within a Seismic Region". MDPI AG, 2017. http://hdl.handle.net/10150/626403.

Pełny tekst źródła
Streszczenie:
A reliability-based criterion to estimate strength amplification factors for buildings with asymmetric yielding located within a seismic region presenting different soil conditions is proposed and applied. The approach involves the calculation of the mean annual rate of exceedance of structural demands of systems with different levels of asymmetric yielding. Two simplified mathematical expressions are developed considering different soil conditions of the valley of Mexico. The mathematical expressions depend on the ductility of the structural systems, their level of asymmetric yielding, their fundamental vibration period and the dominant period of the soil. In addition, the proposed expressions are compared with that recommended by the current Mexico City Building Code (MCBC). Since the expressions are developed with the help of simplified structural systems, the validity of such expressions is corroborated by comparing the expected ductility demand of multi-degree of freedom (MDOF) structural systems with respect to that of their equivalent simplified systems. Both structural representations are associated with a given annual rate of exceedance value of an engineering demand parameter. The expressions proposed in this study will be incorporated in the new version of the MCBC.
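The core quantity of such a criterion, the mean annual rate of exceeding a structural demand level, can be sketched by combining a seismic hazard curve with a conditional exceedance model. The hazard curve and lognormal parameters below are hypothetical, not the calibrated expressions of the paper.

```python
import numpy as np
from math import erf, log, sqrt

# Illustrative hazard curve: annual rate of exceeding each spectral
# acceleration level (parameters invented).
sa = np.linspace(0.05, 2.0, 200)        # spectral acceleration (g)
lam = 1e-2 * (sa / 0.1) ** -2.5         # exceedance rate, per year
d_lam = -np.diff(lam)                   # annual rate within each Sa bin
sa_mid = 0.5 * (sa[1:] + sa[:-1])

def p_exceed(sa_val, d=4.0, beta=0.5):
    """P(ductility demand > d | Sa): lognormal with median 3*Sa."""
    z = (log(d) - log(3.0 * sa_val)) / beta
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# nu(d) = sum over Sa bins of P(D > d | Sa) * annual rate of the bin
nu = sum(p_exceed(s) * dl for s, dl in zip(sa_mid, d_lam))
print(f"mean annual rate of exceeding ductility 4: {nu:.2e} /yr")
```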
Style APA, Harvard, Vancouver, ISO itp.
28

El, Khoury John. "Accounting for Risk and Level of Service in the Design of Passing Sight Distances". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29805.

Pełny tekst źródła
Streszczenie:
Current design methods in transportation engineering do not simultaneously address the levels of risk and service associated with the design and use of various highway geometric elements. Passing sight distance (PSD) is an example of a geometric element designed with no risk measures. PSD is provided to ensure the safety of passing maneuvers on two-lane roads. Many variables determine the minimum length required for a safe passing maneuver; these are random variables representing a wide range of human and vehicle characteristics. Current PSD design practice replaces these random variables with single-value means in the calculation process, disregarding their inherent variations. The research focuses on three main objectives. The first is to derive a PSD distribution that accounts for the variations in the contributing parameters. Two models are devised for this purpose: a Monte Carlo simulation model and a closed-form analytical estimation model. The results of the two models verify each other and differ by less than 5 percent. Using the PSD distribution, the reliability index of the current PSD criteria is assessed. The second goal is to attach risk indices to the various PSD lengths of the obtained distribution. A microscopic simulation is devised to replicate passing maneuvers on two-lane roads. Using the simulation results, the author assesses the risk of various PSD lengths for a specific design speed. The risk indices of the AASHTO Green Book and the MUTCD PSD standards are also obtained through simulation. With risk measures attached to the PSD lengths, a trade-off analysis between level of service and risk becomes feasible. The last task applies Highway Capacity Manual concepts to assess the service measures of the different PSD lengths. The results of the final trade-off analysis show that, for a design speed of 50 mph, the AASHTO Green Book and MUTCD standards overestimate the PSD requirements. The criteria can be reduced to 725 ft and still remain within an acceptable risk level.
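A Monte Carlo treatment of PSD like the one described can be sketched by sampling the contributing random variables and reading percentiles off the resulting distribution. The distributions and the simplified PSD model below are invented for illustration; they are not AASHTO's values or the dissertation's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Illustrative input distributions for a ~50 mph (73.3 ft/s) manoeuvre.
v_passing = rng.normal(73.3, 5.0, N)                 # ft/s
v_opposing = rng.normal(73.3, 5.0, N)                # ft/s
t_perception = rng.lognormal(np.log(1.5), 0.3, N)    # s
t_in_lane = rng.normal(9.0, 1.0, N)                  # s in opposing lane

# Crude PSD model: perception travel plus closure with the opposing
# vehicle while the passing vehicle occupies its lane.
psd = v_passing * t_perception + (v_passing + v_opposing) * t_in_lane

for q in (50, 85, 95):
    print(f"{q}th percentile PSD: {np.percentile(psd, q):,.0f} ft")
```

Attaching a risk index to a candidate design length then amounts to evaluating the fraction of simulated manoeuvres that the length fails to accommodate.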
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
29

Hong, William. "Aplicação do método de análise de risco ao estudo do descarrilamento". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3151/tde-20072011-094405/.

Pełny tekst źródła
Streszczenie:
This work proposes a risk analysis method applied to derailment (an incident in which the wheel loses the support provided by the rail, which can be caused by several factors such as track imperfections, rolling stock failures and obstacles on the track, and which can lead to accidents and material and human losses), in order to make rail transport safer, given that the rate of derailment occurrences is currently not decreasing; the method thus complements the computational and dynamic simulations that can be applied to the study of this event. Risk can be defined as the potential for loss resulting from exposure to a hazard, relating the probability of occurrence of an event, or combination of events, leading to a hazard with the consequence of that hazard. This concept can be used to investigate and evaluate the uncertainties associated with an event. Reliability can be defined as the probability that an item performs its function under predefined conditions of use and maintenance for a specific period of time. Considering these two concepts, a risk and reliability analysis methodology is presented for the analysis and discussion of derailment, discussing the possible parameters that can cause this event and proposing an alternative for evaluating the probability of derailment occurrence. In this way it can guide safety management for this event, since in Brazil there is no railway authority, the body ultimately responsible for regulating the operation of a railway system, which could determine the process to be followed to assure safety. The objects of study are railway vehicles and, consequently, the elements that interface with this type of vehicle, such as track elements.
This research proposes a risk analysis method applied to the derailment event (characterized by the wheel losing support on the rail, which can be caused by many factors: rail imperfections, rolling stock failures, obstacles etc., and which can cause accidents and material and life losses) to increase the safety level of railway transport, which currently shows no decrease in the derailment rate. The method also complements the computational and dynamic simulations that can be applied to this event. Risk can be defined as the potential loss due to hazard exposure, relating the probability of occurrence of an event or combination of events leading to a hazard with the consequence of this hazard. This concept can be applied to investigate and evaluate the uncertainties related to the event. Reliability can be defined as the probability of an item performing its function under predefined use and maintenance conditions during a specific period of time. Thus, considering these two concepts, a risk and reliability analysis is presented to study the derailment event, discussing the possible parameters that can cause it and proposing alternatives to evaluate the derailment occurrence probability, in order to guide safety management, since a railway authority (a body with overall accountability to a regulator for operating a railway system, which could determine the process to be followed to assure safety levels) does not exist in Brazil. This research covers railway vehicles and, consequently, their interfaces, for example the track elements.
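The two definitions above translate directly into arithmetic: reliability as a survival probability over a mission time, and risk as probability times consequence summed over scenarios. The hazard rate and scenario figures below are invented for illustration.

```python
from math import exp

# Reliability: probability that an item performs its function for time
# t, shown here for a constant hazard rate (hypothetical value).
lam = 2e-5                 # failures per operating hour
t = 10_000.0               # mission time, hours
reliability = exp(-lam * t)

# Risk: probability x consequence, summed over derailment scenarios;
# the scenario figures are invented.
scenarios = [(1e-4, 5e6),  # (annual probability, loss)
             (1e-6, 5e8)]
annual_risk = sum(p * c for p, c in scenarios)

print(f"R(t) = {reliability:.4f}, annual risk = {annual_risk:,.0f}")
```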
Style APA, Harvard, Vancouver, ISO itp.
30

Jane, Robert. "Improving the representation of the fragility of coastal structures". Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/13080.

Pełny tekst źródła
Streszczenie:
Robust Flood Risk Analysis (FRA) is essential for effective flood risk management. The performance of flood defence assets heavily influences the estimate of an area's flood risk. It is therefore critical that the probability of a coastal flood defence asset incurring a structural failure when subjected to a particular loading, i.e. its fragility, is accurately quantified. The fragility representations of coastal defence assets presently adopted in UK National FRA (NaFRA) suffer three pertinent limitations. Firstly, assumptions in the modelling of the dependence structure of the variables that comprise the hydraulic load, including the water level, wave height and period, restrict the analysis to a single loading variable. Consequently, due to the "system wide" nature of the analysis, a defence's conditional failure probability must also be expressed in terms of a single loading, in the form of a fragility curve. For coastal defences the single loading is the overtopping discharge, an amalgamation of the basic loading variables. The prevalence of other failure initiation mechanisms may vary considerably across combinations of the basic loadings that give rise to equal overtopping discharges; hence the univariate nature of the existing representations potentially restricts their ability to accurately assess an asset's structural vulnerability. Secondly, they only consider failure at least partially initiated through overtopping, and thus neglect other pertinent initiation mechanisms acting in its absence. Thirdly, fragility representations have been derived for 61 generic assets (idealised forms of the defences found around the UK coast), each in five possible states of repair. The fragility representation associated with the generic asset and state of repair deemed to most closely resemble a particular defence is adopted to describe its fragility. Any disparity between the parameters which influence the defence's structural vulnerability in the generic form of the asset and those observed in the field is likely to further reduce the robustness of the existing fragility representations. In NaFRA, coastal flood defence assets are broadly classified as vertical walls, beaches and embankments. The latter are typically found in sheltered locations where failure is water-level driven, and hence expressing failure probability conditionally on overtopping is admissible. Therefore, new fragility representations for vertical wall and gravel beach assets are derived which address the limitations of those presently adopted in NaFRA. To achieve this aim, new procedures are proposed for extracting information on the site and structural parameters characterising a defence's structural vulnerability from relevant resources (predominantly beach profiles). In addition, novel statistical approaches are put forward for capturing the uncertainties in the parameters on the basis of the site-specific data obtained with these procedures. A preliminary validation demonstrated the apparent reliability of these approaches. The pertinent initiation mechanisms behind the structural failure of each asset type were then identified, before the state-of-the-art models for predicting the prevalence of these mechanisms during an event were evaluated. The Obhrai et al. (2008) re-formulation of the Bradbury (2000) barrier inertia model, which encapsulates all of the initiating mechanisms behind the structural failure of a beach, was reasoned to be a more appropriate model for predicting the breach of a beach than that adopted in NaFRA. Failure initiated exclusively at the toe of a seawall was explicitly accounted for in the new formulations of the fragility representations using the predictors for sand and shingle beaches derived by Sutherland et al. (2007) and Powell & Lowe (1994). In order to assess whether the new formulations warrant a place in future FRAs, they were derived for the relevant assets in Lyme Bay (UK). The inclusion of site-specific information in the derivation of fragility representations resulted in changes of several orders of magnitude in the Annual Failure Probabilities (AFPs) of the vertical wall assets. The assets deemed most vulnerable were among those assigned the lowest AFPs in the existing analysis. The site-specific data indicated that the crest elevations assumed in NaFRA are reliable; hence it appears that the more accurate specification of asset geometry, in particular the inclusion of the beach elevation in the immediate vicinity of the structure in the overtopping calculation, is responsible for the changes. The AFP was zero for many of the walls (≈ 77%), indicating that other mechanisms occurring in the absence of any overtopping are likely to be responsible for failure. Toe scour was found to be the dominant failure mechanism at all of the assets at which it was considered a plausible cause of breach. Increases of at least an order of magnitude in the AFP after the inclusion of site-specific information in the fragility representations were observed at ≈ 86% of the walls. The AFPs assigned by the new site-specific multivariate fragility representations to the beach assets were positively correlated with those prescribed by the existing representations. However, once the new representations were adopted, there was substantially more variability in the AFPs of beach assets which had previously been deemed to be in identical states of repair. As part of the work, the new and existing fragility representations were validated at assets which had experienced failure or near-failure in the recent past, using the hydraulic loading conditions recorded during the event. No appraisal of the reliability of the new representations for beaches was possible, due to an absence of any such events within Lyme Bay. Their AFPs suggest that, armed with more information about an asset's geometry, the new formulations are able to provide a more robust description of a beach's structural vulnerability. The results of the validation, as well as the magnitude of the AFPs assigned by the new representations on the basis of field data, suggest that the newly proposed representations provide the more realistic description of the structural vulnerability of seawalls. Any final conclusions regarding the robustness of the representations must be deferred until more failure data become available. The trade-off for the potentially more robust description of an asset's structural vulnerability was a substantial increase in the time required for the newly derived fragility representations to compute the failure probability associated with a hydraulic loading event. To combat this increase, (multivariate) generic versions of the new representations were derived using the structure-specific data from the assets within Lyme Bay. Although there was generally good agreement in the failure probabilities assigned to individual hydraulic loading events by the new generic representations, there was evidence of systematic error. This error has the potential to bias flood risk estimates and thus requires investigation before the new generic representations are included in future FRAs. Given the disparity in the estimated structural vulnerability of the assets according to the existing fragility curves and the site-specific multivariate representations, the new generic representations are likely to be more reliable than the existing fragility curves.
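The step from a multivariate fragility representation to an annual failure probability can be sketched by Monte Carlo integration over the joint distribution of the basic loadings. The joint loading model, event frequency and fragility function below are invented stand-ins, not the thesis's calibrated representations.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
EVENTS_PER_YEAR = 5.0   # storm events per year, hypothetical

# Correlated basic loadings at a seawall: water level (m) and
# significant wave height (m); the joint distribution is invented.
mean = [2.0, 1.5]
cov = [[0.16, 0.10],
       [0.10, 0.25]]
wl, hs = rng.multivariate_normal(mean, cov, N).T

def p_fail(wl, hs, crest=4.5, toe_level=1.2):
    """Illustrative multivariate fragility: conditional failure
    probability given both loadings, with an overtopping-led and a
    toe-scour-led mechanism (not the thesis's calibrated model)."""
    overtop = 1.0 / (1.0 + np.exp(-(wl + 0.8 * hs - crest)))
    toe = 1.0 / (1.0 + np.exp(-(hs - (wl - toe_level))))
    return 1.0 - (1.0 - overtop) * (1.0 - toe)  # either mechanism

p_event = p_fail(wl, hs).mean()
afp = 1.0 - np.exp(-EVENTS_PER_YEAR * p_event)
print(f"P(fail | event) = {p_event:.3e}, annual failure prob = {afp:.3e}")
```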
Style APA, Harvard, Vancouver, ISO itp.
31

Azizsoltani, Hamoon. "Risk Estimation of Nonlinear Time Domain Dynamic Analyses of Large Systems". Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/624545.

Pełny tekst źródła
Streszczenie:
A novel concept of multiple deterministic analyses is proposed to design safer and more damage-tolerant structures, particularly when excited by dynamic, including seismic, loading in the time domain. Since the presence of numerous sources of uncertainty cannot be avoided or overlooked, the underlying risk is estimated to compare design alternatives. To generate the implicit performance functions explicitly, the basic response surface method is significantly improved, and several surrogate models are proposed. The advanced factorial design and the Kriging method are used as the major building blocks, and seven alternative schemes are proposed from these basic components. The accuracy of these schemes is verified using basic Monte Carlo simulation. After verifying all seven alternatives, the capabilities of the three most desirable schemes are compared in a case study. They correctly identified and correlated damaged states of structural elements in terms of probability of failure using only a few hundred deterministic analyses. The modified Kriging method appears to be the best technique considering both efficiency and accuracy. From the estimated probability of failure, the post-Northridge seismic design criteria are found to be appropriate. After verifying the proposed method, a site-specific seismic safety assessment method for nonlinear structural systems is proposed to generate a suite of ground excitation time histories. The risk information is used to design more damage-tolerant structures. The proposed procedure is verified and showcased by estimating the risks associated with three buildings designed by professional experts in the Los Angeles area satisfying the post-Northridge design criteria for overall lateral deflection and inter-story drift. The accuracy of the estimated risk is again verified using the Monte Carlo simulation technique. In all cases, the probabilities of collapse are found to be less than 10% when excited by the risk-targeted maximum considered earthquake ground motion, satisfying the intent of the code. The spread in the reliability indexes for each building for both limit states cannot be overlooked, indicating the significance of the frequency contents. Inter-story drift is found to be more critical than overall lateral displacement, and the reliability indexes for the two limit states are similar in only a few cases. The author believes that the proposed methodology is an alternative to the classical random vibration and simulation approaches. The proposed site-specific seismic safety assessment procedure can be used by practicing engineers for routine applications. The proposed reliability methodology is not problem-specific; it is capable of handling systems with different levels of complexity and scalability, and it is robust enough for multi-disciplinary routine applications. To demonstrate this multi-disciplinary applicability, the probability of failure of lead-free solders in Ball Grid Array 225 surface-mount packaging for a given loading cycle is estimated, and its accuracy is verified with the help of Monte Carlo simulation. After verification, the probability of failure versus loading cycles profile is calculated. Such a comprehensive study of lifetime behavior and the corresponding reliability analyses can be useful for sensitive applications.
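The surrogate-plus-simulation idea can be sketched as follows: fit a Kriging-style interpolator to a small set of "deterministic analyses" of the performance function, then run Monte Carlo on the cheap surrogate. The performance function, kernel and sample sizes below are toy assumptions, not the schemes developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Stand-in for the implicit performance function (failure if
    g < 0); in the thesis this would be one nonlinear time-domain
    structural analysis per evaluation."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

# Small design of experiments: the "few hundred" deterministic runs
X = rng.uniform(-3.0, 3.0, size=(60, 2))
y = g(X)

def k(a, b, ell=1.0):
    """Gaussian (RBF) covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

alpha = np.linalg.solve(k(X, X) + 1e-8 * np.eye(len(X)), y)

def g_hat(x_new):
    """Kriging-style mean prediction of the performance function."""
    return k(x_new, X) @ alpha

# Monte Carlo on the surrogate instead of the expensive model
x_mc = rng.standard_normal((50_000, 2))
pf = float((g_hat(x_mc) < 0.0).mean())
print(f"estimated failure probability: {pf:.4f}")
```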
Style APA, Harvard, Vancouver, ISO itp.
32

Wilcox, Matthew Porter. "Evidence for the Validity of the Student Risk Screening Scale in Middle School: A Multilevel Confirmatory Factor Analysis". BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6599.

Pełny tekst źródła
Streszczenie:
The Student Risk Screening Scale—Internalizing/Externalizing (SRSS-IE) was developed to screen elementary-aged students for Emotional and Behavioral Disorders (EBD). Its use has been extended to middle schools with little evidence that it measures the same constructs as in elementary schools. SRSS-IE scores from a middle school population are analyzed with Multilevel Confirmatory Factor Analysis (MCFA) to examine the scale's factor structure, its factorial invariance between females and males, and its reliability. Several MCFA models are specified and compared, with two retained for further analysis. The first is a single-level model with chi-square and standard errors adjusted for the clustered nature of the data; the second is a two-level model. Both support the hypothesized two-factor structure (Externalizing and Internalizing) found in elementary populations. All items load on only one factor except Peer Rejection, which loads on both. Reliability is estimated for both models using several methods, resulting in reliability coefficients ranging from .89 to .98. Both models also show evidence of configural, metric, and scalar invariance between females and males. While more research is needed to provide other kinds of validity evidence in middle school populations, the results indicate that the SRSS-IE is an effective screening tool for EBD.
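One of the simpler reliability estimates in the range reported above can be computed directly from the item-score matrix. The sketch below uses Cronbach's alpha on simulated Likert-style data standing in for screener scores; the data and item count are invented, not the study's sample.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data standing in for 0-3 Likert ratings on 7 screener items,
# driven by one latent trait so the items correlate.
rng = np.random.default_rng(3)
trait = rng.normal(size=400)
scores = np.clip(
    np.round(trait[:, None] + rng.normal(0, 0.7, (400, 7))) + 1, 0, 3)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```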
Style APA, Harvard, Vancouver, ISO itp.
33

SCOZZESE, FABRIZIO. "AN EFFICIENT PROBABILISTIC FRAMEWORK FOR SEISMIC RISK ANALYSIS OF STRUCTURAL SYSTEMS EQUIPPED WITH LINEAR AND NONLINEAR VISCOUS DAMPERS". Doctoral thesis, Università degli Studi di Camerino, 2018. http://hdl.handle.net/11581/429547.

Pełny tekst źródła
Streszczenie:
Seismic passive protection with supplemental damping devices is an efficient strategy for producing resilient structural systems with improved seismic performance and notably reduced post-earthquake consequences. Such a strategy indeed offers several advantages over the ordinary seismic design philosophy: structural damage is prevented; the safety of the occupants is ensured and the system remains operational both during and right after the earthquake; no major retrofit interventions are needed, only a post-earthquake inspection (and, if necessary, replacement) of the dissipation devices; and a noticeable reduction of both direct and indirect outlays is achieved. However, structural systems equipped with seismic control devices (dampers) may show limited robustness, since an unexpected early failure of the dampers may lead to progressive collapse of the otherwise non-ductile system. Although the most advanced international seismic codes acknowledge this issue and require dampers to have higher safety margins against failure, they provide only simplified approaches to cope with the problem, often consisting of general demand amplification rules which are not tailored to the actual needs of different device typologies and which lead to reliability levels that are not explicitly declared. The research activity carried out in this thesis stems from the need to fill the gaps still present in the international regulatory framework, and to respond to the scarcity of specific probabilistic studies aimed at characterizing and understanding the probabilistic seismic response of such systems down to very low failure probabilities. In particular, as a first step towards this goal, the present work addresses the seismic risk of structures with fluid viscous dampers, a simple and widely used class of dissipation devices. A robust probabilistic framework is defined, combining an advanced probabilistic tool for solving reliability problems, Subset Simulation (with Markov chain Monte Carlo and Metropolis-like algorithms), with a stochastic ground motion model for statistical seismic hazard characterization. The seismic performance of the system is described by means of demand hazard curves, providing the mean annual frequency of exceeding any specified threshold demand value for all the relevant global and local Engineering Demand Parameters (EDPs). A wide range of performance levels is monitored, encompassing serviceability conditions and ultimate limit states, up to very rare performance demand levels (with mean annual frequency of exceedance around 10⁻⁶) at which the seismic reliability shall be checked in order to confer on the system an adequate margin of safety against seismic events rarer than the design one. Some original methodological contributions are obtained by an efficient combination of the common conditional probabilistic methods (i.e., multiple-stripe and cloud analysis) with a stochastic earthquake model, in which subset simulation is exploited to efficiently generate both the seismic hazard curve and the ground motion samples for structural analysis purposes. The accuracy of the proposed strategy is assessed by comparing the achieved seismic risk estimates with those provided by Subset Simulation, the latter being taken as the reference reliability method.
Furthermore, a reliability-based optimization method is proposed as a powerful tool for investigating the sensitivity of the seismic risk to variable model parameters; it proves particularly useful when a proper statistical characterization of the model parameters is not available. The proposed probabilistic framework is applied to a set of single-degree-of-freedom damped models for an extensive parametric analysis, and to a multi-story steel building with linear and nonlinear viscous dampers for deeper investigation. The influence of the dampers' nonlinearity level on the seismic risk of such systems is investigated. The variability of the viscous constitutive parameters due to the tolerance allowed in the devices' quality control and production tests is also accounted for, and the consequent effects on seismic performance are evaluated. The reliability of the simplified approaches proposed by the main international seismic codes for damper design is assessed, the main regulatory gaps are highlighted, and proposals for improvement are given. The results of this probabilistic investigation contribute to the development of more reliable design procedures for seismic passive protection strategies.
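Subset simulation, the reliability engine named above, estimates rare exceedance probabilities as a product of larger conditional probabilities, resampling within each intermediate level by MCMC. The sketch below uses a trivial stand-in demand function and a plain restricted random-walk Metropolis step; dimensions, thresholds and step sizes are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def demand(x):
    """Stand-in scalar demand; a real application would run one
    nonlinear time-history analysis per sample."""
    return np.abs(x).sum(axis=-1)

DIM, N, P0 = 10, 2000, 0.1   # input dim, samples per level, level prob.
TARGET = 16.0                # demand threshold of interest

x = rng.standard_normal((N, DIM))
d = demand(x)
p, levels = 1.0, 0

while True:
    thresh = np.quantile(d, 1.0 - P0)
    if thresh >= TARGET:                  # final level reached
        p *= float((d >= TARGET).mean())
        break
    p *= P0
    levels += 1
    seeds = x[d >= thresh]                # samples above the level
    samples, cur = seeds, seeds
    while len(samples) < N:               # restricted random-walk MH
        cand = cur + 0.7 * rng.standard_normal(cur.shape)
        log_ratio = 0.5 * (cur**2 - cand**2).sum(axis=1)
        ok = (np.log(rng.random(len(cur))) < log_ratio) \
             & (demand(cand) >= thresh)
        cur = np.where(ok[:, None], cand, cur)
        samples = np.vstack([samples, cur])
    x = samples[:N]
    d = demand(x)

print(f"P(demand > {TARGET}) ~ {p:.2e} after {levels + 1} levels")
```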
Style APA, Harvard, Vancouver, ISO itp.
34

Rangra, Subeer. "Performance shaping factor based human reliability assessment using valuation-based systems : application to railway operations". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2375/document.

Pełny tekst źródła
Streszczenie:
Humans remain one of the essential elements of modern transport operations. Human Reliability Analysis (HRA) methods provide a multidisciplinary approach to evaluating the interaction between humans and the system. This thesis proposes a new HRA methodology called PRELUDE (Performance shaping factor based human REliability assessment using vaLUation-baseD systems). Performance shaping factors are used to characterize a dangerous operational context. The framework of belief functions theory and valuation-based systems (VBS) uses mathematical rules to formalize the use of expert data and the construction of a human reliability model capable of representing all kinds of uncertainty, to predict the human error probability in a given context, and to provide formal feedback to reduce that probability. The second part of this work demonstrates the feasibility of PRELUDE with empirical data. A protocol for obtaining data from simulators and a method for transforming and analysing the data are presented, and an experimental simulator campaign is carried out to illustrate the proposal. PRELUDE is thus able to integrate data from different sources (empirical and expert) and of different types (objective and subjective). This thesis therefore addresses the problem of human error analysis, taking into account the evolution of the HRA field. It keeps the rail industry's usability in mind, providing results that can easily be integrated with traditional risk analyses. In an increasingly complex and demanding world, PRELUDE will provide rail operators and regulatory authorities with a method to ensure that human interaction-related risk is understood and managed appropriately in its context.
Humans are and remain one of the critical constituents of modern transport operations. Human Reliability Analysis (HRA) methods provide a multi-disciplinary approach, combining systems engineering and cognitive science methods, to evaluate the interaction between humans and the system. This thesis proposes a novel HRA methodology acronymed PRELUDE (Performance shaping factor based human REliability assessment using vaLUation-baseD systEms). Performance shaping factors (PSFs) are used to characterize a dangerous operational context. The proposed framework of Valuation-based Systems (VBS) and belief functions theory (BFT) uses mathematical rules to formalize the use of expert data and the construction of a human reliability model capable of representing all kinds of uncertainty. PRELUDE is able to predict the human error probability in a given context, and also to provide formal feedback to reduce the said probability. The second part of this work demonstrates the feasibility of PRELUDE with empirical data from simulators. A protocol to obtain data and a transformation and data analysis method are presented, and an experimental simulator campaign is carried out to illustrate the proposition. Thus, PRELUDE is able to integrate data from multiple sources (empirical and expert) and types (objective and subjective). This thesis hence addresses the problem of human error analysis, taking into account the evolution of the HRA domain over the years, by proposing a novel HRA methodology. It also keeps the rail industry's usability in mind, providing quantitative results which can easily be integrated with traditional risk analyses. In an increasingly complex and demanding world, PRELUDE will provide rail operators and regulatory authorities with a method to ensure human interaction-related risk is understood and managed appropriately in its context.
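Belief functions combine expert assessments by Dempster's rule: intersecting focal sets multiply their masses, and the mass assigned to the empty set (conflict) is renormalized away. The frame and the two mass functions below are invented illustrations, not PRELUDE's calibrated data.

```python
from itertools import product

# Frame of discernment for a human error likelihood class.
FRAME = frozenset({"low", "high"})
m1 = {frozenset({"low"}): 0.6, FRAME: 0.4}    # expert 1, illustrative
m2 = {frozenset({"high"}): 0.5, FRAME: 0.5}   # expert 2, illustrative

combined, conflict = {}, 0.0
for (a, pa), (b, pb) in product(m1.items(), m2.items()):
    inter = a & b
    if inter:
        combined[inter] = combined.get(inter, 0.0) + pa * pb
    else:
        conflict += pa * pb          # mass on the empty intersection

# Dempster normalization by the non-conflicting mass
combined = {s: v / (1.0 - conflict) for s, v in combined.items()}
for s, v in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(set(s), round(v, 3))
```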
Style APA, Harvard, Vancouver, ISO itp.
35

Hu, Huafen. "Risk-conscious design of off-grid solar energy houses". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31814.

Pełny tekst źródła
Streszczenie:
Thesis (Ph.D)--Architecture, Georgia Institute of Technology, 2010.
Committee Chair: Godfried Augenbroe; Committee Member: Ellis Johnson; Committee Member: Pieter De Wilde; Committee Member: Ruchi Choudhary; Committee Member: Russell Gentry. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Style APA, Harvard, Vancouver, ISO itp.
36

Wallnerström, Carl Johan. "On Incentives affecting Risk and Asset Management of Power Distribution". Doctoral thesis, KTH, Elektroteknisk teori och konstruktion, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37310.

Pełny tekst źródła
Streszczenie:
The introduction of performance-based tariff regulation, along with increased media and political pressure, has increased the need for well-performed risk and asset management in electric power distribution systems (DS), an infrastructure considered a natural monopoly. Compared to other technical systems, DS have special characteristics which are important to consider. The Swedish regulation of DS tariffs between 1996 and 2012 is described together with complementary laws, such as customer compensation for long outages. The regulator's role is to provide incentives for cost-efficient operation with acceptable reliability and reasonable tariff levels. Another difficult task for the regulator is to settle the complexity of the model, i.e. the balance between considering many details and retaining manageability. Two studies of the former regulatory model included in this thesis were part of the criticism that led to its fall. Furthermore, based on the results of a project included here, initiated by the regulator to review a model for judging effectible costs, the regulator changed some of its initial plans concerning the upcoming regulation. A classification of risk management into separate categories is proposed, partly based on a study investigating investment planning and risk management at a distribution system operator (DSO). A vulnerability analysis method using quantitative reliability analyses is introduced, aimed at indicating how available resources could be better utilized and at evaluating whether additional security should be deployed for certain forecasted events. To evaluate the method, an application study has been performed based on hourly weather measurements and detailed failure reports over eight years for two DS. Months, weekdays and hours have been compared, and the vulnerability to several weather phenomena has been evaluated. Of the weather phenomena studied, heavy snowfall and strong winds significantly affect reliability, while frost, rain and snow depth have little or no impact. The main conclusion is that there is a need to implement new, more advanced analysis methods. The thesis also provides a statistical validation method and introduces a new category of reliability indices, RT.
Electricity distribution is to be regarded as a natural monopoly and is very likely the most important infrastructure of modern society, and its importance is predicted to grow further as technology intended to reduce humanity's climate impact is implemented. In Sweden there are more than 150 network companies, of varying size and with quite different ownership structures. Trading in electricity was previously integrated into the network companies' operations, but was deregulated in 1996: the transmission infrastructure was separated from generation and trading. The introduction of quality regulation of network tariffs in the early 2000s, stricter laws on matters such as customer outage compensation, and political and media pressure have created incentives for cost efficiency with maintained quality of supply. An important aspect is that electricity distribution, compared with other infrastructures, has several special characteristics that must be considered; these are described in the first part of the thesis together with an introduction to risk and reliability theory as well as economic theory. Two studies that may have contributed to the fall of the previous regulation, and one study whose results changed the regulator's initial idea of a model for calculating effectible costs in the coming ex-ante regulation from 2012, are included.

The authority appointed by the state supervises that customers are offered network connection, that the service meets quality requirements, and that tariff levels are reasonable and non-discriminatory. Traditionally, network companies have more or less been allowed revenues corresponding to all their costs plus a reasonable profit, so-called cost-plus pricing. At the end of the 1990s, however, the responsible authority began working towards a revenue regulation that also considers cost efficiency and customer quality. In designing such a regulation, difficult trade-offs must be made. For example, the objective conditions of the network companies, such as terrain and customer base, should be taken into account, while the model should remain manageable and consistent. The authority considered no existing regulatory model suitable for adaptation to Swedish conditions, so a new model was developed: the Network Performance Assessment Model (Nätnyttomodellen, NNM). It was applied to the 2003 tariffs, and decisions requiring repayments to affected network customers were taken and subsequently appealed. A protracted legal process began, in which the model was heavily criticized by the industry on several points. Two studies included in this thesis supported critical arguments against the NNM. No decision had been reached in the first instance (the County Administrative Court) by 2008, when the parties reached an agreement covering the years 2003-2007. An EU directive requires Sweden to switch to ex-ante regulation, and instead of modifying the NNM and continuing to defend it legally, it was decided to develop an entirely new model. Simplified, the allowed revenue frame will be based on the network companies' capital costs and operating costs; in addition, the allowed revenue may be adjusted according to how efficiently and with what quality the companies have conducted their business.

A systematic description of a network company's current risk management and investment strategies for different voltage levels is provided, with the aim of supporting network companies in developing their risk management and of providing academic reference material based on industry experience. A classification of risk management into separate categories, a vulnerability analysis method, and a new category of reliability indices (RT) are proposed in the thesis, partly based on the performed study.

The overall idea of the vulnerability analysis is to identify and evaluate possible system states using quantitative reliability analyses. The goal is a tool for using available resources more efficiently, e.g. preventive maintenance and holiday planning, and for assessing whether preventive measures based on weather forecasts would be appropriate. RT is a flexible category of measures of the probability of customer outages lasting at least T hours, useful for example for analysing the impact of customer outage compensation laws, such as those introduced in Sweden and the UK during the 2000s. A statistical validation method for reliability indices has been developed to estimate the statistical uncertainty as a function of the amount of measurement data on which a reliability index value is based.

To evaluate the introduced vulnerability analysis method, a study was performed based on hourly weather data and detailed outage statistics covering eight years for two distribution networks in Sweden. Months, weekdays and hours have been compared, and the results can for example be used to allocate resources more efficiently over time. Vulnerability to different weather phenomena has been evaluated. Of the studied phenomena, only heavy snowfall and strong winds, especially in combination, significantly affect distribution system reliability. Other studies have also shown vulnerability to lightning, which was not included as a parameter in the study in this thesis. Temperature (e.g. frost), rain and snow depth thus have negligible impact. Correlation studies show, among other things, an almost linear relationship in Sweden between temperature and electricity consumption, indirectly indicating that consumption also has negligible impact on quality of supply. Finally, an analysis framework is proposed of which the introduced vulnerability analysis would form one part. The overall idea is presented mainly to inspire further work; it should be noted, however, that the introduced vulnerability analysis method is an independent and complete method regardless of whether the proposed ideas are implemented.
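The proposed RT index family described above can be illustrated with a few lines of arithmetic over outage records: count the outages whose duration reaches the threshold T and normalize by the observation period. The durations and thresholds below are invented for illustration, not data from the two studied networks.

```python
import numpy as np

# Outage durations (hours) over an observation window; invented data.
outage_hours = np.array([0.2, 0.5, 1.1, 2.5, 3.0, 4.8, 7.2, 13.0, 26.0])
years_observed = 8.0

def r_t(T):
    """R_T sketch: outages with duration >= T hours, per year."""
    return (outage_hours >= T).sum() / years_observed

for T in (1, 12, 24):   # thresholds like those in compensation laws
    print(f"R_{T} = {r_t(T):.2f} outages/year lasting >= {T} h")
```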
Style APA, Harvard, Vancouver, ISO itp.
37

Nourbakhsh, Ghavameddin. "Reliability analysis and economic equipment replacement appraisal for substation and sub-transmission systems with explicit inclusion of non-repairable failures". Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/40848/1/Ghavameddin_Nourbakhsh_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
Modern society has come to expect electrical energy on demand, while many facilities in power systems are aging beyond repair and maintenance. The risk of failure increases with aging equipment and can pose serious consequences for the continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time; on the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is complex because of the need to model the detailed configuration and operation of substation and sub-transmission equipment using network flow evaluation, and to consider multiple levels of component failures. In this thesis a new model associated with aging equipment is developed, combining the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and distribution network consumers over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain a level of system reliability that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
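The idea of combining random failures with aging failures can be sketched with a total hazard that adds a constant random-failure rate to a Weibull aging term; the conditional failure probability of a survivor then rises with age. All parameters below are invented for illustration, not the model calibrated in the thesis.

```python
import numpy as np

lam_random = 0.01        # random failures per year, hypothetical
beta, eta = 4.0, 60.0    # Weibull shape/scale for aging, years

def survival(t):
    """R(t) = exp(-(random cumulative hazard + aging cumulative
    hazard)); a deliberately simplified combination."""
    return np.exp(-(lam_random * t + (t / eta) ** beta))

for age in (20, 40, 55):
    # probability of failure in the next 5 years, given survival to age
    p5 = 1.0 - survival(age + 5) / survival(age)
    print(f"age {age:2d} y: P(failure within 5 y) = {p5:.3f}")
```

The rising conditional probability is what makes early replacement of non-repairable aging components economically interesting in a cost-worth comparison.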
Style APA, Harvard, Vancouver, ISO itp.
38

Huang, Min-Feng. "Resilience in chronic disease : the relationships among risk factors, protective factors, adaptive outcomes, and the level of resilience in adults with diabetes". Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30313/1/Min-Feng_Huang_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
Background: Innumerable diabetes studies have investigated associations between risk factors, protective factors, and health outcomes; however, these individual predictors are part of a complex network of interacting forces. Moreover, there is little awareness of resilience or its importance in chronic disease in adulthood, especially diabetes. Thus, this is the first study to: (1) extensively investigate the relationships among a host of predictors and multiple adaptive outcomes; and (2) conceptualise a resilience model among people with diabetes. Methods: This cross-sectional study was divided into two research studies. Study One translated two diabetes-specific instruments (Problem Areas In Diabetes, PAID; Diabetes Coping Measure, DCM) into Chinese and examined their psychometric properties, for use in Study Two, in a convenience sample of 205 outpatients with type 2 diabetes. In Study Two, an integrated theoretical model was developed and evaluated using the structural equation modelling (SEM) technique. A self-administered questionnaire was completed by 345 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Results: Confirmatory factor analyses confirmed a one-factor structure of the PAID-C, similar to the original version of the PAID. Strong content validity of the PAID-C was demonstrated. The PAID-C was associated with HbA1c and diabetes self-care behaviours, confirming satisfactory criterion validity. There was a moderate relationship between the PAID-C and the Perceived Stress Scale, supporting satisfactory convergent validity. The PAID-C also demonstrated satisfactory stability and high internal consistency. A four-factor structure and strong content validity of the DCM-C were confirmed. Criterion validity was demonstrated by significant associations of the DCM-C with HbA1c and diabetes self-care behaviours. There was a statistical correlation between the DCM-C and the Revised Ways of Coping Checklist, suggesting satisfactory convergent validity. Test-retest reliability demonstrated satisfactory stability of the DCM-C, and the total scale showed adequate internal consistency. Age, duration of diabetes, diabetes symptoms, diabetes distress, physical activity, coping strategies, and social support were the factors most consistently associated with adaptive outcomes in adults with diabetes. Resilience was positively associated with coping strategies, social support, health-related quality of life, and diabetes self-care behaviours. Results of the structural equation modelling revealed that protective factors had a significant direct effect on adaptive outcomes; however, the construct of risk factors was not significantly related to adaptive outcomes. Moreover, resilience moderated the relationships between protective factors and adaptive outcomes, but there were no interaction effects of risk factors and resilience on adaptive outcomes. Conclusion: This study contributes to an understanding of how risk factors and protective factors work together to influence adaptive outcomes in blood sugar control, health-related quality of life, and diabetes self-care behaviours. Additionally, resilience is a positive personality characteristic and may be importantly involved in the adjustment process among people living with type 2 diabetes.
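A moderation effect of the kind reported above is, at its simplest, an interaction term in a regression: the effect of the protective factor on the outcome changes with the level of resilience. The sketch below generates toy data with such an interaction and recovers the coefficients by least squares; the variable names and effect sizes are invented, not the study's SEM estimates.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 345
protect = rng.normal(size=n)     # protective-factor score
resil = rng.normal(size=n)       # resilience score
# Outcome built with a moderation effect (coefficients invented).
outcome = (0.5 * protect + 0.2 * resil
           + 0.3 * protect * resil + rng.normal(0, 1, n))

# Moderated regression: outcome ~ protect + resil + protect:resil
X = np.column_stack([np.ones(n), protect, resil, protect * resil])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, protect, resilience, interaction:",
      np.round(beta, 2))
```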
Style APA, Harvard, Vancouver, ISO itp.
39

Huang, Min-Feng. "Resilience in chronic disease : the relationships among risk factors, protective factors, adaptive outcomes, and the level of resilience in adults with diabetes". Queensland University of Technology, 2009. http://eprints.qut.edu.au/30313/.

Pełny tekst źródła
APA, Harvard, Vancouver, ISO, and other styles
40

King, Caleb B. "Bridging the Gap: Selected Problems in Model Specification, Estimation, and Optimal Design from Reliability and Lifetime Data Analysis". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73165.

Full text of the source
Abstract:
Understanding the lifetime behavior of their products is crucial to the success of any company in the manufacturing and engineering industries, and statistical methods for lifetime data are a key component in achieving this understanding. Sometimes a statistical procedure must be updated to be adequate for modeling specific data, as discussed in Chapter 2. However, there are cases in which the methods used in industrial standards are themselves inadequate; this is distressing, as more appropriate statistical methods are available but remain unused. The research in Chapter 4 deals with such a situation, while the research in Chapter 3 combines both scenarios and shows how statisticians and engineers from industry can join together to yield beautiful results. After introducing basic concepts and notation in Chapter 1, Chapter 2 focuses on lifetime prediction for a product consisting of multiple components. During the production period, some components may be upgraded or replaced, resulting in a new "generation" of component; incorporating this information into a competing risks model can greatly improve the accuracy of lifetime prediction. A generalized competing risks model is proposed and simulation is used to assess its performance. In Chapter 3, optimal and compromise test plans are proposed for constant-amplitude fatigue testing. These test plans are based on a nonlinear physical model from the fatigue literature that better captures the nonlinear behavior of fatigue life and accounts for effects from the testing environment. Sensitivity to the design parameters and modeling assumptions is investigated, and suggestions for planning strategies are proposed. Chapter 4 considers the analysis of accelerated destructive degradation test (ADDT) data for the purpose of estimating a thermal index. The current industry standards use a two-step procedure involving least squares regression in each step, whereas the methodology preferred in the statistical literature is the maximum likelihood procedure. A comparison of the procedures is performed using two published datasets as motivating examples, and the maximum likelihood procedure is presented as the more viable alternative due to its ability to quantify uncertainty and its modeling flexibility.
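As a small illustration of the maximum likelihood approach favored above, the sketch below fits a Weibull lifetime model (an assumed distribution, with simulated data rather than the dissertation's) and shows how the likelihood yields uncertainty quantification that a two-step least squares procedure does not provide directly.

```python
# Minimal ML sketch for lifetime data under an assumed Weibull model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_shape, true_scale = 2.0, 1000.0   # hypothetical failure-time parameters
lifetimes = true_scale * rng.weibull(true_shape, size=200)

# MLE of the two-parameter Weibull (location fixed at zero).
shape_hat, _, scale_hat = stats.weibull_min.fit(lifetimes, floc=0)
print(f"shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.1f}")

# The likelihood framework supports direct uncertainty statements, e.g. a
# parametric-bootstrap interval for the B10 life (time by which 10% fail).
b10 = scale_hat * (-np.log(0.9)) ** (1 / shape_hat)
boot = []
for _ in range(500):
    resample = stats.weibull_min.rvs(shape_hat, scale=scale_hat,
                                     size=lifetimes.size, random_state=rng)
    c, _, s = stats.weibull_min.fit(resample, floc=0)
    boot.append(s * (-np.log(0.9)) ** (1 / c))
print("B10 estimate:", round(b10, 1),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(1))
```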
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
41

Murad, Carlos Alberto. "Desenvolvimento de novos produtos considerando aspectos de confiabilidade, risco e ferramentas de qualidade". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3151/tde-29082011-111759/.

Full text of the source
Abstract:
The intense competition in the global market, along with constant changes in customer demands, has forced companies to rethink some of their business processes, not only to survive but also to stay competitive. The product development process is one of the key business processes for any company that wants to remain competitive and global in this scenario, and the lack of a good development process is without doubt a major disadvantage. A good development process alone, however, does not guarantee a competitive advantage: the resulting products must also be reliable in the field, and for that it is vital to develop products with quality, through the constant and disciplined use of quality tools. To be competitive, a product needs to be designed with a minimum of time, resources, and cost while meeting market needs. Several methodologies have been developed that approach product development with manufacturing, assembly, quality, reliability, and the product life cycle in mind, thereby avoiding late product changes. Many academic and industrial studies have been proposed in this area, and each company has to find and adapt the model that best fits its technical and cultural needs. This research presents a methodology for improving product quality during the early, conceptual phase of development, when the best systems and/or components are chosen to form a new final product.
APA, Harvard, Vancouver, ISO, and other styles
42

Maher, Patrick S. "Identifying and enabling core management competencies and compliance factors in high reliability organisations : a study in organisational risk management psychology and training: A small n modified grounded theory qualitative analysis". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2004. https://ro.ecu.edu.au/theses/819.

Full text of the source
Abstract:
High reliability entities governed by statutory regulations are required to comply with safety guidelines and specifications. When fatalities or serious injuries occur in otherwise preventable accidents, these entities are routinely exonerated from any responsibility by claiming to have 'systemic management problems', and their managing coalitions have been able to hide behind the 'corporate veil'. This thesis maintains that the core managerial competencies needed to prevent preventable accidents can be acquired through training, particularly if their mastery is mandated by a strong regulatory and compliance regime. The cases chosen for analysis, a small n study of Commission of Inquiry and Coronial reports, revealed ten core managerial and organisational competencies, together with compliance, as issues of concern. Other than 'acts of God', most accidents resulting in fatalities and serious injury occur in organisations where prior knowledge of a potential accident existed but was not utilised. Most accidents in high reliability organisations might have been prevented if the cascade of events leading to them could have been interrupted. The competencies revealed by the research as necessary to intervene in the unfolding of preventable accidents are generally not taught in orthodox management studies programs in higher education institutions. Yet when these competencies are inadequate they not only result in accidents but also cause orthodox management problems such as production delays and losses, costly litigation, increasing indemnity insurance, and erosion of an organisation's credibility in the marketplace.
APA, Harvard, Vancouver, ISO, and other styles
43

Luo, Yan. "Radical Architecture, Collective Mindfulness, and Information Technology: A Dialectical Analysis of Risk Control in Complex Socio-Technical Systems". online version, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=case1228450166.

Full text of the source
Abstract:
Thesis (Ph. D.)--Case Western Reserve University, 2009.
Department of Information Systems, Weatherhead School of Management. Includes bibliographical references. Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
44

Henneaux, Pierre. "A two-level Probabilistic Risk Assessment of cascading failures leading to blackout in transmission power systems". Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209433.

Full text of the source
Abstract:
In our society, private and industrial activities increasingly rest on the implicit assumption that electricity is available at any time and at an affordable price. Even though operational data and feedback from the electrical sector are very positive, a residual risk of blackout or undesired load shedding in critical zones remains. The occurrence of such a situation is likely to entail major direct and indirect economic consequences, as observed in recent blackouts. Assessing this residual risk and identifying the scenarios likely to lead to these feared situations is crucial to control and optimally reduce the risk of blackout or major system disturbance. The objective of this PhD thesis is to develop a methodology able to reveal scenarios leading to a blackout or a major system disturbance and to estimate their frequencies and consequences with satisfactory accuracy.

A blackout is a collapse of the electrical grid over a large area, leading to a power cutoff, and is due to a cascading failure. Such a cascade is composed of two phases: a slow cascade, starting with the occurrence of an initiating event and displaying characteristic times between successive events from minutes to hours, and a fast cascade, displaying characteristic times between successive events from milliseconds to tens of seconds. In cascading failures there is a strong coupling between events: the loss of an element increases the stress on other elements and, hence, the probability of another failure. Probabilistic methods proposed previously do not correctly account for these dependencies between failures, mainly because the two very different phases are analyzed with the same model. There is thus a need for a conceptually satisfying probabilistic approach, able to take into account all kinds of dependencies by using different models for the slow and the fast cascades. This is the aim of this PhD thesis.

This work first focuses on level-I, the analysis of the slow cascade progression up to the transition to the fast cascade. We propose to adapt dynamic reliability, an integrated approach of Probabilistic Risk Analysis (PRA) initially developed for the nuclear sector, to the case of transmission power systems. This methodology accounts for the double interaction between power system dynamics and state transitions of the grid elements. This PhD thesis also introduces the development of level-II to analyze the fast cascade, up to the transition towards an operational state with load shedding or a blackout. The proposed method is applied to two test systems. Results show that thermal effects can play an important role during the first phase of cascading failures. They also show that the level-II analysis after level-I is necessary to estimate the loss of supplied power that a scenario can lead to: two types of level-I scenarios with a similar frequency can induce very different risks (in terms of loss of supplied power) and blackout frequencies. Level-III, i.e. the analysis of the restoration process, is however needed to estimate the risk in terms of loss of supplied energy. This PhD thesis also presents several perspectives for improving the approach in order to scale up applications to real grids.
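The coupling described above, where each failure raises the stress on the survivors, can be illustrated with a minimal Monte Carlo sketch. The line counts, loads, and trip probabilities below are hypothetical, and this toy model is not the thesis's two-level dynamic-reliability method.

```python
# Toy cascading-failure simulation with dependent trips via load redistribution.
import numpy as np

rng = np.random.default_rng(2)
N_LINES, TOTAL_LOAD, CAPACITY = 10, 9.5, 1.0   # 10 identical lines, per-line capacity

def initiating_event_collapses() -> bool:
    up = np.ones(N_LINES, dtype=bool)
    up[rng.integers(N_LINES)] = False          # initiating event: one line trips
    while True:
        survivors = up.sum()
        if survivors == 0:
            return True                        # total collapse (blackout)
        flow = TOTAL_LOAD / survivors          # load shared by surviving lines
        if flow <= CAPACITY:
            return False                       # cascade arrested
        # Overload raises each surviving line's trip probability: this is the
        # dependency between failures that independent-failure models miss.
        p_trip = min(1.0, 0.05 + 0.1 * (flow / CAPACITY - 1.0))
        trips = up & (rng.random(N_LINES) < p_trip)
        if not trips.any():
            return False                       # assume operators relieve the overload
        up &= ~trips

runs = 20_000
rate = sum(initiating_event_collapses() for _ in range(runs)) / runs
print(f"estimated P(total collapse | initiating event) = {rate:.4f}")
```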


Doctorate in Engineering Sciences

APA, Harvard, Vancouver, ISO, and other styles
45

Bhandaram, Abhinav. "Detecting Component Failures and Critical Components in Safety Critical Embedded Systems using Fault Tree Analysis". Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157555/.

Full text of the source
Abstract:
Component failures can result in catastrophic behaviors in safety-critical embedded systems, sometimes resulting in loss of life. Component failures can be treated as off-nominal behaviors (ONBs) with respect to the components and subsystems involved in an embedded system. Much research is being carried out to tackle the problem of ONBs, mainly focused on system states (i.e., comparing the desired and undesired states of a system at a given point in time to detect ONBs). In this thesis, an approach is discussed to detect component failures and critical components of an embedded system. The approach is based on fault tree analysis (FTA), applied to the requirements specification of embedded systems at design time to find the relationship between individual component failures and overall system failure. FTA helps in determining both the qualitative and quantitative relationships between component failures and system failure. Analyzing the system at design time helps in detecting component failures and critical components, and in devising strategies to mitigate component failures at design time, improving the overall safety and reliability of a system.
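To make the quantitative side of FTA concrete, here is a minimal sketch of how a top-event probability follows from independent component failures. The gate structure and basic-event probabilities are a hypothetical toy system, not the thesis's case study.

```python
# Toy quantitative FTA: the top event (system failure) occurs if both
# redundant controllers A and B fail, or the power supply C fails.
from itertools import product

p = {"A": 0.01, "B": 0.02, "C": 0.001}   # independent basic-event probabilities

def top_event(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c                 # gate structure: (A AND B) OR C

# Exact top-event probability by enumerating all basic-event states.
p_top = 0.0
for a, b, c in product([True, False], repeat=3):
    prob = ((p["A"] if a else 1 - p["A"]) *
            (p["B"] if b else 1 - p["B"]) *
            (p["C"] if c else 1 - p["C"]))
    if top_event(a, b, c):
        p_top += prob
print(f"P(system failure) = {p_top:.6f}")   # ~ P(A)P(B) + P(C) for rare events

# Qualitative side: minimal cut sets are {A, B} and {C}; the single-event cut
# set {C} marks C as the critical component despite its low failure probability.
```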
APA, Harvard, Vancouver, ISO, and other styles
46

Hofer, Lorenzo. "Loss assessment models for seismic risk mitigation in structures". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424961.

Full text of the source
Abstract:
Seismic risk is an inclusive term encompassing the probabilities of different ground motions and the related consequences, which depend on structural vulnerability. Seismic risk analysis is a general procedure that can consider different indicators, both for a specific structure and at the territorial level; for civil structures, risk is often expressed in terms of monetary losses, i.e. the costs to be sustained for repairing seismic damage, or loss of revenue. This work contributes to current seismic risk assessment approaches with original analyses of both point-like and territorial assets, focusing on aspects that are still absent from or poorly treated in the literature. Regarding seismic risk analysis for a single specific structure, this work focuses on industrial productive processes, with particular reference to business interruption losses. Recent seismic events, such as the Emilia-Romagna earthquake in 2012, showed that this type of indirect loss can be very significant, and a model is therefore proposed for assessing indirect losses due to business interruption. Furthermore, a financial framework is set up to assess the optimal seismic retrofit strategy for productive processes. Regarding seismic risk analysis at the territorial level, a seismic risk map of Italy is developed, and considerations on historical losses and on the implementation of specific earthquake catastrophe funds are discussed. Finally, Catastrophe bonds (CAT bonds) are examined in depth as a financial tool for transferring potential losses arising from natural hazards. In particular, a novel reliability-based CAT bond pricing framework is developed and applied to a case study represented by the Italian residential building portfolio.
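As a rough illustration of CAT bond pricing by simulation, the sketch below values a unit face value under a hypothetical annual loss model and a binary indemnity trigger. All figures (loss distribution, trigger, coupon, discount rate) are invented stand-ins, not the thesis's calibrated reliability-based framework.

```python
# Toy Monte Carlo pricing of a CAT bond with a binary loss trigger.
import numpy as np

rng = np.random.default_rng(3)
T, COUPON, R = 3, 0.06, 0.02        # maturity (years), coupon rate, discount rate
TRIGGER = 5e9                       # annual-loss trigger (hypothetical, EUR)

def expected_discounted_payoff(n_paths: int = 50_000) -> float:
    total = 0.0
    for _ in range(n_paths):
        principal, pv = 1.0, 0.0    # unit face value
        for t in range(1, T + 1):
            annual_loss = rng.lognormal(mean=20.5, sigma=1.2)  # hypothetical losses
            if annual_loss > TRIGGER:
                principal = 0.0     # binary trigger: principal is wiped out
            pv += COUPON * principal / (1 + R) ** t
        pv += principal / (1 + R) ** T   # redemption of surviving principal
        total += pv
    return total / n_paths

# The gap between this value and a default-free bond's price is the premium
# investors demand for bearing the catastrophe risk.
print(f"price per unit face value: {expected_discounted_payoff():.4f}")
```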
APA, Harvard, Vancouver, ISO, and other styles
47

Pereira, José Cristiano. "Modelo causal para análise probabilística de risco de falhas de motores a jato em situação operacional de fabricação". Niterói, 2017. https://app.uff.br/riuff/handle/1/4078.

Full text of the source
Abstract:
The jet engine manufacturing process is complex: hazards, risks, and many critical elements are present in the thousands of activities required to manufacture an engine. The investigation conducted here reveals the lack of a specific model for quantitatively estimating the probability of operational failure of a jet engine. The goal of this thesis is to develop a causal model for probabilistic risk analysis of jet engine failures in manufacturing situational operation. The model applies a Bayesian network associated with fault trees and event trees, with probabilities elicited from experts, to quantify the probability of failure. To establish the state of the art and to support the conception and construction of the model, a bibliometric survey was first conducted in the main national and international search engines, scientific and technical journals, dissertation/thesis databases, and technical events related to the topic. To estimate the probabilities associated with the proposed fault scenarios, a probability elicitation process drawing on technicians and experts was developed. Three areas of influence on system reliability were considered in the design of the model: human, software, and calibration. As a result, the CAPEMO model was developed, supported by a software application that uses probability theory (Bayes' rule) to model uncertainty. The probability of engine failure estimated at the end of the manufacturing process, before the engine is put into operation, supports resource allocation in the decision-making process and improves system safety, reducing the risk of engine failure in operation.
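To illustrate the kind of calculation a Bayesian network over expert-elicited probabilities supports, here is a minimal sketch with three root causes matching the three areas of influence named above (human, software, calibration), combined through a noisy-OR gate. The numbers are invented stand-ins for elicited values, not CAPEMO's.

```python
# Toy Bayesian-network calculation: marginal failure probability and a
# Bayes'-rule diagnosis, with hypothetical probabilities.
from itertools import product

P_ROOT = {"human": 0.02, "software": 0.005, "calibration": 0.01}  # cause priors
LINK = {"human": 0.4, "software": 0.6, "calibration": 0.3}        # noisy-OR strengths

def p_failure_given(h: int, s: int, c: int) -> float:
    # Each active cause independently provokes failure (noisy-OR gate).
    p_ok = ((1 - LINK["human"] * h) * (1 - LINK["software"] * s)
            * (1 - LINK["calibration"] * c))
    return 1 - p_ok

def weight(h: int, s: int, c: int) -> float:
    # Joint probability of a root-cause configuration (independent causes).
    return ((P_ROOT["human"] if h else 1 - P_ROOT["human"]) *
            (P_ROOT["software"] if s else 1 - P_ROOT["software"]) *
            (P_ROOT["calibration"] if c else 1 - P_ROOT["calibration"]))

# Marginal failure probability: sum over all root-cause configurations.
p_fail = sum(weight(h, s, c) * p_failure_given(h, s, c)
             for h, s, c in product([1, 0], repeat=3))
print(f"P(engine failure) = {p_fail:.5f}")

# Bayes' rule: probability that human error is present given a failure.
p_human_and_fail = sum(weight(1, s, c) * p_failure_given(1, s, c)
                       for s, c in product([1, 0], repeat=2))
print(f"P(human error | failure) = {p_human_and_fail / p_fail:.3f}")
```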
APA, Harvard, Vancouver, ISO, and other styles
48

Attasek, Ondřej. "Analýza rizik obsluhy jeřábu". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-377647.

Full text of the source
Abstract:
This master's thesis is focused on safety in lifting technology, specifically on preventing mistakes during bridge crane operation and increasing the reliability of the human factor. The thesis summarizes the most important legislative regulations for crane operators, including the requirements for operation. It then analyses bridge crane accidents, including data on the number of occupational accidents. A further part of the thesis applies the HTA, human HAZOP, BOMECH, and FMEA analyses. Conclusions are drawn from these analyses and preventive measures are suggested.
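As a small illustration of the FMEA step mentioned above, the sketch below ranks failure modes by Risk Priority Number (RPN = severity x occurrence x detection). The crane-operation failure modes and ratings are hypothetical illustrations, not the thesis's data.

```python
# Toy FMEA ranking by RPN; highest-RPN modes are addressed first.
failure_modes = [
    # (failure mode, severity 1-10, occurrence 1-10, detection 1-10)
    ("load slips from sling",         9, 3, 4),
    ("operator misjudges load path",  7, 5, 6),
    ("limit switch not tested",       6, 4, 7),
]

for name, s, o, d in sorted(failure_modes,
                            key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {s * o * d:4d}  {name}")
# The highest-RPN modes are the first candidates for preventive measures.
```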
APA, Harvard, Vancouver, ISO, and other styles
49

Stéphan, Maïté. "Fiabilité du temps de transport : Mesures, valorisation monétaire et intégration dans le calcul économique public". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTD072/document.

Full text of the source
Abstract:
This thesis deals with travel time reliability. The study of travel time reliability stems from the fact that, in many situations, travel time is not certain but random: many events can change the travel time forecast by operators or expected by users. Moreover, a trade-off may exist between time benefits and reliability benefits in the socio-economic appraisal of transport infrastructure projects. However, since reliability is still difficult to integrate into this type of evaluation, the collective profitability of such investment projects is underestimated, leading to their postponement. Three main issues thus emerge in the analysis of travel time reliability: its measurement, its monetary valuation (i.e. individuals' willingness to pay to improve travel time reliability), and its integration into cost-benefit analysis. The thesis is organised in three chapters. The first chapter adapts the travel time reliability measures typically used for road transport to collective modes (rail and air in particular). We also develop a new reliability measure, the Delay-at-Risk (DaR), inspired by the financial literature: the DaR transposes the Value-at-Risk (VaR) measure to transport economics. From the users' point of view, the DaR is more useful than other measures for planning trips with connections. The main objective of the second chapter is to determine individuals' willingness to pay to improve travel time reliability. We propose a theoretical framework based on decision theory under risk, from which we define individuals' preference towards reliability (reliability-proneness) as well as prudence. We develop new measures of travel time reliability expressed as risk premiums: the reliability-premium and the VOR. The reliability-premium is the maximum additional travel time an individual is willing to accept to eliminate all risk on travel time, while the VOR is the maximum monetary amount an individual is willing to pay to eliminate all risk on travel time. We also establish the consequences of users' attitudes towards travel time risk (aversion and prudence) for the value of travel time savings (VTTS) and the value of reliability (VOR). The final chapter integrates reliability into the socio-economic appraisal of investment projects, and more particularly into the valuation of users' surplus. We highlight a diffusion effect of reliability benefits relative to travel time benefits, and propose recommendations on the trade-off between projects generating time benefits and those generating reliability benefits, according to the monetary values of travel time (VTTS) and reliability (VOR).
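The DaR can be illustrated directly: like the financial VaR, DaR(alpha) is the alpha-quantile of the delay distribution. A minimal sketch on simulated delays follows; the delay model is hypothetical, not the thesis's rail or air data.

```python
# Toy Delay-at-Risk (DaR): an empirical quantile of the delay distribution.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical delays in minutes: most trips on time, a heavy right tail.
delays = np.maximum(0.0, rng.gamma(shape=0.8, scale=6.0, size=10_000) - 2.0)

for alpha in (0.90, 0.95, 0.99):
    dar = np.quantile(delays, alpha)
    print(f"DaR({alpha:.0%}) = {dar:5.1f} min")
# Reading: with probability alpha the delay does not exceed DaR(alpha), so a
# traveller can size a connection buffer at DaR(0.95) rather than at the mean.
```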
APA, Harvard, Vancouver, ISO, and other styles
50

Brini, Manel. "Safety-Bag pour les systèmes complexes". Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2444/document.

Full text of the source
Abstract:
Autonomous automotive vehicles are critical systems: their failures can cause catastrophic damage to people and to the environment in which they operate. The control of autonomous vehicles is a complex function with many potential failure modes. For experimental platforms that have followed neither the development methods nor the certification cycle required for industrial systems, the probabilities of failure are much greater. These experimental vehicles face two problems that impede their dependability, i.e. the justified confidence that can be placed in their correct behaviour. First, they are used in open environments with a very wide execution context. This makes their validation very complex, since many hours of testing would be necessary, with no guarantee that all faults in the system are detected and corrected. In addition, their behaviour is often very difficult to predict or model. This may be due to the use of artificial-intelligence software to solve complex problems such as navigation or perception, but also to the multiplicity of interacting systems or components that complicate the behaviour of the final system, for example by generating emergent behaviours. One technique for increasing the safety of these autonomous systems is an independent safety component called a "Safety-Bag". This component is integrated between the control application and the vehicle's actuators, which allows it to check online a set of safety necessities, i.e. properties required to ensure the safety of the system. Each safety necessity consists of a trigger condition and a safety intervention applied when the trigger condition is violated. The intervention is either a safety inhibition, which prevents the system from moving to a risky state, or a safety action, which returns the autonomous vehicle to a safe state. The definition of safety necessities must follow a rigorous method to be systematic. To this end, we carried out a dependability study based on two fault-forecasting methods, FMEA and HazOp-UML, which focus respectively on the internal hardware and software components of the system and on the road environment and driving process. The result of these risk analyses is a set of safety requirements. Some of these can be translated into safety necessities implementable and verifiable by the Safety-Bag; others cannot, so that the Safety-Bag remains a relatively simple component that can be validated. We then carried out fault-injection experiments to validate certain safety necessities and to evaluate the Safety-Bag's behaviour. These experiments were performed on our robotised Fluence vehicle in two different settings: first on the real SEVILLE track, and then on a virtual track simulated with the Scanner Studio software on the VILAD test bench. The Safety-Bag remains a promising but partial solution for industrial autonomous vehicles; it does, however, meet the essential needs for ensuring the safety of experimental autonomous vehicles.
APA, Harvard, Vancouver, ISO, and other styles