
Theses / dissertations on the topic "Reliability (Engineering)"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Browse the 50 best theses / dissertations for your research on the topic "Reliability (Engineering)".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in .pdf format and read its abstract online whenever it is available in the metadata.

Browse theses / dissertations from a wide range of academic disciplines and compile a correct bibliography.

1

Sasse, Guido Theodor. "Reliability engineering in RF CMOS". Enschede : University of Twente [Host], 2008. http://doc.utwente.nl/59032.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
2

Heineman, Judie A. "A software reliability engineering case study". Thesis, Monterey, California. Naval Postgraduate School, 1996. http://hdl.handle.net/10945/8975.

Full text of the source
Abstract:
Approved for public release; distribution is unlimited
Handling, identifying, and correcting faults are significant concerns for the software manager because (1) the presence of faults in the operational software can put human life and mission success at risk in a safety critical application and (2) the entire software reliability process is expensive. Designing an effective Software Reliability Engineering (SRE) process is one method to increase reliability and reduce costs. This thesis describes a process that is being implemented at Marine Corps Tactical System Support Activity (MCTSSA), using the Schneidewind Reliability Model and the SRE process described in the American Institute of Aeronautics and Astronautics Recommended Practice in Software Reliability. In addition to applying the SRE process to single node systems, its applicability to multi-node LAN-based distributed systems is explored. Each of the SRE steps is discussed, with practical examples provided, as they would apply to a testing facility. Special attention is directed to data collection methodologies and the application of model results. In addition, a handbook and training plan are provided for use by MCTSSA during the transition to the SRE process.
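The abstract names the Schneidewind Reliability Model; as a purely illustrative companion, the sketch below fits a generic exponentially decaying failure intensity (a common NHPP form in software reliability growth, not necessarily the exact Schneidewind formulation used at MCTSSA) to invented cumulative failure counts and projects the failures still expected.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative failure counts per test interval (invented data).
intervals = np.arange(1, 11, dtype=float)
cum_failures = np.array([12, 21, 28, 33, 37, 40, 42, 44, 45, 46], dtype=float)

def mu(t, alpha, beta):
    # Expected cumulative failures for an exponentially decaying intensity NHPP.
    return (alpha / beta) * (1.0 - np.exp(-beta * t))

(alpha, beta), _ = curve_fit(mu, intervals, cum_failures, p0=(15.0, 0.3))
total_expected = alpha / beta                      # asymptotic total failures
remaining = total_expected - cum_failures[-1]      # failures still expected
print(f"alpha={alpha:.2f}, beta={beta:.2f}, remaining failures={remaining:.1f}")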
ABNT, Harvard, Vancouver, APA styles, etc.
3

Bolgren, Daniel (Daniel Reade). "High reliability performance in Amgen Engineering". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73439.

Full text of the source
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 90).
Amgen is in the midst of a transformative initiative to become operationally more efficient. For Amgen Engineering, this initiative has prompted a reevaluation of the entire organization and brought to light the need to standardize, define processes, and promote a culture wherein reliable outcomes are both possible and expected. One way to accomplish this is by evaluating and then implementing the concepts of High Reliability Organization (HRO). This thesis focuses on using concepts such as HRO to evaluate the Engineering organization at Amgen and then provide tools, frameworks, and recommendations for driving increased reliability and greater process maturity across Amgen's entire asset lifecycle (Plan, Build/Lease, Operate/Maintain, Reinvest/Dispose). Three main deliverables resulted from this project's reliability efforts. The first deliverable is a set of recommendations and strategies to help the Engineering organization operate as an HRO. The second deliverable is an enhanced process maturity model that implements reliability concepts to drive the maturity of Engineering's business processes. The model better defines criteria for each level of maturity and will be used as a guidance tool for organizational advancement in the coming years. The last deliverable focuses on the maintain portion of the asset lifecycle, and is a Maintenance Excellence Roadmap that defines what maintenance excellence looks like and provides a strategy to best utilize the systems and tools that Amgen has in place, and will need in the future, to get there.
by Daniel Bolgren.
S.M.
M.B.A.
ABNT, Harvard, Vancouver, APA styles, etc.
4

Lanning, David Bruce. "Fatigue reliability of cracked engineering structures /". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu148794501561685.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
5

Saini, Gagandeep Singh. "Reliability-based design with system reliability and design improvement". Diss., Rolla, Mo. : Missouri University of Science and Technology, 2009. http://scholarsmine.mst.edu/thesis/pdf/Saini_09007dcc8070d586.pdf.

Full text of the source
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2009.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed November 23, 2009). Includes bibliographical references (p. 66-68).
ABNT, Harvard, Vancouver, APA styles, etc.
6

ROBINSON, DAVID GERALD. "MODELING RELIABILITY IMPROVEMENT DURING DESIGN (RELIABILITY GROWTH, BAYES, NON PARAMETRIC)". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183971.

Full text of the source
Abstract:
Past research into the phenomenon of reliability growth has emphasised modeling a major reliability characteristic in terms of a specific parametric function. In addition, the time-to-failure distribution of the system was generally assumed to be exponential. The result was that in most cases the improvement was modeled as a nonhomogeneous Poisson process with intensity λ(t). Major differences among models centered on the particular functional form of the intensity function. The popular Duane model, for example, assumes that λ(t) = β(1 – α)t⁻ᵅ. The inability of any one family of distributions or parametric form to describe the growth process resulted in a multitude of models, each directed toward answering problems encountered with a particular test situation. This thesis proposes two new growth models, neither requiring the assumption of a specific function to describe the intensity λ(t). Further, the first of the models only requires that the time-to-failure distribution be unimodal and that the reliability become no worse as development progresses. The second model, while requiring the assumption of an exponential failure distribution, remains significantly more flexible than past models. Major points of this Bayesian model include: (1) the ability to incorporate data from a number of test sources (e.g. engineering judgement, CERT testing, etc.), (2) the assumption that the failure intensity is stochastically decreasing, and (3) accountability of changes that are incorporated into the design after testing is completed. These models were compared to a number of existing growth models and found to be consistently superior in terms of relative error and mean-square error. An extension to the second model is also proposed that allows system level growth analysis to be accomplished based on subsystem development data. This is particularly significant, in that, as systems become larger and more complex, development efforts concentrate on subsystem levels of design. No analysis technique currently exists that has this capability. The methodology is applied to data sets from two actual test situations.
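Taking the Duane intensity quoted in the abstract at face value, the expected cumulative number of failures and the instantaneous MTBF follow directly:

\lambda(t) = \beta\,(1-\alpha)\,t^{-\alpha}
\quad\Longrightarrow\quad
\mathrm{E}[N(t)] = \int_0^t \lambda(s)\,\mathrm{d}s = \beta\,t^{\,1-\alpha},
\qquad
\mathrm{MTBF}_{\mathrm{inst}}(t) = \frac{1}{\lambda(t)} = \frac{t^{\alpha}}{\beta\,(1-\alpha)} .

So for 0 < α < 1 the intensity falls and the instantaneous MTBF grows as test time accumulates, which is the growth behaviour the models above generalise.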
ABNT, Harvard, Vancouver, APA styles, etc.
7

Brunelle, Russell Dedric. "Customer-centered reliability measures for flexible multistate reliability models /". Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/10691.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
8

Wickstrom, Larry E. "Reliability of Electronics". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc700024/.

Full text of the source
Abstract:
The purpose of this research is not to research new technology but to improve existing technology and to understand how the manufacturing process works. Reliability Engineering falls under the category of Quality Control and uses predictions through statistical measurements and life testing to determine whether a specific manufacturing technique will meet customer satisfaction. The research also addresses the choice of materials and manufacturing processes needed to provide a device that will not only meet but exceed customer demand. Reliability Engineering is one of the final testing phases of any new product development or redesign.
ABNT, Harvard, Vancouver, APA styles, etc.
9

Hwang, Sungkun. "Predicting reliability in multidisciplinary engineering systems under uncertainty". Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54955.

Full text of the source
Abstract:
The proposed study develops a framework that can accurately capture and model input and output variables for multidisciplinary systems to mitigate the computational cost when uncertainties are involved. The dimension of the random input variables is reduced depending on the degree of correlation calculated by relative entropy. Feature extraction methods, namely Principal Component Analysis (PCA) and the Auto-Encoder (AE) algorithm, are developed for cases where the input variables are highly correlated. The Independent Features Test (IndFeaT) is implemented as the feature selection method when the correlation is low, to select a critical subset of model features. Moreover, an Artificial Neural Network (ANN), including the Probabilistic Neural Network (PNN), is integrated into the framework to correctly capture the complex response behavior of the multidisciplinary system with low computational cost. The efficacy of the proposed method is demonstrated with electro-mechanical engineering examples, including a solder joint and a stretchable patch antenna.
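As a minimal illustration of the dimension-reduction step described above, the following sketch applies plain PCA (via an SVD) to a synthetic set of correlated inputs and keeps enough components to explain 99% of the variance; the data, the 99% cut-off and the variable counts are assumptions for illustration, not values from the thesis.

import numpy as np

# Synthetic correlated inputs: 12 observed variables driven by 3 latent factors.
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 12))
X = latent @ mixing + 0.05 * rng.standard_normal((500, 12))

Xc = X - X.mean(axis=0)                         # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance explained per component
k = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)   # keep 99% of variance
Z = Xc @ Vt[:k].T                               # reduced input features
print(k, Z.shape)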
ABNT, Harvard, Vancouver, APA styles, etc.
10

Abujaafar, Khalifa Mohamed. "Quantitative human reliability assessment in marine engineering operations". Thesis, Liverpool John Moores University, 2012. http://researchonline.ljmu.ac.uk/6115/.

Full text of the source
Abstract:
Marine engineering operations rely substantially on high degrees of automation and supervisory control. This brings new opportunities as well as the threat of erroneous human actions, which account for 80-90% of marine incidents and accidents. In this respect, shipping environments are extremely vulnerable. As a result, decision makers and stakeholders have zero tolerance for accidents and environmental damage, and require high transparency on safety issues. The aim of this research is to develop a novel quantitative Human Reliability Assessment (HRA) methodology using the Cognitive Reliability and Error Analysis Method (CREAM) in the maritime industry. This work will facilitate risk assessment of human action and its applications in marine engineering operations. The CREAM model demonstrates the dynamic impact of a context on human performance reliability through Contextual Control Model controlling modes (COCOM-CMs). CREAM human action analysis can be carried out through the core functionality of a method, a classification scheme and a cognitive model. However, CREAM has exposed certain practical limitations in its applications, especially in the maritime industry, including the large interval presentation of Human Failure Probability (HFP) values and the lack of organisational factors in its classification scheme. All of these limitations stimulate the development of advanced techniques in CREAM as well as illustrate the significant gap between industrial needs and academic research. To address the above need, four phases of research study are proposed. In the first phase, the adequacy of organisation, one of the key Common Performance Conditions (CPCs) in CREAM, is expanded by identifying the associated Performance Influencing Factors (PIFs) and sub-PIFs in a Bayesian Network (BN) for realising the rational quantification of its assessment. In the second phase, the uncertainty treatment methods BN, Fuzzy Rule Base (FRB) and Fuzzy Set (FS) theory are used to develop new models and techniques that enable users to quantify HFP and facilitate the identification of possible initiating events or root causes of erroneous human action in marine engineering operations. In the third phase, the uncertainty treatment method Evidential Reasoning (ER) is used in conjunction with the new models and techniques developed in the second phase to produce the solutions to conducting quantitative HRA in conditions in which data is unavailable, incomplete or ill-defined. In the fourth phase, CREAM's prospective assessment and retrospective analysis models are integrated by using the established Multiple Criteria Decision Making (MCDM) method based on the combination of Analytical Hierarchical Process (AHP), entropy analysis and Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS). These enable Decision Makers (DMs) to select the best developed Risk Control Option (RCO) in reducing HFP values. The developed methodology addresses human actions in marine engineering operations with the significant potential of reducing HFP, promoting safety culture and facilitating the current Safety Management System (SMS) and maritime regulative frameworks. Consequently, the resilience of marine engineering operations can be further strengthened and appreciated by industrial stakeholders through addressing the requirements of more safety management attention at all levels.
Finally, several real case studies are investigated to show end users tangible benefits of the developed models, such as the reduction of the HFPs and optimisation of risk control resources, while validating the algorithms, models, and methods developed in this thesis.
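For flavour only, the toy calculation below marginalises an erroneous-action probability over a single context factor, which is the most basic form of the Bayesian-network-style quantification the abstract refers to; every state, weight and probability in it is invented.

# Toy marginalisation of a human failure probability over one context factor.
# All states and probabilities are hypothetical, not CREAM or thesis values.
p_adequate_org = 0.7                                   # P(organisation is adequate)
p_fail = {"adequate": 1e-3, "inadequate": 5e-3}        # P(erroneous action | state)
hfp = p_adequate_org * p_fail["adequate"] + (1 - p_adequate_org) * p_fail["inadequate"]
print(f"marginal human failure probability: {hfp:.2e}")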
ABNT, Harvard, Vancouver, APA styles, etc.
11

Masiello, Gregory L. "Reliability the life cycle driver : an examination of reliability management culture and practices". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FMasiello.pdf.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
12

Gong, Zitong. "Calibration of expensive computer models using engineering reliability methods". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028587/.

Full text of the source
Abstract:
The prediction ability of complex computer models (also known as simulators) relies on how well they are calibrated to experimental data. History Matching (HM) is a form of model calibration for computationally expensive models. HM sequentially cuts down the input space to find the fitting input domain that provides a reasonable match between model output and experimental data. A considerable number of simulator runs are required for typical model calibration. Hence, HM involves Bayesian emulation to reduce the cost of running the original model. Despite this, the generation of samples from the reduced domain at every iteration has remained an open and complex problem: current research has shown that the fitting input domain can be disconnected, with nontrivial topology, or be orders of magnitude smaller than the original input space. Analogous to a failure set in the context of engineering reliability analysis, this work proposes to use Subset Simulation - a widely used technique in engineering reliability computations and rare event simulation - to generate samples on the reduced input domain. Unlike Direct Monte Carlo, Subset Simulation progressively decomposes a rare event, which has a very small probability of occurrence, into sequential less rare nested events. The original Subset Simulation uses a Modified Metropolis algorithm to generate the conditional samples that belong to intermediate less rare events. This work also considers different Markov Chain Monte Carlo algorithms and compares their performance in the context of expensive model calibration. Numerical examples are provided to show the potential of the embedded Subset Simulation sampling schemes for HM. The 'climb-cruise engine matching' illustrates that the proposed HM using Subset Simulation can be applied to realistic engineering problems. Considering further improvements of the proposed method, a classification method is used to ensure that the emulation on each disconnected region gets updated. Uncertainty quantification of expert-estimated correlation matrices helps to identify a mathematically valid (positive semi-definite) correlation matrix between resulting inputs and observations. Further research is required to explicitly address the model discrepancy as well as to take the correlation between model outputs into account.
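To make the sampling idea concrete, here is a compact, self-contained Subset Simulation sketch in the usual reliability setting (standard-normal inputs, level probability 0.1, a made-up limit state g); it illustrates the general technique, not the thesis's implementation or its History Matching wrapper.

import numpy as np

def g(x):
    # Hypothetical limit state: "failure" (or the rare target event) when g <= 0.
    return 3.5 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = g(x)
    prob = 1.0
    nc = int(p0 * n)                               # seed chains per level
    for _ in range(20):                            # safety cap on levels
        order = np.argsort(y)
        thresh = y[order[nc - 1]]                  # p0-quantile of g
        if thresh <= 0.0:
            return prob * np.mean(y <= 0.0)        # last level: direct estimate
        prob *= p0
        cur = x[order[:nc]].copy()                 # seeds inside {g <= thresh}
        samples = [cur.copy()]
        for _ in range(n // nc - 1):               # modified-Metropolis moves
            cand = cur + rng.standard_normal(cur.shape)
            ratio = np.exp(0.5 * (cur**2 - cand**2))       # phi(cand) / phi(cur)
            accept = rng.random(cur.shape) < ratio
            prop = np.where(accept, cand, cur)
            keep = g(prop) <= thresh               # must stay in the level set
            cur = np.where(keep[:, None], prop, cur)
            samples.append(cur.copy())
        x = np.vstack(samples)
        y = g(x)
    return prob * np.mean(y <= 0.0)

print(subset_simulation(g, dim=10))                # roughly P(Z >= 3.5) ~ 2e-4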
ABNT, Harvard, Vancouver, APA styles, etc.
13

Farag, Reda, and Achintya Haldar. "A novel reliability evaluation method for large engineering systems". ELSEVIER SCIENCE BV, 2016. http://hdl.handle.net/10150/621495.

Full text of the source
Abstract:
A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples. (C) 2016 Faculty of Engineering, Ain Shams University. Production and hosting by Elsevier B.V.
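The "few intelligently selected evaluations" idea can be illustrated with an ordinary quadratic response surface: fit a surrogate to a handful of runs of an expensive model, then do the Monte Carlo on the surrogate. The stand-in model, the 9-point design and the distributions below are assumptions for illustration, not the paper's SFEM scheme.

import numpy as np

def expensive_model(x1, x2):
    # Stand-in for an expensive deterministic (e.g. finite-element) evaluation.
    return 4.0 - 0.8 * x1 - 0.5 * x2 + 0.06 * x1 * x2

# Small experimental design (9 runs) around the mean point.
grid = np.array([(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
y = expensive_model(grid[:, 0], grid[:, 1])

def basis(x1, x2):
    # Full quadratic basis in two variables.
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(grid[:, 0], grid[:, 1]), y, rcond=None)

# Cheap Monte Carlo on the fitted response surface.
rng = np.random.default_rng(7)
x1, x2 = rng.standard_normal(1_000_000), rng.standard_normal(1_000_000)
g_hat = basis(x1, x2) @ coef
print("estimated P(g <= 0):", np.mean(g_hat <= 0.0))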
ABNT, Harvard, Vancouver, APA styles, etc.
14

Sun, Yong. "Reliability prediction of complex repairable systems : an engineering approach". Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16273/1/Yong_Sun_Thesis.pdf.

Full text of the source
Abstract:
This research has developed several models and methodologies with the aim of improving the accuracy and applicability of reliability predictions for complex repairable systems. A repairable system is usually defined as one that will be repaired to recover its functions after each failure. Physical assets such as machines, buildings, vehicles are often repairable. Optimal maintenance strategies require the prediction of the reliability of complex repairable systems accurately. Numerous models and methods have been developed for predicting system reliability. After an extensive literature review, several limitations in the existing research and needs for future research have been identified. These include the following: the need for an effective method to predict the reliability of an asset with multiple preventive maintenance intervals during its entire life span; the need for considering interactions among failures of components in a system; and the need for an effective method for predicting reliability with sparse or zero failure data. In this research, the Split System Approach (SSA), an Analytical Model for Interactive Failures (AMIF), the Extended SSA (ESSA) and the Proportional Covariate Model (PCM), were developed by the candidate to meet the needs identified previously, in an effective manner. These new methodologies/models are expected to rectify the identified limitations of current models and significantly improve the accuracy of the reliability prediction of existing models for repairable systems. The characteristics of the reliability of a system will alter after regular preventive maintenance. This alteration makes prediction of the reliability of complex repairable systems difficult, especially when the prediction covers a number of imperfect preventive maintenance actions over multiple intervals during the asset's lifetime. The SSA uses a new concept to address this issue effectively and splits a system into repaired and unrepaired parts virtually. SSA has been used to analyse system reliability at the component level and to address different states of a repairable system after single or multiple preventive maintenance activities over multiple intervals. The results obtained from this investigation demonstrate that SSA has an excellent ability to support the making of optimal asset preventive maintenance decisions over its whole life. It is noted that SSA, like most existing models, is based on the assumption that failures are independent of each other. This assumption is often unrealistic in industrial circumstances and may lead to unacceptable prediction errors. To ensure the accuracy of reliability prediction, interactive failures were considered. The concept of interactive failure presented in this thesis is a new variant of the definition of failure. The candidate has made several original contributions such as introducing and defining related concepts and terminologies, developing a model to analyse interactive failures quantitatively and revealing that interactive failure can be either stable or unstable. The research results effectively assist in avoiding unstable interactive relationships in machinery during its design phase. This research on interactive failures pioneers a new area of reliability prediction and enables the estimation of failure probabilities more precisely. ESSA was developed through an integration of SSA and AMIF.
ESSA is the first effective method to address the reliability prediction of systems with interactive failures and with multiple preventive maintenance actions over multiple intervals. It enhances the capability of SSA and AMIF. PCM was developed to further enhance the capability of the above methodologies/models. It addresses the issue of reliability prediction using both failure data and condition data. The philosophy and procedure of PCM are different from existing models such as the Proportional Hazard Model (PHM). PCM has been used successfully to investigate the hazard of gearboxes and truck engines. The candidate demonstrated that PCM had several unique features: 1) it automatically tracks the changing characteristics of the hazard of a system using symptom indicators; 2) it estimates the hazard of a system using symptom indicators without historical failure data; 3) it reduces the influence of fluctuations in condition monitoring data on hazard estimation. These newly developed methodologies/models have been verified using simulations, industrial case studies and laboratory experiments. The research outcomes of this research are expected to enrich the body of knowledge in reliability prediction through effectively addressing some limitations of existing models and exploring the area of interactive failures.
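For reference, the Proportional Hazards Model that the abstract contrasts the PCM against has the standard Cox form (textbook notation, not taken from the thesis):

h(t \mid \mathbf{z}) \;=\; h_0(t)\,\exp\!\big(\boldsymbol{\beta}^{\top}\mathbf{z}\big),

where h_0(t) is a baseline hazard and z collects covariates such as condition-monitoring indicators; as the abstract notes, the PCM is formulated differently from this exponential-link form.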
ABNT, Harvard, Vancouver, APA styles, etc.
15

Sun, Yong. "Reliability prediction of complex repairable systems : an engineering approach". Queensland University of Technology, 2006. http://eprints.qut.edu.au/16273/.

Full text of the source
Abstract:
This research has developed several models and methodologies with the aim of improving the accuracy and applicability of reliability predictions for complex repairable systems. A repairable system is usually defined as one that will be repaired to recover its functions after each failure. Physical assets such as machines, buildings, vehicles are often repairable. Optimal maintenance strategies require the prediction of the reliability of complex repairable systems accurately. Numerous models and methods have been developed for predicting system reliability. After an extensive literature review, several limitations in the existing research and needs for future research have been identified. These include the following: the need for an effective method to predict the reliability of an asset with multiple preventive maintenance intervals during its entire life span; the need for considering interactions among failures of components in a system; and the need for an effective method for predicting reliability with sparse or zero failure data. In this research, the Split System Approach (SSA), an Analytical Model for Interactive Failures (AMIF), the Extended SSA (ESSA) and the Proportional Covariate Model (PCM), were developed by the candidate to meet the needs identified previously, in an effective manner. These new methodologies/models are expected to rectify the identified limitations of current models and significantly improve the accuracy of the reliability prediction of existing models for repairable systems. The characteristics of the reliability of a system will alter after regular preventive maintenance. This alteration makes prediction of the reliability of complex repairable systems difficult, especially when the prediction covers a number of imperfect preventive maintenance actions over multiple intervals during the asset's lifetime. The SSA uses a new concept to address this issue effectively and splits a system into repaired and unrepaired parts virtually. SSA has been used to analyse system reliability at the component level and to address different states of a repairable system after single or multiple preventive maintenance activities over multiple intervals. The results obtained from this investigation demonstrate that SSA has an excellent ability to support the making of optimal asset preventive maintenance decisions over its whole life. It is noted that SSA, like most existing models, is based on the assumption that failures are independent of each other. This assumption is often unrealistic in industrial circumstances and may lead to unacceptable prediction errors. To ensure the accuracy of reliability prediction, interactive failures were considered. The concept of interactive failure presented in this thesis is a new variant of the definition of failure. The candidate has made several original contributions such as introducing and defining related concepts and terminologies, developing a model to analyse interactive failures quantitatively and revealing that interactive failure can be either stable or unstable. The research results effectively assist in avoiding unstable interactive relationships in machinery during its design phase. This research on interactive failures pioneers a new area of reliability prediction and enables the estimation of failure probabilities more precisely. ESSA was developed through an integration of SSA and AMIF.
ESSA is the first effective method to address the reliability prediction of systems with interactive failures and with multiple preventive maintenance actions over multiple intervals. It enhances the capability of SSA and AMIF. PCM was developed to further enhance the capability of the above methodologies/models. It addresses the issue of reliability prediction using both failure data and condition data. The philosophy and procedure of PCM are different from existing models such as the Proportional Hazard Model (PHM). PCM has been used successfully to investigate the hazard of gearboxes and truck engines. The candidate demonstrated that PCM had several unique features: 1) it automatically tracks the changing characteristics of the hazard of a system using symptom indicators; 2) it estimates the hazard of a system using symptom indicators without historical failure data; 3) it reduces the influence of fluctuations in condition monitoring data on hazard estimation. These newly developed methodologies/models have been verified using simulations, industrial case studies and laboratory experiments. The research outcomes of this research are expected to enrich the body of knowledge in reliability prediction through effectively addressing some limitations of existing models and exploring the area of interactive failures.
ABNT, Harvard, Vancouver, APA styles, etc.
16

Mwanga, Alifas Yeko. "Reliability modelling of complex systems". Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-12142006-121528.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
17

Er, Kim Hua. "Analysis of the reliability disparity and reliability growth analysis of a combat system using AMSAA extended reliability growth models". Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/1788.

Full text of the source
Abstract:
The first part of this thesis aims to identify and analyze what aspects of the MIL-HDBK-217 prediction model are causing the large variation between prediction and field reliability. The key findings of the literature research suggest that the main reason for the inaccuracy in prediction is that the constant failure rate assumption used in MIL-HDBK-217 is usually not applicable. Secondly, even if the constant failure rate assumption is applicable, the disparity may still exist in the presence of design and quality related problems in new systems. A possible solution is to apply reliability growth testing (RGT) to new systems during the development phase in an attempt to remove these design deficiencies so that the system's reliability will grow and approach the predicted value. In view of the importance of RGT in minimizing the disparity, this thesis provides a detailed application of the AMSAA Extended Reliability Growth Models to the reliability growth analysis of a combat system. It shows how program managers can analyze test data using commercial software to estimate the system's demonstrated reliability and the increase in reliability due to delayed fixes.
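The constant-failure-rate assumption that the abstract questions is what makes handbook-style parts-count arithmetic so simple: failure rates of series parts add, and reliability over a mission time follows the exponential law. The part list and rates below are invented, purely to show the arithmetic.

import math

# Invented part counts and failure rates (failures per 1e6 hours).
parts = {
    "resistor":          (120, 0.002),
    "ceramic_capacitor": (60,  0.004),
    "digital_ic":        (15,  0.05),
    "connector":         (8,   0.1),
}
lam_system = sum(qty * lam for qty, lam in parts.values())   # series system: rates add
mtbf_hours = 1e6 / lam_system
r_1000h = math.exp(-lam_system * 1000 / 1e6)                 # R(t) = exp(-lambda * t)
print(f"system rate: {lam_system:.3f} /1e6 h, MTBF: {mtbf_hours:,.0f} h, R(1000 h): {r_1000h:.4f}")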
ABNT, Harvard, Vancouver, APA styles, etc.
18

Jeong, Han Koo. "Reliability of laminated composite plates". Thesis, University of Southampton, 1999. https://eprints.soton.ac.uk/21869/.

Full text of the source
Abstract:
This thesis deals with reliability analysis of laminated composite plates subjected to transverse lateral pressure loads. Input parameters to strengths of the plates such as applied transverse lateral pressure loads, elastic moduli, geometric and ultimate strength values of the plates are treated as basic design variables, and specific probability distributions are applied to them to take into account the variable nature of these basic design variables. Based on the statistical information on the basic design variables, these variables are pseudo-randomly generated in accordance with the corresponding probability distributions by using statistical sampling techniques. Generated random values of the basic design variables corresponding to the applied loads, elastic moduli and geometric values are substituted into various laminated plate theories which can accommodate different lamination schemes and boundary conditions to assess the probabilistic strengths of the plates. The limit state equations are developed by using maximum stress, maximum strain, Tsai-Hill, Tsai-Wu, Hoffman and Azzi-Tsai-Hill failure criteria. Calculated probabilistic plate strengths and generated random values of the ultimate strength basic design variables of the plates are substituted into the developed limit state equations to define the failure or survival state of the plates. In solving the limit state equations, structural reliability techniques are adopted and evolved appropriately for the reliability analysis of the plates. Developed reliability analysing algorithms are applied to laminated plates from experiment to check their validity. Finally, the EUROCOMP Design Code is compared with the developed reliability analysis procedures by applying both approaches to the strengths of laminated plates.
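A stripped-down version of the sampling scheme described above looks like this: sample the basic design variables from assumed distributions, push them through a deliberately crude, hypothetical closed-form stress model, and count limit-state violations against a sampled ultimate strength. None of the distributions or coefficients below come from the thesis.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
pressure  = rng.lognormal(mean=np.log(0.12), sigma=0.15, size=n)   # MPa, lateral pressure
thickness = rng.normal(4.0, 0.1, size=n)                           # mm, plate thickness
strength  = rng.weibull(8.0, size=n) * 300.0                       # MPa, ultimate strength

# Hypothetical bending-stress surrogate ~ k * p * (a / t)^2, max-stress criterion.
a_over_t = 250.0 / thickness
stress = 0.3 * pressure * a_over_t**2
pf = np.mean(stress > strength)            # limit state g = strength - stress < 0
print(f"estimated probability of failure: {pf:.2e}")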
ABNT, Harvard, Vancouver, APA styles, etc.
19

Korssell, Christine, and Angelica Waernlund. "Analysis of Disconnection Circuit Breaker Reliability". Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214749.

Full text of the source
Abstract:
For the modern human, electricity has become an essential part of life. It is first when the power goes out that society stops and the need for electricity becomes obvious. The largest power failures have been caused by broken devices in power grids. This project examined different ways to analyze the reliability, lifespan and number of failures over time for the disconnecting circuit breaker in power grids. Given data for the circuit breakers has been sorted into different files, and the main method used was linear regression, which was applied to the sorted data. The results showed that the values of the circuit breakers deteriorated over time, and this will be presented in more detail in this report. It can be concluded that the disconnecting circuit breakers age as expected, except for one parameter that gave an unexpected result.
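In the same spirit, a minimal degradation-trend fit might look like the sketch below, fitting a straight line to condition measurements over time and extrapolating to a hypothetical end-of-life threshold; the measurements, units and threshold are made up, not the breaker data analysed in the thesis.

import numpy as np

# Hypothetical contact-resistance measurements over the years.
years = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
resistance_uohm = np.array([52, 55, 54, 58, 61, 63, 66], dtype=float)

slope, intercept = np.polyfit(years, resistance_uohm, 1)   # linear regression
threshold = 80.0                                            # hypothetical failure criterion
estimated_life = (threshold - intercept) / slope            # years to reach threshold
print(f"trend: {slope:.2f} uOhm/year, estimated time to threshold: {estimated_life:.1f} years")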
ABNT, Harvard, Vancouver, APA styles, etc.
20

Elfashny, Kamal 1960. "Reliability analysis of telecommunication towers". Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22649.

Full text of the source
Abstract:
The reliability analysis of telecommunication networks requires an estimate of the probability of failure of antenna-supporting structures. Lacking such estimates, the network planners tend to assume that the probability of failure of towers is negligible. On the other hand, reliability concepts implicit in the codes are not always compatible with those used in reliability analysis of the network; in particular, the implicit probabilities associated with partial load and resistance factors are obtained for idealized structural members and do not address the reliability of the structure as a system. Advances in structural reliability, combined with more extensive climatologic data, can be used to implement a probabilistic approach for the design of towers.
The objective of this study is to propose a procedure for calculating the probability of mechanical failure of self-supported telecommunication towers. The procedure introduces the concept of calculating the conditional probability of failure which can be used with different joint distributions of wind and ice with a minimum of computations. As an example, the methodology is applied to the CEBJ tower in James Bay. The structure is assumed to behave linearly and to be statically determinate. In consequence, the structure can be modelled as a weakest link model.
The study demonstrates the possibility of estimating the probability of failure for the whole structure using a rational approach. The critical members of the structure and the relative importance of each of the design parameters with respect to the probability of failure are identified in order to simplify the reliability analysis. The probability of failure is most sensitive with respect to the joint probability distribution function of wind speed and ice thickness. Upper and lower bound estimates of the probability of failure are presented for different assumptions about the joint distribution. These results indicate the need for a better model for the environmental loads.
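The conditional-probability idea in the abstract reduces, after discretisation, to weighting a fragility surface by a joint wind/ice probability mass; for a statically determinate (weakest-link) tower that fragility could itself be assembled as 1 - prod(1 - p_member). The grid, fragility curve and joint distribution below are hypothetical placeholders.

import numpy as np

wind = np.linspace(10, 60, 26)             # m/s
ice = np.linspace(0, 50, 26)               # mm radial ice
V, T = np.meshgrid(wind, ice, indexing="ij")

# Conditional probability of failure given (wind, ice) -- hypothetical fragility.
p_cond = 1.0 / (1.0 + np.exp(-(0.15 * V + 0.08 * T - 9.0)))

# Hypothetical joint probability mass of annual maxima on the same grid.
joint = np.exp(-((V - 25) / 8)**2 - ((T - 10) / 12)**2)
joint /= joint.sum()

annual_pf = np.sum(p_cond * joint)         # P_f = sum of P(fail | v, t) * P(v, t)
print("annual probability of failure:", annual_pf)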
ABNT, Harvard, Vancouver, APA styles, etc.
21

Lin, Daming. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint /". Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13999618.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
22

Brophy, Dennis J., and James D. O'Leary. "Software evaluation for developing software reliability engineering and metrics models /". Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA361889.

Full text of the source
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 1999.
"March 1999". Thesis advisor(s): Norman F. Schneidewind, Douglas Brinkley. Includes bibliographical references (p. 59-60). Also available online.
ABNT, Harvard, Vancouver, APA styles, etc.
23

Li, Hong. "An inverse reliability method and its applications in engineering design". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0026/NQ38929.pdf.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
24

Koduru, Smitha Devi. "Performance-based earthquake engineering with the first-order reliability method". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/883.

Full text of the source
Abstract:
Performance-based earthquake engineering is an emerging field of study that complements the prescriptive methods that the design codes provide to ensure adequate seismic performance of structures. Accounting for uncertainties in the performance assessments forms an important component in this area. In this context, the present study focuses on two broad themes; first, treatment of uncertainties and the application of the first-order reliability method (FORM) in finite-element reliability analysis, and second, the seismic risk assessment of reinforced concrete structures for performance states such as, collapse and monetary loss. In the first area, the uncertainties arising from inherent randomness (“aleatory uncertainty”) and due to the lack of knowledge (“epistemic uncertainty”) are identified. A framework for the separation of these uncertainties is proposed. Following this, the applicability of FORM to the linear and nonlinear finite-element structural models under static and dynamic loading is investigated. The case studies indicate that FORM is applicable for linear and nonlinear static problems. Strategies are proposed to circumvent and remedy potential challenges to FORM. In the case of dynamic problems, the application of FORM is studied with an emphasis on cumulative response measures. The limit-state surface is shown to have a closed and nonlinear geometric shape. Solution methods are proposed to obtain probability bounds based on the FORM results. In the application-oriented second area of research, at first, the probability of collapse of a reinforced concrete frame is assessed with nonlinear static analysis. By modelling the post-failure behaviour of individual structural members, the global response of the structure is estimated beyond the component failures. The final application is the probabilistic assessment of monetary loss for a high-rise shear wall building due to the seismic hazard in the Cascadia subduction zone. A 3-dimensional finite-element model of the structure with nonlinear material models is subjected to stochastic ground motions in the reliability analysis. The parameters for the stochastic ground motion model are developed for Vancouver, Canada. Monetary losses due to the damage of structural and non-structural components are included.
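As background for the FORM discussion above, the classic Hasofer-Lind / Rackwitz-Fiessler iteration in standard-normal space fits in a few lines; the limit-state function here is a hypothetical stand-in, not one of the thesis's finite-element models.

import numpy as np
from scipy.stats import norm

def g(u):
    # Hypothetical limit state in standard-normal space (g <= 0 means failure).
    return 5.0 - u[0] - 0.5 * u[1]**2

def grad_g(u, h=1e-6):
    # Central finite-difference gradient.
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(len(u))])

def form_hlrf(u0, tol=1e-6, max_iter=100):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = grad * (grad @ u - g(u)) / (grad @ grad)   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)               # reliability index
    return beta, norm.cdf(-beta)           # first-order failure probability

beta, pf = form_hlrf([0.1, 0.1])
print(f"beta = {beta:.3f}, pf = {pf:.2e}")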
ABNT, Harvard, Vancouver, APA styles, etc.
25

Brophy, Dennis J., and James D. O'Leary. "Software evaluation for developing software reliability engineering and metrics models". Thesis, Monterey, California ; Naval Postgraduate School, 1999. http://hdl.handle.net/10945/13581.

Full text of the source
Abstract:
Today's software is extremely complex, often constituting millions of lines of instructions. Programs are expected to operate smoothly on a wide variety of platforms. There are continuous attempts to try to assess what the reliability of a software package is and to predict what the reliability of software under development will be. The quantitative aspects of these assessments deal with evaluating, characterizing and predicting how well software will operate. Experience has shown that it is extremely difficult to make something as large and complex as modern software and predict with any accuracy how it is going to behave in the field. This thesis proposes to create an integrated system to predict software reliability for mission critical systems. This will be accomplished by developing a flexible DBMS to track failures and to integrate the DBMS with statistical analysis programs and software reliability prediction tools that are used to make calculations and display trend analysis. It further proposes a software metrics model for fault prediction by determining and manipulating metrics extracted from the code.
ABNT, Harvard, Vancouver, APA styles, etc.
26

Blake, Etoile Saint-Melson. "Computer aided techniques for the reliability assessment of engineering systems". Thesis, London South Bank University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.279708.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
27

Moyer, Gordon Stanley 1961. "AN EXPERT SYSTEM FOR FAILURE MODE INVESTIGATION IN RELIABILITY ENGINEERING". Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/277237.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
28

Paniagua, Sánchez-Mateos Jesús. "Reliability-Constrained Microgrid Design". Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187715.

Full text of the source
Abstract:
Microgrids are new and challenging power systems under development. This report presents a feasibility study of microgrid development. This is an essential task before implementing microgrid systems. It is extremely important to know the number and size of distributed energy resources (DERs) needed, and it is necessary to compare investment costs with benefits in order to evaluate the profitability of microgrids. Under the assumption that a large number of DERs improves the reliability of microgrids, an optimization problem is formulated to obtain the appropriate mix of distributed energy resources. Uncertainty in physical and financial parameters is taken into account to model the problem considering different scenarios. Uncertainty is present in the forecasts of load demand, renewable energy generation and electricity market prices, in the availability of distributed energy resources and in microgrid islanding. It is modeled in a stochastic way. The optimization problem is formulated first as a mixed-integer program solved via branch and bound, and then improved by formulating a two-stage problem using Benders' Decomposition, which shortens the solution time. This optimization problem is divided into a long-term investment master problem and a short-term operation subproblem, and it is solved iteratively until it reaches convergence. The Benders' Decomposition optimization problem is applied to real data from the Illinois Institute of Technology (IIT) and it gives the ideal mix of distributed energy resources for different uncertainty scenarios. These distributed energy resources are selected from an initial set. It proves the usefulness of this optimization technique, which can also be applied to different microgrids and data. The different solutions obtained for different scenarios are explained and analyzed. They show the possibility of microgrid implementation and determine the most favorable scenarios to reach the microgrid implementation successfully. Reliability is a term highly linked to the microgrid concept and one of the most important reasons for microgrid development. Thus an analysis of reliability importance is implemented using the importance index of interruption cost in order to measure the reliability improvement of developing microgrids. It shows and quantifies the reliability improvement in the system.
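For orientation, the generic two-stage stochastic-investment form that Benders' Decomposition is applied to can be written as follows (textbook notation, not the thesis's exact microgrid model):

\min_{x \in X} \; c^{\top} x + \sum_{s} \pi_s \, Q_s(x),
\qquad
Q_s(x) = \min_{y \ge 0} \left\{\, q_s^{\top} y \;:\; W y \ge h_s - T_s x \,\right\},

and each scenario subproblem solved at the current master solution \hat{x} returns an optimal dual vector \hat{\lambda}_s, which yields the optimality cut

\theta_s \;\ge\; \hat{\lambda}_s^{\top} \left( h_s - T_s x \right)

that the long-term investment master problem accumulates until the cut-based lower bound meets the scenario subproblem costs.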
ABNT, Harvard, Vancouver, APA styles, etc.
29

Alali, Dawood. "Probabilistic reliability assessment of transmission systems". Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/88271/.

Full text of the source
Abstract:
Power system reliability is defined as the ability of a power system to perform its function of maintaining supply without allowing network variables (e.g. voltage, component loading and frequency) to stray too far from the standard ranges. Traditionally, over many decades, reliability has been assessed using deterministic criteria, e.g., ‘N-1’ or ‘N-2’ standards under prescribed severe system demand levels. However, using the so-called worst-case deterministic approach does not provide explicitly an assessment of the probability of failure of the component or system, and the likelihood of all outages is treated equally. On the other hand, a probabilistic security assessment may offer advantages by considering (i) a statistical description of the performance of the system together with (ii) the application of historical fault statistics that provide a measure of the probability of faults leading to component or system outages. The electrical transmission system, like other systems, is concerned with reducing different risks and costs to within acceptable limits. Therefore, a more precise algorithm of a probabilistic reliability assessment of electrical transmission systems offers an opportunity to achieve such efficiency. This research work introduces the concept of applying the Line Overloading Risk Index (LORI) to assess one of the risks to transmission systems, namely, line overloading. Line failure or outage due to line overloading is catastrophic; it may lead to either load interruptions or system blackout. Some recent studies have focused on the assessment of the LORI; however, such research has been restricted to the analysis of systems with very few intermediate demand levels and an assumed constant line thermal rating. This research work aims to extend the evaluation of the LORI through a comprehensive evaluation of transmission system performance under hour-by-hour system demand levels over a one-year period, for intact systems as well as under ‘N-1’ and ‘N-2’ conditions. In addition, probable hourly line thermal ratings have also been evaluated and considered over an annual cycle based on detailed meteorological data. In order to accomplish a detailed analysis of the system reliability, engineering data and historical line fault and maintenance data in real transmission systems were employed. The proposed improved probabilistic reliability assessment method was evaluated using a software package, namely, NEPLAN, thus making it possible to simulate different probable load flow cases instead of assuming a single ‘worst case scenario’. An automated process function in NEPLAN was developed using an extensive programming code in order to expedite the load flow modelling, simulation and result reporting. The successful use of the automation process to create multiple models and apply different contingencies has made possible this probabilistic study, which would not have been possible using a ‘manual’ simulation process. When calculating the LORI, the development of a Probability Distribution Function (PDF) for line loading, line thermal rating and system demand was essential and useful. The developed algorithm takes into consideration the likelihood of events occurring in addition to severity, which offers opportunity for more efficient planning and operation of transmission systems. Study cases performed on real electric transmission systems in Dubai and the GB have demonstrated that the developed algorithm has potential as a useful tool in system planning and operation.
The research presented in this thesis offers an improved algorithm of probabilistic reliability assessment for transmission systems. The selected index, along with the developed algorithm, can be used to rank the transmission lines based on the probabilistic line overloading risk. It provides valuable information on the degree of line overloading vulnerability for different uncertainties.
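A toy version of the hour-by-hour screening described above might look like the sketch below, comparing a synthetic hourly loading profile against a seasonally varying thermal rating and combining frequency with severity into a single number; the profiles and the index definition are invented and are not the thesis's LORI.

import numpy as np

rng = np.random.default_rng(3)
hours = 8760
# Hypothetical hourly line loading (MVA) with a daily cycle plus noise.
loading = 400 + 150 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 40, hours)
# Hypothetical seasonally varying thermal rating (MVA).
rating = 620 - 60 * np.sin(2 * np.pi * np.arange(hours) / 8760)

overload = loading > rating
severity = np.where(overload, (loading - rating) / rating, 0.0)
risk_index = overload.mean() * severity[overload].mean() if overload.any() else 0.0
print(f"hours overloaded: {overload.sum()}, toy risk index: {risk_index:.4f}")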
ABNT, Harvard, Vancouver, APA styles, etc.
30

Braden, Derek Richard. "Non-destructive evaluation of solder joint reliability". Thesis, Liverpool John Moores University, 2012. http://researchonline.ljmu.ac.uk/6124/.

Full text of the source
Abstract:
A through-life non-destructive evaluation technique is presented in which a key solder joint feature, nucleating at the bump to silicon interface and propagating across a laminar crack plane, is captured and tracked using acoustic microscopy imaging (AMI). The feasibility of this concept was successfully demonstrated by employing the measurement technique in combination with Finite Element Analysis (FEA) to study the impact of component floor plan layout on the reliability of electronics systems subjected to thermal cycling. A comprehensive review of current and emerging packaging and interconnect technologies has shown an increasing move from conventional 2D to 3D packaging. These present new challenges for reliability and Non Destructive Evaluation (NDE) due to solder joints being hidden beneath the packaging, and not ordinarily visible or accessible for inspection. Solutions are developed using non-destructive testing (NDT) techniques that have the potential to detect and locate defects in microelectronic devices. This thesis reports on X-ray and Acoustic Micro Imaging (AMI), which have complementary image discriminating features. Gap type defects are hard to find using X-ray alone due to low contrast and spot size resolution, whereas AMI, having better axial resolution, has allowed cracks and delamination at closely spaced interfaces to be investigated. The application of AMI to the study of through-life solder joint behaviour has been achieved for the first time. Finite Element Analysis and AMI performance were compared to measure solder joint reliability for several realistic test cases. AMI images were taken at regular intervals to monitor through-life behaviour. Image processing techniques were used to extract a diameter measurement for a laminar crack plane, within a solder joint damage region occurring at the bump to silicon interface. FEA solder joint reliability simulations for flip-chip and micro-BGA (mBGA) packages placed on FR4 PCBs were compared to the AMI measurement performance, with a reasonable level of correlation observed. Both techniques clearly showed significant reliability degradation of the critical solder joints located furthest from the neutral axis of the package, typically residing at the package corners. The technique also confirmed that circuit board thickness can affect interconnect reliability, as can floor plan. Improved correlation to the real world environment was achieved when simulation models considered the entire floor plan layout and constraints imposed on the circuit board assembly. This thesis established a novel through-life solder joint evaluation method crucial to the development of better physics of failure models and the advancement of model based prognostics in electronics systems.
ABNT, Harvard, Vancouver, APA styles, etc.
31

Dalla, Valle Paola. "Reliability in pavement design". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/28999/.

Full text of the source
Abstract:
This research presents a methodology that accounts for variability of key pavement design input variables and variations due to lack-of-fit of the design models and assesses effects on pavement performance (fatigue and deformation life). Variability is described by statistical terms such as mean and standard deviation and by its probability density distribution. The subject of reliability in pavement design has pushed many highway organisations around the world to review their design methodologies to evaluate the effect of variations in materials on pavement performance. This research has reinforced this need for considering the variability of design parameters in the design procedure and to conceive a pavement system in a probabilistic way, similar to structural designs. This study has only considered flexible pavements. The sites considered for the analysis, all in the UK (including Northern Ireland), were mainly motorways or major trunk roads. Pavement survey data analysed were for Lane 1, the most heavily trafficked lane. Sections 1km long were considered wherever possible. Statistical characterisation of the variation of layer thickness, asphalt stiffness and subgrade stiffness input parameters is addressed. A model is then proposed which represents an improvement on the Method of Equivalent Thickness for the calculation of strains and life for flexible pavements. The output is a statistical assessment of the estimated pavement performance. The proposed model to calculate the fatigue and deformation life is very fast and simple, and is well suited to use in a pavement management system where stresses and strains must be calculated millions of times. The research shows that the parameters with the greatest influence on the variability of predicted fatigue performance are the asphalt stiffness modulus and thickness. The parameters with the greatest influence on the variability of predicted deformation performance are the granular subbase thickness, the asphalt thickness and the subgrade stiffness.
ABNT, Harvard, Vancouver, APA styles, etc.
32

Games, A. M. "Some aspects of common cause failure analysis in engineering systems". Thesis, University of Liverpool, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383417.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
33

Sa, Yingshi 1965. "Reliability analysis of electric distribution lines". Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29546.

Full text of the source
Abstract:
Wood Poles are extensively used in North America as supports for electric distribution lines. On average, wood poles have a service life of 40 years with a replacement cost of approximately $2000. Since the distribution network is of relatively recent construction, maintenance and replacement costs have been relatively small compared to the total number of poles in service.
The goal of this thesis is to use the FORM/SORM algorithm to evaluate the reliability of a single pole and to present the results obtained when the method is applied to a sample of 887 wood poles inspected in the field. The procedure was also applied to a sample of poles designed according to the current codes in order to calibrate the evaluation procedure. The results indicate that the proposed procedure will improve the current maintenance and replacement strategy by guaranteeing a more uniform level of reliability throughout the network and by decreasing by up to 33% the number of wood pole replacements. (Abstract shortened by UMI.)
ABNT, Harvard, Vancouver, APA styles, etc.
34

Kalantarnia, Maryam. "Reliability analysis of spillway gate systems". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123191.

Full text of the source
Abstract:
The goal of this research is to develop a methodology to accurately determine the reliability of spillway gate systems, particularly for spillways that experience harsh environmental conditions and prolonged periods of dormancy. The significance of this study lies in the fact that spillways are rarely in use and remain inactive for most of their service life. Components of emergency spillway gate systems spend the majority of their service life in a dormant state and are activated only during emergencies such as floods or load rejection, or on a regular basis for inspection and testing. Also, most spillways are located in remote areas and are subjected to severe environmental conditions which can cause early degradation of components. Furthermore, components of old spillway gate systems are often custom made with no readily available spare parts and little information on the reliability of existing components. These characteristics are very different from those of the equipment used in an industrial setting, making it difficult for traditional methods to deliver accurate estimates on the reliability of such systems. Therefore, the development of a methodology that is customized to such conditions and incorporates unique parameters and state-of-the-art reliability techniques can contribute greatly to the dam industry by ensuring the safe operation of spillway systems on demand. The first step in this approach is geared towards system modeling, in which a reliability model is developed for the spillway gate system taking into account all components, their relative interactions, latent failures due to dormancy, environmental conditions and the type and frequency of inspections and tests. The next step is to develop a quantitative approach to update the availability of the spillway gate system based on real time conditions after each inspection. In this step, a Condition Indexing (CI) approach is combined with dormant availability analysis to evaluate the changes in the state of the system in real time using CI data obtained at each inspection. This approach provides a tool for dam owners to convert qualitative and descriptive results obtained from inspections to an index used as a comparative measure to detect real time changes in the availability of spillway gate systems. Next, inspection and testing procedures of spillway gate systems are investigated to evaluate the effect of different types and frequencies on the reliability of various types of components and the entire system. Lastly, the optimum inspection and testing strategy is determined, minimizing system costs, including costs related to inspection and testing and the consequences of failure, while at the same time maintaining the availability of the spillway gate system above a predefined limit. A genetic algorithm and Creeping Random Search are used to solve this optimization problem. Using these methods, the optimum interval for each type of test is determined and the minimum system cost is calculated based on the optimum intervals. This methodology is used to develop a software application that incorporates all of the above steps into a user friendly program.
This software application has been developed for availability analysis of spillway systems and allows users to model complex systems, add inspection, tests and component replacement options to the system, determine the availability of the system as a function of service life and identify the optimum inspection and testing period based on unavailability limits and costs of inspections/tests vs. consequence of failure. This program can be used as a tool by dam owners to accurately determine the availability of custom spillways and to select optimal inspection and testing plans that contribute most to increase the availability of the system.
Le but de cette recherche est de développer une méthodologie pour déterminer avec précision la fiabilité des évacuateurs de barrages en particulier pour les évacuateurs qui sont exposés à des conditions environnementales extrêmes et sont sujets à de longues périodes d'inactivité. L'importance de cette étude réside dans le fait que les évacuateurs sont rarement utilisés et demeurent inactifs pendant la majeure partie de leur durée de vie. Les composants des évacuateurs d'urgence passent la majorité de leur durée de vie dans un état de dormance et ne sont activés que lors de situations d'urgence telles que les inondations ou le rejet de la charge ou sur une base régulière pour l'inspection et des tests. En plus, la plupart des évacuateurs sont situés dans des régions avec accès limité et sont soumis à des conditions environnementales extrêmes qui peuvent causer une dégradation rapide des composants. En outre, les composants de vieux évacuateurs sont souvent fabriqués sur mesure, sans pièces de rechange facilement disponibles et peu d'informations sont disponibles sur leur fiabilité. Ces caractéristiques sont très différentes de celles des équipements utilisés dans un milieu industriel ce qui rend difficile l'application des méthodes d'analyse conventionnelles pour estimer la fiabilité de ces systèmes. Par conséquent, le développement d'une méthodologie qui est adaptée à ces conditions peut grandement contribuer à améliorer la sécurité de fonctionnement des évacuateurs sur demande.Cette étude vise à élaborer des procédures d'analyse de fiabilité qui considèrent les différentes fonctions et caractéristiques d'un évacuateur, y compris tous les composants électriques, mécaniques et structuraux. L'un des principaux défis dans cette évaluation est d'obtenir des estimations réalistes de la fiabilité de chaque composant. La première étape de cette approche est la modélisation du système en tenant compte de tous les composants, de leurs interactions, des défaillances latentes en période d'inactivité, des conditions environnementales, du type et de la fréquence des inspections et des essais. Une approche quantitative a été développée afin de mettre à jour la disponibilité des évacuateurs en fonction de l'état des composants suite à une inspection. Dans cette approche, une évaluation du niveau de fiabilité des composants est obtenue en fonction d'un diagnostic basé sur des observations qualitatives et quantitatives recueillies lors des inspections. Le modèle utilise cette information et intègre un modèle de détérioration afin de prédire la disponibilité des évacuateurs.Finalement, les procédures d'inspection et d'essais sur les évacuateurs sont étudiées pour évaluer leur effet sur la fiabilité en fonction de leurs caractéristiques, de leur efficacité et de leur fréquence pour les différents types de composants et l'ensemble du système. Enfin, une stratégie optimale pour les inspections et les essais est déterminée en minimisant une fonction de coûts qui intègre les coûts liés aux essais et inspections et les conséquences d'une défaillance et en respectant une norme minimale de fiabilité. Les algorithmes d'optimisation basés sur algorithme génétique et la recherche aléatoire sont utilisés pour résoudre ce problème. En utilisant ces méthodes, les fréquences optimales sont déterminées pour chaque type d'essai.Cette méthode est utilisée pour développer un logiciel qui intègre toutes les étapes ci-dessus. 
Ce logiciel a été développé spécifiquement pour l'analyse de la disponibilité des évacuateurs en utilisant la programmation orientée par objet et permet aux utilisateurs de modéliser des systèmes complexes, ajouter les inspections, les essais et les options de remplacement de composants du système, déterminer la disponibilité du système en fonction de la durée de vie, et identifier les fréquences d'inspection et d'essai optimales.
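To make the dormant-availability and inspection-interval ideas in this abstract concrete, the sketch below models a single standby component whose failure is revealed only by a periodic test, and picks a test interval by coarse grid search. The thesis instead couples a full system model with a genetic algorithm and Creeping Random Search; every rate, cost and limit below is an assumed placeholder.

```python
# Minimal sketch of dormant availability with periodic testing and a
# simple cost-based choice of test interval T (all values hypothetical).
import math

lam      = 1.0e-5     # dormant failure rate [1/h] (assumed)
demand   = 2.0e-4     # rate of real demands on the gate [1/h] (assumed)
c_test   = 2_000.0    # cost of one inspection/test (assumed)
c_fail   = 5.0e6      # consequence cost of failure on demand (assumed)
q_limit  = 0.02       # maximum tolerated mean unavailability (assumed)

def mean_unavailability(T):
    # time-average probability the dormant component is failed,
    # exact for an exponential failure time renewed at each test
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

def cost_rate(T):
    # inspection cost per hour + expected failure-on-demand cost per hour
    return c_test / T + c_fail * demand * mean_unavailability(T)

# coarse grid search stands in for the optimization algorithms of the thesis
candidates = [T for T in range(100, 20001, 100) if mean_unavailability(T) <= q_limit]
best_T = min(candidates, key=cost_rate)
print(f"T* = {best_T} h, q = {mean_unavailability(best_T):.4f}, "
      f"cost rate = {cost_rate(best_T):.2f}/h")
```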
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Bentil, Joseph Kingsley Attom. "Improving Buffalo City's sub-transmission reliability". Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/23388.

Texto completo da fonte
Resumo:
Several sudden, large-scale disruptions in electrical service have deeply affected both social stability and economic development in the communities concerned and have lowered reliability performance. Preventing such catastrophic incidents poses major challenges for reliability studies and operational practice in Buffalo City's sub-transmission network. Field investigations show that aging infrastructure, relay failures and a reactive maintenance practice are the main contributors. Motivated by these challenges, this dissertation analyses the transition of the sub-transmission network from a lower level of reliability to an economically acceptable level of reliability. The operational "stress" on the network and the resulting large-scale power interruptions are studied, and the transition from the existing sub-transmission configuration to alternative sub-transmission models is presented. Load flow and fault level calculations identify the loading trends most likely to cause the operational "pressure" behind unplanned interruptions. The research identifies the most appropriate responses to aging equipment, reactive maintenance and protection system weaknesses. DIgSILENT Power Factory is used to simulate and quantify the problems that could occur in the sub-transmission network in the immediate future, and measures to mitigate conditions that would make the network more prone to a catastrophic blackout are presented. Corrective measures comprising refurbishment of aging infrastructure, improved relay responsiveness and planned preventative maintenance are recommended. The development of these corrective measures and the proposed network model is key to reaching the higher levels of reliability in energy supply that communities in the Buffalo City Metropolitan Municipality require.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Huang, Wei. "Reliability analysis considering product performance degradation". Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/279991.

Texto completo da fonte
Resumo:
This dissertation presents a statistical model and analysis procedure for product performance aging degradation data. This model takes into account the strictly increasing/decreasing nature of performance measurements at multiple observation times. Maximum likelihood estimation (MLE) is used to estimate the time-varying parameters of the proposed statistical model. The analysis of both generated data and field data is presented. To demonstrate product reliability under aging, an analysis of surface-mounted solder joints subject to thermal fatigue is included in the dissertation. This analysis was done by first examining published life test data and then identifying the intermetallic compound (IMC) thickness randomness. Results indicate that the IMC layer thickness randomness may have a significant influence on the Mean Time To Failure (MTTF) and the reliability at high numbers of thermal cycles. The analysis of products with competing hard and soft failure modes is presented in terms of distribution independence. Derivation and examples are included for the event when the product finally fails in a specific failure mode. Finally, an improved strength-stress interference (SSI) reliability model is derived for analyzing a more general engineering degradation problem. This model incorporates both stochastic strength aging degradation and the stochastic loading force directed at the product. Statistical inference for simple stochastic processes and numerical examples are analyzed and discussed to verify the model.
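A minimal stress-strength interference calculation in this spirit is sketched below, assuming a normally distributed load and a strength whose mean degrades linearly with age; the distributions and degradation law are invented for illustration and are not the dissertation's fitted model.

```python
# Stress-strength interference (SSI) sketch: reliability as the probability
# that aged strength still exceeds the applied load (illustrative numbers).
import math

def phi(x):                       # standard normal CDF
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

mu_L, sigma_L = 300.0, 30.0       # load (assumed units)
mu_S0, sigma_S = 500.0, 40.0      # initial strength (assumed)
degr_rate = 8.0                   # mean strength loss per year (assumed)

def reliability(t_years):
    mu_S = mu_S0 - degr_rate * t_years                  # aged mean strength
    beta = (mu_S - mu_L) / math.sqrt(sigma_S**2 + sigma_L**2)
    return phi(beta)                                    # P(strength > load)

for t in (0, 5, 10, 15, 20):
    print(t, round(reliability(t), 4))                  # reliability drops with age
```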
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

林達明 e Daming Lin. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Brand, W. W. (Willem Wouter). "Reliability assessment of a prestressed concrete member". Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52430.

Texto completo da fonte
Resumo:
Thesis (MScEng)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: First-order second-moment structural reliability methods are used to assess the reliability of a prestressed concrete beam. This beam was designed for imposed office floor loads and partitions following the limit states design method as provided for by the applicable South African structural codes, viz SABS 0100-1:1992 and SABS 0160:1989. The reliability is examined at two limit states. At the ultimate limit state of flexure the ultimate moment of resistance must exceed the applied external moment at the critical section, while at the serviceability limit state of deflection the deflection must satisfy the code-specified deflection criteria. Realistic theoretical models are selected to express the flexural strength and deflection of the prestressed concrete member, while appropriate probabilistic models are gathered from the literature for loading, resistance and modelling uncertainties. The calculated reliability index at the ultimate limit state of flexure (3.10) is lower than expected in view of the fact that this represents a non-critical limit state in the case of a Class 2 prestressed concrete member. This condition can be explained with reference to the relatively high uncertainty associated with the modelling error for flexural strength. The calculated reliability index at the serviceability limit state of deflection (1.67) compares well with acceptable practice. The study further focuses on the sensitivity of the reliability at the two limit states of interest to uncertainty in the various design parameters. The ultimate limit state of flexure is dominated by the uncertainty associated with the modelling error for flexural strength, while the contribution to the overall uncertainty of the ultimate strength and area of the prestressing steel and the effective depth is less significant. In comparison, the reliability at the serviceability limit state of deflection is not dominated by the uncertainty associated with a single basic variable. Instead, the uncertainties associated with the modelling error, creep factor and prestress loss factor are all significant. It was also demonstrated that the variability in beam stiffness is not a major source of uncertainty in the case of a Class 2 prestressed concrete member. It is recommended that the present code provisions for ultimate strength and deflection should be reviewed to formulate theoretical models with reduced systematic and random errors. The effect of the uncertainty associated with the creep and prestress loss factors should also be addressed by adjustment of the partial material factor for concrete at the serviceability limit state of deflection. Furthermore, research must be directed towards formulating an objective failure criterion for deflection. The uncertainty in the deflection limit must therefore be quantified with a probability distribution.
AFRIKAANSE OPSOMMING: Eerste-orde tweede-moment struktuur betroubaarheid metodes word ingespan om die betroubaarheid van 'n voorspanbeton balk te bereken. Hierdie balk is ontwerp vir opgelegte kantoor vloerbelasting en partisies volgens die grenstoestand ontwerp metode soos beskryf in die toepaslike Suid-Afrikaanse boukodes, naamlik SABS 0100-1: 1992 en SABS 0160: 1989. Die betroubaarheid word ondersoek by twee grenstoestande. By die swiglimiet van buiging moet die weerstandsmoment die eksterne aangewende moment oorskrei by die kritieke balksnit, terwyl die defleksie die kriteria soos voorgeskryf deur die kode moet bevredig by die dienslimiet van defleksie. Realistiese teoretiese modelle word gebruik om die buigsterkte en defleksie van die voorspanbeton balk te bereken. Verder is geskikte waarskynlikheid modelle uit die literatuur versamelom die belasting, weerstand en modelonsekerhede te karakteriseer. Die betroubaarheid indeks soos bereken vir die swiglimiet van buiging (3.10) is laer as wat verwag sou word in die lig van die feit dat hierdie nie 'n kritieke grenstoestand verteenwoordig in die geval van 'n Klas 2 voorspan element nie. Dit kan verklaar word met verwysing na die relatiewe groot onsekerheid wat geassosieer word met die modellering fout vir buigsterkte. Die berekende betroubaarheid indeks vir die dienslimiet van defleksie (1.67) vergelyk goed met aanvaarde praktyk. Die studie fokus verder op die sensitiwiteit van die betroubaarheid by die twee grenstoestande onder beskouing ten opsigte van die onsekerheid in die verskillende ontwerp parameters. By die swiglimiet van buiging word die onsekerheid oorheers deur die bydrae van die modelering fout vir buigsterkte. Die bydraes tot die totale onsekerheid deur die swigsterkte en area van die voorspanstaal sowel as die effektiewe diepte is minder belangrik. By die dienslimiet van defleksie word die betroubaarheid nie oorheers deur die onsekerheid van 'n enkele basiese veranderlike nie. In stede hiervan is die onsekerheid van die modellerings fout, kruipfaktor en voorspan verliesfaktor almal noemenswaardig. Daar word verder aangetoon dat die veranderlikheid in balkstyfheid nie 'n belangrike bron van onsekerheid in die geval van 'n Klas 2 voorspan element is nie. Daar word aanbeveel dat die bestaande voorskrifte in die kode vir buigsterkte en defleksie aangespreek moet word deur teoretiese modelle met klein modelonsekerhede te formuleer. Die uitwerking van die onsekerheid van die kruip- en voorspan verliesfaktore kan aangespreek word deur 'n aanpassing te maak in die parsiële materiaalfaktor vir beton in die geval van die dienslimiet van defleksie. Navorsing moet verder daarop gemik wees om 'n objektiewe falingskriterium vir defleksie te formuleer. Die onsekerheid van die toelaatbare defleksie moet dus gekwatifiseer word deur 'n waarskynlikheidsverdeling.
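The sensitivity discussion in this abstract can be illustrated with first-order "importance factors": for a linear limit state with independent normal variables, each variable's share of the total uncertainty follows directly from its standard deviation. The sketch below uses a hypothetical resistance/dead-load/live-load moment balance, not the beam analysed in the thesis.

```python
# First-order second-moment beta and importance factors for a linear
# limit state g = M_R - M_D - M_L with independent normal variables
# (all moments are hypothetical examples).
import math

variables = {"M_R": (900.0, 90.0), "M_D": (350.0, 25.0), "M_L": (250.0, 60.0)}
coeff     = {"M_R": +1.0, "M_D": -1.0, "M_L": -1.0}    # g = M_R - M_D - M_L

mean_g = sum(coeff[v] * variables[v][0] for v in variables)
sd_g   = math.sqrt(sum((coeff[v] * variables[v][1]) ** 2 for v in variables))
beta   = mean_g / sd_g

# alpha_i^2 gives each variable's contribution to the total uncertainty
alphas = {v: coeff[v] * variables[v][1] / sd_g for v in variables}
print(f"beta = {beta:.2f}")
for v, a in alphas.items():
    print(f"  alpha_{v} = {a:+.2f}  (contribution {a*a:.0%})")
```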
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Hassett, Thomas Francis. "Availability and reliability engineering design considerations for assembly line manufacturing systems". Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185311.

Texto completo da fonte
Resumo:
Manufacturing facilities employ various types of transfer lines and networks with workstations and buffers. This approach promotes the production and fabrication of multicomponent equipment and systems. Analysis of these lines requires the application of discrete-time Markov chain methods. These methods, when computerized, present certain problems concerning the data storage of large sparse transition matrices. Repetitive multiplication techniques were used to provide the general Markov chain solution for a series transfer line. These solutions were then computerized to evaluate the series line's availability trajectory. The limiting (leveling off) point for each trajectory provided the steady state availability. From these solutions the work then focuses on the development of new computer algorithms for the series transfer line configuration. These algorithms employ advanced techniques to minimize the storage of large sparse vectors and matrices while maintaining relatively fast computational times. The algorithms rely on decomposition of the line's transition matrix via graph theoretic methods. A set of library functions was specially written in the C language to manipulate the Markov chain matrix and vector data. An extensive set of results was analyzed for the three- and four-workstation series transfer lines. This analysis employed linear model regression techniques. Results were also collected for the five-workstation line. These results show a marked improvement in overall availability when the line's last workstation has a high reliability. In addition, preliminary results indicate that the overall availability of three- and four-workstation series lines is a linear combination of the individual workstation availabilities. Finally, proposed topics for future research are presented in eight major areas. These topics include the development of models for parallel-series, series-parallel, feedback control, assembly, and disassembly type lines. Also, approximation models and decomposition methods are described in detail.
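The availability-trajectory idea described above can be illustrated with a tiny discrete-time Markov chain: two workstations in series, each with its own per-step failure and repair probabilities, where the line is up only when both are up. This ignores buffers and blocking, which the dissertation's models include, and the probabilities are invented.

```python
# Repeated multiplication of a small transition matrix: the availability
# trajectory levels off at the steady-state value (illustrative numbers).
import numpy as np

def station_matrix(p_fail, p_repair):
    # states: 0 = up, 1 = down
    return np.array([[1 - p_fail, p_fail],
                     [p_repair, 1 - p_repair]])

P1 = station_matrix(0.02, 0.30)
P2 = station_matrix(0.05, 0.25)
P  = np.kron(P1, P2)              # joint 4-state chain for independent stations

state = np.array([1.0, 0.0, 0.0, 0.0])       # start with both stations up
for step in range(1, 201):
    state = state @ P
    if step % 50 == 0:
        print(step, round(state[0], 4))      # P(both up) -> steady-state availability
```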
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Huber, U. A. "Reliability of reinforced concrete shear resistance". Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50435.

Texto completo da fonte
Resumo:
Thesis (MScEng)--University of Stellenbosch, 2005.
ENGLISH ABSTRACT: The lack of a simple rational mechanical model for the shear resistance behaviour of structural concrete members results in the use of simplified empirical methods in codified shear design methods with a limited range of applicability. This may lead on the one hand to insufficient reliability for members on the boundary of the range of applicability and on the other hand to over-conservative designs. Comparison of the provision for shear resistance design of the South African code of practice for the design of concrete structures SANS 10100: 2003 with other related codes shows differences in the design variables taken into account and procedures specified to calculate shear resistance. The thesis describes a systematic evaluation of the reliability performance of the shear resistance of reinforced concrete sections subjected to shear only, and in combination with flexural moments, designed with SANS 10100: 2003. Both sections with and without provision for shear reinforcement are considered. A representative range of parametric conditions is considered in the evaluation. Punching shear is not considered in the present review. Shear design as specified by SANS 10100 is compared to the provisions of the closely related British code for the structural use of concrete BS 8110, Eurocode 2 for the design of concrete structures EN 1992 and the American bridge design code AASHTO LRFD. The reliability performance of the SANS shear design method for beams is considered in terms of a probabilistic shear resistance model, uncertainties in the basic variables such as material properties and geometry, and modelling uncertainty. Modelling uncertainty is determined by comparing predicted values with published experimental results. Keywords: structural concrete; shear resistance; shear design; reliability; design codes; code comparison
AFRIKAANSE OPSOMMING: Die tekortkoming van eenvoudige rasionele modelle vir skuif gedrag van strukturele gewapende beton lei tot die gebruik van vereenvoudigde empiriese metodes in gekodifiseerde skuif ontwerp met 'n beperkte omvang van gebruik. Dit mag lei tot onvoeldoende betroubaarheid vir ontwerp situasies, maar ook tot oorkonserwatiewe ontwerpe. Vergelyking van voorsienings vir skuifweerstand ontwerp in die SANS beton kode, SANS 10100: 2003 en ander verwante kodes toon verskille in ontwerp veranderings en metodes aan vir die berekening van skuifweerstand. Hierdie tesis beskryf die stelselmatige bepaling van betroubaarheids prestasie van die skuifgedrag van gewapende beton snitte ontwerp volgens SANS. Beide snitte met en sonder skuifbewapening word behandel. 'n Verteenwoordigende bestek van skuif ontwerp parameters word in ag geneem in die beoordeling van die betroubaarheid. Pons skuifword nie hier in ag geneem nie. Skuif ontwerp soos voorgeskryf deur SANS 10100 word verlyk met die ontwerp methodes van die Britse beten kode, BS 8110, die Europese beton kode, Euronorm Eurocode 2 en die Amerikaanse brug kode AASHTO LRFD. Die betroubaarheids prestasie van die skuif ontwerp metode vir SANS word bepaal deur middel van 'n probablistiese skuif ontwerp model. Modelonsekerheid is vir die doeleindes bepaal deur vergelyking met gepubliseerde eksperimentele resultate. Sleutelwoorde: strukturele beton; skuifweerstand; skuif ontwerp; betroubaarheid; ontwerp kodes; kode vergelyking.
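The model-uncertainty step mentioned in the abstract is often summarised by the statistics of the measured-to-predicted resistance ratio. The sketch below shows that step with invented data points; the published test series actually used in the thesis are not reproduced here.

```python
# Model factor from test/predicted ratios: bias, coefficient of variation,
# and equivalent lognormal parameters (all data points are placeholders).
import math
import statistics

v_test      = [210.0, 185.0, 240.0, 160.0, 205.0, 230.0]   # kN (hypothetical)
v_predicted = [195.0, 190.0, 215.0, 170.0, 185.0, 205.0]   # kN (hypothetical)

ratios = [t / p for t, p in zip(v_test, v_predicted)]
bias   = statistics.mean(ratios)                  # mean model factor
cov    = statistics.stdev(ratios) / bias          # coefficient of variation

# A common choice is to model the ratio as lognormal with these moments.
zeta = math.sqrt(math.log(1.0 + cov**2))          # lognormal standard deviation
lam  = math.log(bias) - 0.5 * zeta**2             # lognormal mean parameter
print(f"bias = {bias:.3f}, COV = {cov:.3f}, lognormal(lam={lam:.3f}, zeta={zeta:.3f})")
```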
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Muller, Cole. "Reliability analysis of the 4.5 roller bearing". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FMuller.pdf.

Texto completo da fonte
Resumo:
Thesis (M.S. in Applied Science (Operations Research))--Naval Postgraduate School, June 2003.
Thesis advisor(s): David H. Olwell, Samuel E. Buttrey. Includes bibliographical references (p. 65). Also available online.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Lee, Joo-Sung. "Reliability analysis of continuous structural systems". Thesis, University of Glasgow, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299455.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Hashimoto, Mitsuyuki. "Vulnerability and reliability of structural systems". Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261335.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Wang, Jia. "Reliability analysis and reliability-based optimal design of linear structures subjected to stochastic excitations /". View abstract or full-text, 2010. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202010%20WANG.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Terrier, Viktor. "North European Power Systems Reliability". Thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202581.

Texto completo da fonte
Resumo:
The North European power system (Sweden, Finland, Norway, Denmark, Estonia, Latvia and Lithuania) is facing changes in its electricity production. The increasing share of intermittent power sources, such as wind power, makes the production less predictable. The decommissioning of large plants, for environmental or market reasons, leads to a decrease of production capacity while the demand can increase, which is detrimental to the power system reliability. Investments in interconnections and new power plants can be made to strengthen the system. Evaluating the reliability becomes essential to determine the investments that have to be made. For this purpose, a model of the power system is built. The power system is divided into areas, where the demand, interconnections between areas, and intermittent generation are represented by Cumulative Distribution Functions (CDF); while conventional generation plants follow a two-state behaviour. Imports from outside the system are set equal to their installed capacity, with considering that the neighbouring countries can always provide enough power. The model is set up by using only publicly available data. The model is used for generating numerous possible states of the system in a Monte Carlo simulation, to estimate two reliability indices: the risk (LOLP) and the size (EPNS) of a power deficit. As a power deficit is a rare event, an excessively large number of samples is required to estimate the reliability of the system with a sufficient confidence level. Hence, a pre-simulation, called importance sampling, is run beforehand in order to improve the efficiency of the simulation. Four simulations are run on the colder months (January, February, March, November, December) to test the reliability of the current system (2015) and of three future scenarios (2020, 2025 and 2030). The tests point out that the current weakest areas (Finland and Southern Sweden) are also the ones that will face nuclear decommissioning in years to come, and highlight that the investments in interconnections and wind power considered in the scenarios are not sufficient to maintain the current reliability levels. If today’s reliability levels are considered necessary, then possible solutions include more flexible demand, higher production and/or more interconnections.
Det nordeuropeiska elsystemet (Sverige, Finland, Norge, Danmark, Estland, Lettland och Litauen) står inför förändringar i sin elproduktion. Den ökande andelen intermittenta kraftkällor, såsom vindkraft, gör produktionen mindre förutsägbar. Avvecklingen av stora anläggningar, av miljö- eller marknadsskäl, leder till en minskning av produktionskapaciteten, medan efterfrågan kan öka, vilket är till nackdel för kraftsystemets tillförlitlighet. Investeringar i sammankopplingar och i nya kraftverk kan göras för att stärka systemet. Utvärdering av tillförlitligheten blir nödvändigt för att bestämma vilka investeringar som behövs. För detta ändamål byggs en modell av kraftsystemet. Kraftsystemet är uppdelat i områden, där efterfrågan, sammankopplingar mellan områden, och intermittent produktion representeras av fördelningsfunktioner; medan konventionella kraftverk antas ha ett två-tillståndsbeteende. Import från länder utanför systemet antas lika med deras installerade kapaciteter, med tanke på att grannländerna alltid kan ge tillräckligt med ström. Modellen bygger på allmänt tillgängliga uppgifter. Modellen används för att generera ett stort antal möjliga tillstånd av systemet i en Monte Carlo-simulering för att uppskatta två tillförlitlighetsindex: risken (LOLP) och storleken (EPNS) av en effektbrist. Eftersom effektbrist är en sällsynt händelse, krävs ett mycket stort antal tester av olika tillstånd i systemet för att uppskatta tillförlitligheten med en tillräcklig konfidensnivå. Därför utnyttjas en för-simulering, kallad ”Importance Sampling”, vilken körs i förväg i syfte att förbättra effektiviteten i simuleringen. Fyra simuleringar körs för de kallare månaderna (januari, februari, mars, november, december) för att testa tillförlitligheten i nuvarande systemet (2015) samt för tre framtidsscenarier (2020, 2025 och 2030). Testerna visar att de nuvarande svagaste områdena (Finland och södra Sverige) också är de som kommer att ställas inför en kärnkraftsavveckling under de kommande åren. De indikerar även att planerade investeringar i sammankopplingar och vindkraft i scenarierna inte är tillräckliga för att bibehålla de nuvarande tillförlitlighetsnivåerna. Om dagens tillförlitlighetsnivåer antas nödvändiga, så inkluderar möjliga lösningar mer flexibel efterfrågan, ökad produktion och/eller fler sammankopplingar.
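The two indices estimated in this thesis, LOLP and EPNS, can be illustrated with a single-area Monte Carlo sketch. The real model is multi-area, uses empirical CDFs for demand, interconnections and wind, and applies importance sampling; the plant list, wind model and load model below are invented for illustration only.

```python
# Crude single-area Monte Carlo estimate of LOLP and EPNS
# (all capacities, outage rates and load parameters are assumed).
import random

random.seed(0)
plants = [(1000, 0.05), (800, 0.04), (600, 0.08), (400, 0.06)]  # (MW, forced outage rate)
wind_capacity = 500          # MW, output drawn uniformly as a crude stand-in for a CDF

def sample_margin():
    conventional = sum(cap for cap, q in plants if random.random() > q)
    wind = random.uniform(0.0, wind_capacity)
    load = random.gauss(2300.0, 250.0)       # assumed winter-hour load (MW)
    return conventional + wind - load

n = 200_000
deficits = [max(0.0, -sample_margin()) for _ in range(n)]
lolp = sum(d > 0 for d in deficits) / n      # probability of a power deficit
epns = sum(deficits) / n                     # expected power not supplied (MW)
print(f"LOLP = {lolp:.4f}, EPNS = {epns:.1f} MW")
```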
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Astley, Kenneth Richard. "A systems engineering approach to servitisation system modelling and reliability assessment". Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8298.

Texto completo da fonte
Resumo:
Companies are changing their business model in order to improve their long-term competitiveness. Where once they provided only products, they now provide a service with that product, resulting in a reduced cost of ownership. Such a business case benefits both customer and service supplier only if the availability of the product, and hence the service, is optimised. For highly integrated product and service offerings this means it is necessary to assess the reliability monitoring service which underpins service availability. Reliability monitoring service assessment requires examination of not only product monitoring capability but also the effectiveness of the maintenance response prompted by the detection of fault conditions. In order to address these seemingly dissimilar aspects of the reliability monitoring service, a methodology is proposed which defines core aspects of both the product and the service organisation. These core aspects provide a basis from which models of both the product and the service organisation can be produced. The models themselves, though not functionally representative, portray the primary components of each type of system, the ownership of these system components and how they are interfaced. These system attributes are then examined to establish system risk to reliability by inspection, evaluation of the model or reference to model source documentation. The result is a methodology that can be applied to such large-scale, highly integrated systems at either an early stage of development or in later development stages. The methodology will identify weaknesses in each system type, indicating areas which should be considered for system redesign, and will also help inform the analyst of whether or not the reliability monitoring service as a whole meets the requirements of the proposed business case.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Garbuno, Inigo A. "Stochastic methods for emulation, calibration and reliability analysis of engineering models". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3026757/.

Texto completo da fonte
Resumo:
This dissertation examines the use of non-parametric Bayesian methods and advanced Monte Carlo algorithms for the emulation and reliability analysis of complex engineering computations. Firstly, the problem lies in reducing the computational cost of such models and generating posterior samples for the Gaussian Process (GP) hyperparameters. In a GP, as the flexibility of the mechanism used to induce correlations among training points increases, the number of hyperparameters increases as well. This leads to multimodal posterior distributions. Typical variants of MCMC samplers are not designed to overcome multimodality. Maximum posterior estimates of hyperparameters, on the other hand, do not guarantee a global optimiser. This presents a challenge when emulating expensive simulators in light of small data. Thus, new MCMC algorithms are presented which allow the use of full Bayesian emulators by sampling from their respective multimodal posteriors. Secondly, in order for these complex models to be reliable, they need to be robustly calibrated to experimental data. History matching solves the calibration problem by discarding regions of the input parameter space. This allows one to determine which configurations are likely to replicate the observed data. In particular, the GP surrogate model's probabilistic statements are exploited, and the data assimilation process is improved. Thirdly, as sampling-based methods are increasingly being used in engineering, variants of sampling algorithms for other engineering tasks, namely reliability-based methods, are studied. Several new algorithms to solve these three fundamental problems are proposed, developed and tested in both illustrative examples and industrial-scale models.
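The history-matching step can be sketched with the standard implausibility measure: a candidate input is discarded when the emulator's prediction is too many standard deviations from the observation. The "emulator" below is a hypothetical stand-in returning a mean and variance; in the thesis that role is played by a Gaussian process fitted to simulator runs, and all variances and cut-offs here are assumptions.

```python
# One history-matching wave: rule out inputs whose implausibility exceeds 3.
import math
import random

z_obs     = 1.25      # observed value (assumed)
var_obs   = 0.05**2   # observation error variance (assumed)
var_model = 0.10**2   # model discrepancy variance (assumed)

def emulator(x):
    # hypothetical emulator of an expensive simulator f(x):
    # returns (predicted mean, predictive variance)
    mean = math.sin(3.0 * x) + x
    var  = 0.02 + 0.05 * abs(x - 0.5)      # larger uncertainty away from the data
    return mean, var

def implausibility(x):
    m, v = emulator(x)
    return abs(z_obs - m) / math.sqrt(v + var_obs + var_model)

random.seed(2)
candidates = [random.uniform(0.0, 2.0) for _ in range(2000)]
not_ruled_out = [x for x in candidates if implausibility(x) <= 3.0]
print(f"{len(not_ruled_out)} of {len(candidates)} points survive this wave")
```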
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

LEE, SEUNG JOO. "RELIABILITY-BASED OPTIMAL STRUCTURAL AND MECHANICAL DESIGN". Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184136.

Texto completo da fonte
Resumo:
Structural reliability technology provides analytical tools for management of uncertainty in all relevant design factors in structural and mechanical systems. Generally, the goal of analysis is to compute probabilities of failure in structural components or systems having single or multiple failure modes. Alternatively, modern optimization methods provide efficient numerical algorithms for locating optima, particularly in large-scale systems having prescribed deterministic constraints. An optimization procedure can accommodate random variables either directly in its objective function or as one of the primary constraints. The combination of elementary optimization and probabilistic design techniques is the subject of this study. Presented herein is a general strategy for optimization when the design factors are random variables and some or all of the constraints are probability statements. A literature review has indicated that optimization technology in a reliability context has not been fully explored for the general case of nonlinear performance functions and nonnormal variates associated with multiple failure modes. This research focuses upon development of the theory to address this general problem. Because the analysis algorithms are complicated, a computer code, program RELOPT, is constructed to automate the analysis. The objective function to be minimized is arbitrary, but would generally be the total expected lifetime costs including all initial costs as well as all costs associated with failure. Uncertainty is assumed to be possible in all design factors (including the factors to be determined), and they are modeled as random variables. In general, all of the constraints can be probability statements. The generalized reduced gradient (GRG) method was used for optimization calculations. Options for point probability calculations are first-order reliability analysis using the Rackwitz-Fiessler (R-F) method or advanced reliability analysis using Wu/FPI. For system reliability analysis, either the first-order Cornell bounds or the second-order Ditlevsen bounds can be specified. Several examples are presented to illustrate the full range of capabilities of RELOPT. The program is validated by checking against independent and exact solutions. An example is provided which demonstrates that the cost of running RELOPT can be substantial as the size of the problem increases.
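As a reminder of what the first-order system bounds mentioned above look like, the sketch below evaluates the simple series-system bounds: the system failure probability is at least the largest single-mode probability and at most the sum of the mode probabilities (Boole's bound). The mode probabilities are illustrative values, and the tighter second-order Ditlevsen bounds (which need pairwise joint probabilities) are not shown.

```python
# Simple first-order bounds on the failure probability of a series system
# (illustrative mode probabilities; valid for any dependence between modes).
p_modes = [1.2e-3, 4.0e-4, 2.5e-3]       # individual failure-mode probabilities

lower = max(p_modes)                      # union probability >= largest single mode
upper = min(1.0, sum(p_modes))            # Boole's union bound
print(f"{lower:.2e} <= Pf,system <= {upper:.2e}")
```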
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Jenab, Kouroush. "Stochastic and fuzzy analyses in reliability design". Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/29222.

Texto completo da fonte
Resumo:
The risk analysis process, involving information acquisition, modeling, analysis and decision steps, results in product design improvement. To perform product risk assessment, this study addresses stochastic and fuzzy analyses in reliability design. Using decision-making techniques and the flow-graph concept, the main objective of this study is to develop analytical models with time-varying input data and/or fuzzy input data for reliability techniques. The models (i.e., Graph-based failure effects analysis, Group-based failure effects analysis, Imprecise-chance Markov chains, Fuzzy and stochastic fault tree analysis, Binary k-out-of-n system with self-loop units, Reversible multi-state k-out-of-n:G/F/Load sharing system, and Imprecise-chance reliability estimation) incorporate stochastic self-healing mechanisms represented by a self-loop graph and/or a conflict resolution approach. The stochastic models developed in this study compute Time-To-Event/State data made up of the probability of system failure and the mean and standard deviation of time to an event/state. To identify, prioritize and eliminate potential failures in the system, the fuzzy models presented in this study introduce aggregated/compensated approaches for mitigating conflicts in input data. The applications of the stochastic and fuzzy models are demonstrated through practical examples. Using typical, practical, and extreme values of the basic parameters of the models and performing sensitivity analysis, the end results demonstrate the robustness and conflict resolution capability of the models.
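One of the basic building blocks named in this abstract, the k-out-of-n:G system, has a simple closed form for identical, independent units, sketched below; the dissertation's models add self-loop units, load sharing and fuzzy inputs, which this snippet ignores, and the unit reliability is an arbitrary example value.

```python
# Reliability of a k-out-of-n:G system of identical, independent units:
# the system works if at least k of the n units work.
from math import comb

def k_out_of_n_reliability(k, n, p):
    # sum of binomial terms for "at least k successes out of n"
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(k_out_of_n_reliability(k=2, n=3, p=0.9))   # 2-out-of-3 with unit reliability 0.9 -> 0.972
```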
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Pennisi, Frank Joseph. "Design of a high reliability transport mechanism". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35982.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
