Doctoral dissertations on the topic "Reliability"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Consult the 50 best doctoral dissertations for your research on the topic "Reliability."

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these details are provided in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Mafu, Masakheke. "Reliability analysis: assessment of hardware and human reliability". Thesis, Rhodes University, 2017. http://hdl.handle.net/10962/6280.

Full text available
Abstract:
Most reliability analyses involve the analysis of binary data. Practitioners in the field of reliability place great emphasis on analysing the time periods over which items or systems function (failure time analyses), which makes use of different statistical models. This study introduces, reviews and investigates four statistical models for modelling the failure times of non-repairable items, using a Bayesian methodology. The exponential, Rayleigh, gamma and Weibull distributions are considered; the performance of two non-informative priors is investigated; and an application of two failure time distributions is carried out. To meet these objectives, the failure rate and reliability functions of the failure time distributions are calculated. Two non-informative priors, the Jeffreys prior and the general divergence prior, and the corresponding posteriors are derived for each distribution. Simulation studies are carried out for each distribution, in which coverage rates and credible interval lengths are calculated and discussed. The gamma and Weibull distributions are applied to failure time data. The Jeffreys prior is found to have a better coverage rate than the general divergence prior: the general divergence prior shows undercoverage when used with the Rayleigh distribution, while the Jeffreys prior produces conservative coverage rates when used with the exponential distribution. On average the two priors give similar interval lengths, which increase as the value of the parameter increases, and both perform similarly when used with the gamma and Weibull distributions. A thorough discussion and review of human reliability analysis (HRA) techniques is also provided: twenty HRA techniques are discussed, with a background, description, and advantages and disadvantages given for each. Case studies from the nuclear, railway and aviation industries are presented to show the importance and applications of HRA. Human error has been shown to be the major contributor to system failure.
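As background for the functions this abstract refers to, here is a reference sketch (standard textbook forms, not taken from the thesis itself) of the reliability and failure-rate functions for two of the four distributions studied:

```latex
% Exponential with rate \lambda:
R(t) = e^{-\lambda t}, \qquad h(t) = \frac{f(t)}{R(t)} = \lambda .
% Weibull with shape \beta and scale \eta:
R(t) = \exp\!\bigl[-(t/\eta)^{\beta}\bigr], \qquad
h(t) = \frac{\beta}{\eta}\Bigl(\frac{t}{\eta}\Bigr)^{\beta-1}.
```

The constant hazard of the exponential model is the Weibull special case β = 1, which is one reason these distributions are naturally studied together.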
2

Chua, See Ju. "On the reliability of Type II censored reliability analyses". Thesis, Swansea University, 2009. https://cronfa.swan.ac.uk/Record/cronfa42621.

Full text available
Abstract:
This thesis considers the analysis of reliability data subject to censoring and, in particular, the extent to which an interim analysis (here based on Type II censoring) provides a guide to the final analysis. Under Type II censored sampling, a random sample of n units is put on test simultaneously, and the test is terminated as soon as r (1 ≤ r ≤ n, although we are usually interested in r < n) failures are observed. In the case where all test units are observed to fail (r = n), the sample is complete. From a statistical perspective, the analysis of the complete sample is to be preferred, but in practice censoring is often necessary; such a sampling plan can save money and time, since it can take a very long time for all units to fail. From a practical perspective, an experimenter may wish to know the smallest number of failures at which the experiment can be reasonably or safely terminated, with the interim analysis still providing a close and reliable guide to the analysis of the final, complete data. In this thesis, we aim to gain more insight into the roles of the censoring number r and the sample size n under this sampling plan. Our approach requires a method to measure the precision of a Type II censored estimate, calculated at censoring level r, in estimating the complete estimate, and hence the study of the relationship between interim and final estimates. For simplicity, we assume that the lifetimes follow the exponential distribution, and then adapt the methods to the two-parameter Weibull and Burr Type XII distributions, both of which are widely used in reliability modelling. We start by presenting some mathematical and computational methodology for estimating model parameters and percentile functions by the method of maximum likelihood. Expressions for the asymptotic variances and covariances of the estimators are given. In practice, some indication of the likely accuracy of these estimates is often desired; the theory of asymptotic Normality of maximum likelihood estimators is convenient, but we also consider the use of relative likelihood contour plots to obtain approximate confidence regions for parameters in relatively small samples. Finally, we provide formulae for the correlations between the interim and final maximum likelihood estimators of model parameters and of a particular percentile function, and discuss some practical implications of our work, based on results obtained from published data and simulation experiments.
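For the exponential case that the thesis starts from, the interim (Type II censored) estimate has a closed form, which makes the interim-versus-final comparison easy to sketch. The snippet below is a generic illustration under that assumption; the variable names and simulated data are mine, not the author's:

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_rate_mle_type2(times, r):
    """MLE of the exponential failure rate from a Type II censored test.

    times: array of all n (eventual) failure times; r: number of failures
    at which the test is stopped. Only the r smallest times, plus the fact
    that n - r units survived past the r-th failure, are used.
    """
    t = np.sort(times)
    n = len(t)
    total_time_on_test = t[:r].sum() + (n - r) * t[r - 1]
    return r / total_time_on_test

n, true_rate = 50, 0.2
sample = rng.exponential(1 / true_rate, size=n)
for r in (10, 25, 50):                   # interim levels; r = n is complete
    print(r, round(exp_rate_mle_type2(sample, r), 4))
```

Comparing the r = 10 and r = 25 outputs with the complete-sample (r = 50) estimate mimics, in miniature, the thesis's question of when an interim analysis is a reliable guide to the final one.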
3

Brunelle, Russell Dedric. "Customer-centered reliability measures for flexible multistate reliability models /". Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/10691.

Full text available
4

Saini, Gagandeep Singh. "Reliability-based design with system reliability and design improvement". Diss., Rolla, Mo. : Missouri University of Science and Technology, 2009. http://scholarsmine.mst.edu/thesis/pdf/Saini_09007dcc8070d586.pdf.

Full text available
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2009.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed November 23, 2009). Includes bibliographical references (p. 66-68).
5

Unlusoy, Ozlem. "Reliability Analysis Process and Reliability Improvement of an Inertial Measurement Unit (IMU)". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612387/index.pdf.

Full text available
Abstract:
Reliability is one of the most critical performance measures of guided missile systems and is directly related to missile mission success. In order to achieve a high reliability value, reliability analysis should be carried out at all phases of system design; doing so helps the designer make reliability-related design decisions in time and update the system design. In this study, the reliability analysis process performed during the conceptual design phase of a Medium Range Anti-Tank Missile System Inertial Measurement Unit (IMU) is introduced. From the reliability requirement desired for the system, an expected IMU reliability value was derived using reliability allocation methods. A reliability prediction for the IMU was then calculated using the Relex software, and the allocated and predicted reliability values were compared. The predicted reliability value of the IMU did not meet the required value; therefore, a reliability improvement analysis was carried out.
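The abstract does not say which allocation method was used, and several exist. As a generic illustration only (with invented numbers), equal apportionment for a series system assigns each of k subsystems the k-th root of the system target:

```python
def equal_apportionment(system_target: float, k: int) -> float:
    """Reliability each of k series subsystems must achieve so that the
    product of the k subsystem reliabilities meets the system target."""
    return system_target ** (1.0 / k)

# Hypothetical example: a system target of 0.95 spread over 6 subsystems,
# one of which would be the IMU (illustrative numbers, not the thesis's).
r_sub = equal_apportionment(0.95, 6)
print(round(r_sub, 4))       # ~0.9915 per subsystem
print(round(r_sub ** 6, 2))  # recovers the 0.95 system target
```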
6

Robinson, David Gerald. "Modeling reliability improvement during design (reliability growth, Bayes, nonparametric)". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183971.

Full text available
Abstract:
Past research into the phenomenon of reliability growth has emphasised modeling a major reliability characteristic in terms of a specific parametric function, and the time-to-failure distribution of the system was generally assumed to be exponential. The result was that in most cases the improvement was modeled as a nonhomogeneous Poisson process with intensity λ(t), with the major differences among models centering on the particular functional form of the intensity function. The popular Duane model, for example, assumes that λ(t) = β(1 − α)t^(−α). The inability of any one family of distributions or parametric form to describe the growth process resulted in a multitude of models, each directed toward answering problems encountered with a particular test situation. This thesis proposes two new growth models, neither requiring the assumption of a specific function to describe the intensity λ(t). The first of the models only requires that the time-to-failure distribution be unimodal and that the reliability become no worse as development progresses. The second model, while requiring the assumption of an exponential failure distribution, remains significantly more flexible than past models. Major points of this Bayesian model include: (1) the ability to incorporate data from a number of test sources (e.g. engineering judgement, CERT testing, etc.), (2) the assumption that the failure intensity is stochastically decreasing, and (3) accountability for changes that are incorporated into the design after testing is completed. These models were compared to a number of existing growth models and found to be consistently superior in terms of relative error and mean-square error. An extension to the second model is also proposed that allows system-level growth analysis to be accomplished based on subsystem development data. This is particularly significant in that, as systems become larger and more complex, development efforts concentrate on subsystem levels of design; no analysis technique currently exists that has this capability. The methodology is applied to data sets from two actual test situations.
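With the intensity quoted above, the expected cumulative number of failures by time t follows in one line (a consistency check using only the formula in the abstract, for 0 < α < 1):

```latex
N(t) = \int_0^t \lambda(u)\,du
     = \beta(1-\alpha)\int_0^t u^{-\alpha}\,du
     = \beta\, t^{\,1-\alpha},
```

so the cumulative failure count is linear in t on a log-log plot, which is the classic Duane learning-curve signature that the nonparametric models in this dissertation avoid assuming.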
7

Kaya, Deniz. "Software Reliability Assessment". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606466/index.pdf.

Full text available
Abstract:
Although software reliability studies attracted a great deal of attention from different disciplines in the 1970s, the subject has rarely been applied in the software industry. With technological advances, especially in the field of military electronics, the reliability of software systems has gained importance. In this study, a defense-industry company is assessed for its capabilities and needs regarding software reliability, and an improvement proposal with a metrics measurement system is formulated. A computer tool is developed to evaluate the performance of the improvement proposal. Results obtained via this tool indicate improved capabilities in the development of reliable software products.
8

Chan, Pee Yuaw. "Software reliability prediction". Thesis, City, University of London, 1986. http://openaccess.city.ac.uk/18127/.

Full text available
Abstract:
Two methods are proposed to find the maximum likelihood parameter estimates of a number of software reliability models. On the basis of the results from analysing 7 sets of real data, these methods are found to be both efficient and reliable. The simple approach of adapting software reliability predictions by Keiller and Littlewood (1984) can produce improved predictions but, at the same time, introduces a lot of internal noise into the adapted predictions; this is due to the fact that the adaptor is a joined-up function. An alternative adaptive procedure, which involves a parametric spline adaptor, can produce adapted predictions that are at least as good, without the internal noise that contaminates the simple approach. Miller and Sofer (1986a) proposed a method for estimating the failure rate of a program non-parametrically. Here, these non-parametric rates are used to produce reliability predictions, and their quality is analysed and compared with that of the parametric predictions.
9

Bhusal, Prabodh. "Distribution reliability analysis". Thesis, Wichita State University, 2007. http://hdl.handle.net/10057/1532.

Full text available
Abstract:
This thesis presents an example of optimizing the distribution maintenance scheduling of a recloser, applying a risk-reduction technique associated with maintenance of the equipment. Furthermore, it examines how various distribution system designs, including distributed energy resources (DER), affect the distribution reliability indices System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI).
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
"December 2007."
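For readers new to the indices named above, SAIDI and SAIFI are customer-weighted yearly averages. A minimal sketch with toy outage data of my own (not from the thesis):

```python
# Each outage event: (customers_interrupted, duration_hours) -- hypothetical
events = [(1200, 1.5), (300, 4.0), (5000, 0.25)]
customers_served = 20_000

saifi = sum(n for n, _ in events) / customers_served       # interruptions per customer
saidi = sum(n * d for n, d in events) / customers_served   # interruption hours per customer
caidi = saidi / saifi                                      # average restoration time, hours
print(f"SAIFI={saifi:.3f}, SAIDI={saidi:.3f} h, CAIDI={caidi:.2f} h")
```

Design changes such as adding DER enter these indices by reducing either the number of customers interrupted per event or the event durations.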
10

Wright, David R. "Software reliability prediction". Thesis, City University London, 2001. http://openaccess.city.ac.uk/8387/.

Full text available
Abstract:
This thesis presents some extensions to existing methods of software reliability estimation and prediction. Firstly, we examine a technique called 'recalibration', by means of which many existing software reliability prediction algorithms assess past predictive performance in order to improve the accuracy of current reliability predictions. This existing technique for forecasting future failure times of software is already quite general: whenever predictions are produced in the form of time-to-failure distributions, successively as more actual failure times are observed, recalibration can be applied irrespective both of which probabilistic software reliability model and of which statistical inference technique is being used. In the current work we further generalise the recalibration method to those situations where empirical failure data take the form of failure counts rather than precise inter-failure times. We then briefly explore how the reasoning used in this extension of recalibration to the prediction of failure-count sequences might extend further to the recalibration of other representations of predicted reliability. Secondly, the thesis contains a theoretical discussion of some modelling possibilities for improving software reliability predictions by the incorporation of disparate sources of data. There are well-established techniques for forecasting the reliability of a particular software product using as data only the past failure behaviour of that software under statistically representative operational testing. However, there may sometimes be reasons for seeking improved predictive accuracy by using data of other kinds too, rather than relying on this single source of empirical evidence. Notable among these is the economic impracticability, in many cases, of obtaining sufficient, representative software failure vs. time data (from execution of the particular product in question) to determine, by inference applied to software reliability growth models, whether or not a high reliability requirement has been achieved, prior to extensive operational use of the software. This problem arises in particular for safety-critical systems, whose required reliability is often extremely high and for which an accurate reliability assessment is often required in advance of a decision whether to release the software for actual use in the field. Another argument for attempting to identify other usable data sources for software reliability prediction is the value that would attach to rigorous empirical confirmation or refutation of any of the many existing theories and claims about what the factors of software reliability are, and how these factors may interact, in a given context. In those cases, such as some safety-critical systems, in which assessment of a high reliability level is required at an early stage, the necessary assessment is in practice often carried out rather informally, and often does claim to take account of many different types of evidence (experience of previous, similar systems; evidence of the efficacy of the development process; expert judgement, etc.) to supplement the limited available data on past failure vs. time behaviour which emanates from testing of the software within a realistic usage environment. Ideally, we would like this assessment to allow all such evidence to be combined into a final numerical measure of reliability in a scientifically more rigorous way.
To address these problems, we first examine some candidate general statistical regression models used in other fields such as medicine and insurance, and discuss how these might be applied to the prediction of software reliability; we have termed these explanatory variables regression models. The goal here would be to investigate statistically how to explain differences in software failure behaviour in terms of differences in other measured characteristics of a number of different statistical 'individuals', or 'experimental units'. We discuss the interpretation, within the software reliability context, of this statistical concept of an 'individual', with our favoured interpretation being that a single statistical reliability regression model would be used to model simultaneously a family of parallel series of inter-failure times emanating from measurably different software products or from measurably different installations of a single software product. In statistical regression terms, each of these distinct failure vs. time histories would be the 'response variable' corresponding to one of these 'individuals', and the other measurable differences between the individuals would be captured in the model as explanatory variable values differing from one individual to another. Following this discussion, we leave general regression models to examine a slightly different theoretical approach (to essentially the same question of how to incorporate diverse data within our predictions) through an examination of models for 'unexplained' differences between individuals' failure behaviours. Here, rather than assuming the availability of putative 'explanatory variables' to distinguish our statistical individuals and 'explain' the way that their reliabilities differ, we instead use randomness alone to model their differences in reliability. We have termed the class of models produced by this approach similar products models, meaning models in which we regard the individuals' different likely failure vs. time behaviours as a priori indistinguishable to us: either we cannot (or we choose not to attempt to) explain the differences between individuals' reliabilities in terms of other metrics applied to our individuals, but we still expect the 'similar products'' (i.e. the individuals') reliabilities to differ. We postulate the existence of a single probability distribution from which we may assume our individuals' true, unknown reliabilities to have all been drawn independently at random. We present some mathematical consequences, showing how, within such a modelling framework, prior belief about the distribution of reliabilities assumes great importance for model consequences. We also present some illustrative numerical results that suggest that experience from previous products or environments, so represented within the model, can only modestly improve our confidence in the reliability of a new product, or of an existing product transferred to a new environment, even where very high operational dependability has been achieved in those previous cases.
11

Ozkil, Altan. "Pyrotechnic device reliability". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/29482.

Full text available
Abstract:
Approved for public release; distribution is unlimited.
The Naval Weapons Support Center is planning to implement a bonus system to improve the reliability of pyrotechnic devices. The measure of effectiveness that they wish to use in determining how to award bonuses is the reliability of the devices. The data available to estimate this reliability come from the current sampling inspection plan, in which devices are tested in different environments. Models that include both dependence and independence assumptions between the outcomes of these tests are implemented, and estimates of overall reliability, along with 95% lower confidence bounds, are obtained; the lower confidence bounds are found by bootstrapping. Using these estimates, models for making the decision to award bonuses are discussed and studied using Monte Carlo simulation.
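Percentile bootstrapping of a lower confidence bound is simple to sketch. The following is a generic illustration with made-up pass/fail test data, not the thesis's dependence-aware models:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_lower_bound(successes, trials, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap 95% lower confidence bound on a
    pass/fail reliability estimate."""
    outcomes = np.r_[np.ones(successes), np.zeros(trials - successes)]
    boot_means = np.array([rng.choice(outcomes, size=trials).mean()
                           for _ in range(n_boot)])
    return np.quantile(boot_means, alpha)

print(bootstrap_lower_bound(successes=96, trials=100))  # point estimate 0.96
```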
12

Moss, T. R. "Rotating machinery reliability". Thesis, Loughborough University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311046.

Full text available
13

Steel, Donald. "Software reliability prediction". Thesis, Abertay University, 1990. https://rke.abertay.ac.uk/en/studentTheses/4613ff72-9650-4fa1-95d1-1a9b7b772ee4.

Full text available
Abstract:
The aim of the work described in this thesis was to improve NCR's decision-making process for progressing software products through the development cycle. The first chapter briefly describes the software development process at NCR, detailing documentation review and software testing techniques, and outlines the objectives of, and reasons for, investigating software reliability models as a tool in the decision-making process. There follows a short review of software reliability models, with the Littlewood and Verrall Bayesian model considered in detail. The difficulties in using this model to obtain estimates of model parameters and of the time to next failure are described. These estimation difficulties exist even when the model is used on good datasets, in this case simulated failure data, and they are compounded when it is used with real failure data. The problems of collecting and recording failure data are outlined, highlighting the inadequacies of the collected data, and real failure data are analysed. Software reliability models are used in an attempt to quantify the reliability of real software products. The thesis concludes by summarising the problems encountered when using reliability models to measure software products and suggests future research into the metrics required in this area of software engineering.
14

Young, Robert Benjamin. "Reliability Transform Method". Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/33824.

Full text available
Abstract:
Since the end of the Cold War the United States has been the single dominant naval power in the world, and the emphasis of the last decade has been on reducing cost while maintaining this status. As the Navy's infrastructure decreases, so too does its ability to be an active participant in all aspects of ship operations and design. One way that the Navy has achieved large savings is by using the Military Sealift Command to manage the day-to-day operations of the Navy's auxiliary and underway replenishment ships. While these ships are an active part of the Navy's fighting force, they are infrequently put into harm's way. The natural progression in the design of these ships is to have them fully classified under current American Bureau of Shipping (ABS) rules, as they closely resemble commercial ships. The first new design to be fully classed under ABS is the T-AKE. The Navy and ABS consider the T-AKE program a trial to determine whether a partnership between the two organizations can extend into the classification of all new naval ships. A major difficulty in this venture is how to translate the knowledge base which led to the development of current military specifications into rules that ABS can use for future ships. The specific task required by the Navy in this project is to predict the inherent availability of the new T-AKE class ship. To accomplish this task, the reliability of T-AKE equipment and machinery must be known. Under normal conditions, reliability data would be obtained from past ships with a similar mission, equipment and machinery. Due to the unique nature of the T-AKE acquisition, this is not possible. Because of the use of commercial off-the-shelf (COTS) equipment and machinery, military equipment and machinery reliability data cannot be used directly to predict T-AKE availability. This problem is compounded by the fact that existing COTS equipment and machinery reliability data developed in commercial applications may not be applicable to a military application. A method for deriving reliability data for commercial equipment and machinery adapted or used in military applications is therefore required. A Reliability Transform Method is developed that allows the interpolation of reliability data between commercial equipment and machinery operating in a commercial environment, commercial equipment and machinery operating in a military environment, and military equipment and machinery operating in a military environment. The reliability data for T-AKE are created using this Reliability Transform Method and the commercial reliability data, and are then used to calculate the inherent availability of T-AKE.
Master of Science
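The quantity the Navy asked for has a standard definition worth recalling here (a textbook identity, not a result of the thesis): inherent availability is the steady-state uptime fraction implied by the design's own failure and repair characteristics,

```latex
A_i = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}},
```

so the MTBF values produced by the Reliability Transform Method feed directly into the availability prediction.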
15

Wickstrom, Larry E. "Reliability of Electronics". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc700024/.

Full text available
Abstract:
The purpose of this research is not to develop new technology but to improve existing technology and to understand how the manufacturing process works. Reliability engineering falls under the category of quality control and uses predictions based on statistical measurement and life testing to determine whether a specific manufacturing technique will meet customer expectations. The research also addresses the choice of materials and of manufacturing processes needed to provide a device that will not only meet but exceed customer demand. Reliability engineering is one of the final testing phases of any new product development or redesign.
16

Bhusal, Prabodh Jewell Ward T. "Distribution reliability analysis /". Thesis, A link to full text of this thesis in SOAR, 2007. http://hdl.handle.net/10057/1532.

Full text available
17

Hui, Kin-Ping. "Network reliability estimation". Title page, table of contents and abstract only, 2005. http://hdl.handle.net/2440/37952.

Full text available
Abstract:
Computing the reliability of a network is a #P-complete problem; therefore, estimation by means of simulation often becomes the favourable choice. In modern communication networks, link failure probabilities are usually small, and network failures therefore become rare events. This poses a challenge for estimating network reliability. In this thesis we present different techniques for network reliability estimation. There are two main sampling techniques in reliability estimation: combinatorial and permutational sampling. Combinatorial sampling has the advantage of speed but performs poorly in rare-event simulations; permutational sampling gives good simulation performance but at a higher computational cost. We combine the two techniques and propose a hybrid sampling scheme called Tree Cut and Merge (TCM). By employing simple bounding together with clever conditional sampling, the TCM scheme achieves over 10^7 times speed-up in certain classes of heterogeneous networks. The Crude Monte Carlo (combinatorial) component in the Tree Cut and Merge scheme may cause problems in some situations; in bad cases, the slow convergence problem re-appears. To address this, we modified the scheme by introducing the Importance Sampling technique. The new Tree Cut and Merge with Importance Sampling scheme maintains the speed advantage of Tree Cut and Merge while minimizing the potential problems caused by the Crude Monte Carlo component. Associated with the Importance Sampling technique, a technique called the Cross-Entropy method was developed in the late 1990s to find optimal Importance Sampling parameters. By employing the Cross-Entropy technique, we propose a new scheme called the Merge Process with Cross-Entropy. The new scheme improves the Merge Process in nearly all classes of network; in contrast, the Tree Cut and Merge with Importance Sampling scheme sees the greatest improvement in heterogeneous networks. Besides estimating the reliability of a single network, this thesis also investigates a closely related problem: estimating the difference in reliability of two very similar networks. This problem is closely linked to applications in network optimization, network evolution, reconfiguration and recovery, for example. The fact that the probabilities of rare events are hard to estimate makes estimating their difference even more difficult. Coupled and differential sampling techniques are proposed and applied to various schemes in this thesis, and they prove to be superior to the conventional independent "estimate and subtract" method. Interestingly, these concepts also lead to new ideas regarding the estimation of the reliability of networks that are similar to networks with polynomially computable reliability.
Thesis (Ph.D.)--School of Mathematical Sciences, 2005.
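A crude Monte Carlo estimator of network unreliability takes only a few lines, and it also shows why rare events hurt: when failures occur once in millions of samples, almost all samples are wasted. A self-contained sketch on a toy bridge network (my own example, unrelated to the thesis's test networks):

```python
import random

# Toy 4-node bridge network: edges given as (u, v, failure_probability)
EDGES = [(0, 1, 0.05), (0, 2, 0.05), (1, 2, 0.05), (1, 3, 0.05), (2, 3, 0.05)]

def connects(up_edges, source=0, target=3):
    """Depth-first search over the surviving (up) edges."""
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return target in seen

def crude_mc(n_samples=100_000):
    """Fraction of sampled edge states in which source and target are
    disconnected -- an unbiased estimate of two-terminal unreliability."""
    failures = 0
    for _ in range(n_samples):
        up = [(u, v) for u, v, q in EDGES if random.random() > q]
        if not connects(up):
            failures += 1
    return failures / n_samples

print(crude_mc())
```

With link failure probabilities of 10^-4 instead of 0.05, almost every sample would return "connected", which is exactly the slow-convergence problem the hybrid and importance-sampling schemes above are designed to overcome.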
18

Mason, Haley Alissa. "Determining Reliability Of The PEAK Assessment Tool Using Split Half Reliability". OpenSIUC, 2015. https://opensiuc.lib.siu.edu/theses/1789.

Full text available
Abstract:
The present study examined the internal reliability of the PEAK Relational Training Assessment using a split-half method of measurement. The reliability of the assessment questions within each of the four factors of the PEAK Relational Training Assessment was estimated through this process. Eighteen participants between the ages of 26 months and ten years were included in the study. All participants had been diagnosed with either a language-based or a developmental disability, including autism, seizure disorder, Down syndrome and related language disorders. The PEAK Relational Training Assessment (PEAK-D) was administered by a direct-care provider for each of the 18 participants during standard instructional periods. Results indicate that, for each of the 18 participants, there was a strong correlation between scores when one half of the items in each factor was compared with the remaining half. The results thus support the internal reliability of the PEAK-D under a split-half methodology.
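Split-half reliability is straightforward to compute. The sketch below uses the textbook recipe (random item split plus the Spearman-Brown correction); the data layout and simulated responses are my assumptions, not the study's records:

```python
import numpy as np

def split_half_reliability(scores, seed=0):
    """scores: (participants x items) matrix of item scores.
    Returns the Spearman-Brown corrected split-half coefficient."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(scores.shape[1])
    half_a = scores[:, order[0::2]].sum(axis=1)  # totals on one random half
    half_b = scores[:, order[1::2]].sum(axis=1)  # totals on the other half
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)  # corrects for halving the test length

# Simulated demo: 18 participants, 40 binary items driven by one ability factor
rng = np.random.default_rng(1)
ability = rng.normal(size=(18, 1))
items = (rng.random((18, 40)) < 1 / (1 + np.exp(-ability))).astype(int)
print(round(split_half_reliability(items), 3))
```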
19

Er, Kim Hua. "Analysis of the reliability disparity and reliability growth analysis of a combat system using AMSAA extended reliability growth models". Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/1788.

Full text available
Abstract:
The first part of this thesis aims to identify and analyze the aspects of the MIL-HDBK-217 prediction model that cause the large variation between predicted and field reliability. The key findings of the literature research suggest that the main reason for the inaccuracy in prediction is that the constant failure rate assumption used in MIL-HDBK-217 is usually not applicable. Secondly, even if the constant failure rate assumption is applicable, the disparity may still exist in the presence of design- and quality-related problems in new systems. A possible solution is to apply reliability growth testing (RGT) to new systems during the development phase in an attempt to remove these design deficiencies so that the system's reliability will grow and approach the predicted value. In view of the importance of RGT in minimizing the disparity, this thesis provides a detailed application of the AMSAA Extended Reliability Growth Models to the reliability growth analysis of a combat system. It shows how program managers can analyze test data using commercial software to estimate the system's demonstrated reliability and the increase in reliability due to delayed fixes.
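MIL-HDBK-217-style prediction is, at heart, a sum of constant part failure rates. The sketch below uses invented rates purely to show the arithmetic, and it makes visible why the exponential (constant failure rate) assumption criticized above is baked in:

```python
import math

# Hypothetical part failure rates in failures per million hours
# (illustrative values, NOT taken from MIL-HDBK-217 tables)
part_rates = {"microprocessor": 0.12, "dram": 0.05, "connector": 0.02,
              "power_supply": 0.30, "oscillator": 0.04}

lam = sum(part_rates.values()) / 1e6       # system failure rate per hour
mtbf = 1 / lam                             # constant-rate MTBF in hours

mission_hours = 5_000
reliability = math.exp(-lam * mission_hours)   # R(t) = exp(-lambda * t)
print(f"MTBF = {mtbf:,.0f} h, R({mission_hours} h) = {reliability:.4f}")
```

If any part's hazard actually rises with age (wear-out) or falls (early failures removed by burn-in), the constant-rate sum misstates field reliability, which is the disparity this thesis investigates.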
20

林達明 and Daming Lin. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

Full text available
21

Kaidis, Christos. "Wind Turbine Reliability Prediction: A SCADA Data Processing & Reliability Estimation Tool". Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-221135.

Full text available
Abstract:
This research project discusses the life-cycle analysis of wind turbines through the processing of operational data from two modern European wind farms. A methodology for SCADA data processing was developed, combining previous research findings and in-house experience, followed by statistical analysis of the results. The analysis was performed by dividing the wind turbine into assemblies and the failure events into severity categories. Depending on the failure severity category, a different statistical methodology was applied, examining reliability growth and the applicability of the "bathtub curve" concept to wind turbine reliability analysis. Finally, a methodology for adapting the results of the statistical analysis to site-specific environmental conditions is proposed.
22

Lin, Daming. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint /". Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13999618.

Full text available
23

Yu, Xuebei. "Distribution system reliability enhancement". Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41091.

Full text available
Abstract:
Practically all everyday tasks, from economic transactions to entertainment, depend on the availability of electricity, and some customers have come to expect a higher level of power quality and availability from their electric utility. Federal and state standards are now mandated for power service quality, and utilities may be penalized if the number of interruptions exceeds the mandated standards. In order to meet the requirements for safety, reliability and quality of supply in distribution systems, adaptive relaying and optimal network reconfiguration are proposed. By optimizing the system to be better prepared to handle a fault, the minimum number of customers is affected in the event of a fault, and reliability therefore increases. The main function of power system protection is to detect and remove the faulted parts as fast and as selectively as possible. The problem of coordinating protective relays in electric power systems consists of selecting suitable settings such that their fundamental protective function is met under the requirements of sensitivity, selectivity, reliability and speed. In the proposed adaptive relaying approach, weather data are incorporated as follows: using real-time weather information, the potential area that might be affected by severe weather is determined, and an algorithm is proposed for adaptive optimal relay setting (relays will optimally react to a potential fault). Different types of relays (and relay functions) and fuses are considered in this optimization problem, as well as their coordination with one another. The proposed optimization method is based on mixed-integer programming and provides the optimal relay settings, including pickup current, time dial setting and relay functions. The main function of optimal network reconfiguration is to maximize the power supply using the existing breakers and switches in the system; the ability to quickly and flexibly reconfigure an interconnected network of feeders is a key component of the Smart Grid. New technologies are being introduced into distribution systems, such as advanced metering, distribution automation, distributed generation and distributed storage, and with these new technologies optimal network reconfiguration becomes more complicated. The proposed algorithms are implemented and demonstrated on a realistic test system. The end result is improved reliability, quantified with reliability indices such as SAIDI.
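The pickup current and time dial setting mentioned above enter through the relay's inverse-time characteristic. The sketch below uses the IEEE C37.112 moderately-inverse curve constants; the fault current and settings are arbitrary examples, and this is not claimed to be the thesis's exact formulation:

```python
def ieee_trip_time(i_fault, i_pickup, tds, A=0.0515, B=0.1140, p=0.02):
    """Operating time in seconds of an IEEE C37.112 moderately-inverse
    overcurrent relay; valid for i_fault > i_pickup."""
    m = i_fault / i_pickup            # multiple of pickup current
    return tds * (A / (m**p - 1) + B)

# Coordination idea: the downstream relay (smaller TDS) trips first and the
# upstream backup waits a coordination margin on the same fault current.
for tds in (1.0, 2.0):                # example time dial settings
    print(tds, round(ieee_trip_time(i_fault=2000, i_pickup=400, tds=tds), 3))
```

An optimization such as the one proposed here would, in effect, choose i_pickup and tds for every relay so that these curves remain selective under the constraints of sensitivity and speed.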
24

Chen, Hua. "Generating system reliability optimization". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/NQ63854.pdf.

Full text available
25

Vromans, Michiel Johannes Cornelius Maria. "Reliability of railway systems". [Rotterdam]: Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam ; Rotterdam : Erasmus University Rotterdam [Host], 2005. http://hdl.handle.net/1765/6773.

Full text available
26

Stewart, Scott C. "General aviation reliability study /". Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1328062981&sid=36&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text available
27

Nytomt, Fredrik. "Service reliability and maintainability /". Luleå, 2004. http://epubl.luth.se/1402-1757/2004/55.

Full text available
28

Alemzadeh, Kazem. "Microprocessor-based reliability monitoring". Thesis, University of Bradford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.253316.

Full text available
29

Mohd, Nawi Illani Binti. "Reliability of analogue circuits". Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/423575/.

Full text available
Abstract:
The reliability of CMOS circuits has worsened due to technology scaling. A review of previous work on the reliability of CMOS circuits shows that both digital and analogue circuits are susceptible to single event effects. Single event effects, although causing non-permanent errors, have already been identified as the cause of billions of dollars' worth of losses. Single event transients have been established as one of the single event effects that may reduce the reliability of analogue circuits and of safety-critical systems in general. The impact of radiation effects on analogue circuits is investigated in this thesis using circuit-level single event transient modelling, and the impact of single event transients is characterized for several analogue circuits, namely an operational amplifier and a comparator, both recognized to be susceptible to single event transients. Several influencing factors, identified in previous work and in this study, affect the severity of the single event transients in these circuits. A sensitivity analysis was completed to determine the most and least sensitive transistors to be used in the variability analysis. The variability analysis addresses the impact of the influencing factors, and this information may be used in finding the trade-offs that exist between the influencing factors and the single event transient; these trade-offs may also be used to mitigate the single event transient. A simple mitigation technique, still at a preliminary stage, is also included as part of this study.
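Circuit-level single event transient studies commonly inject the transient as a double-exponential current pulse at the struck node; this is a standard modelling device in the radiation-effects literature, and the thesis may well use a different waveform:

```latex
I_{\mathrm{SET}}(t) = \frac{Q}{\tau_f - \tau_r}
    \left( e^{-t/\tau_f} - e^{-t/\tau_r} \right), \qquad t \ge 0,
```

where Q is the total collected charge (the pulse integrates to Q) and τ_r, τ_f set the rise and fall times; sweeping Q and the time constants is one way to run the kind of sensitivity and variability analysis described above.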
30

Dalla, Valle Paola. "Reliability in pavement design". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/28999/.

Full text available
Abstract:
This research presents a methodology that accounts for the variability of key pavement design input variables and for variations due to lack of fit of the design models, and assesses their effects on pavement performance (fatigue and deformation life). Variability is described by statistical terms such as the mean and standard deviation and by its probability density distribution. The subject of reliability in pavement design has pushed many highway organisations around the world to review their design methodologies and to evaluate the effect of variations in materials on pavement performance. This research reinforces the need to consider the variability of design parameters in the design procedure and to treat a pavement system probabilistically, as is done in structural design. Only flexible pavements are considered. The sites considered for the analysis, all in the UK (including Northern Ireland), were mainly motorways or major trunk roads. The pavement survey data analysed were for Lane 1, the most heavily trafficked lane, and sections 1 km long were considered wherever possible. Statistical characterisation of the variation of the layer thickness, asphalt stiffness and subgrade stiffness input parameters is addressed. A model is then proposed which improves on the Method of Equivalent Thickness for the calculation of strains and life for flexible pavements; the output is a statistical assessment of the estimated pavement performance. The proposed model for calculating fatigue and deformation life is very fast and simple, and is well suited to use in a pavement management system where stresses and strains must be calculated millions of times. The research shows that the parameters with the greatest influence on the variability of predicted fatigue performance are the asphalt stiffness modulus and thickness, while those with the greatest influence on the variability of predicted deformation performance are the granular subbase thickness, the asphalt thickness and the subgrade stiffness.
31

Tarokh, M. J. "Systems reliability performance modelling". Thesis, University of Bradford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715427.

Full text available
32

Paniagua, Sánchez-Mateos Jesús. "Reliability-Constrained Microgrid Design". Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187715.

Full text available
Abstract:
Microgrids are new and challenging power systems under development. This report presents a feasibility study of microgrid development, an essential task before implementing microgrid systems: it is extremely important to know the number and size of the distributed energy resources (DERs) needed, and it is necessary to compare investment costs with benefits in order to evaluate the profitability of microgrids. Under the assumption that a large number of DERs improves the reliability of microgrids, an optimization problem is formulated to obtain the right mix of distributed energy resources. Uncertainty in physical and financial parameters is taken into account by modelling the problem over different scenarios: uncertainty arises in load demand, renewable energy generation and electricity market price forecasts, in the availability of distributed energy resources, and in microgrid islanding, and it is modelled stochastically. The optimization problem is first formulated as a mixed-integer program solved via branch and bound, and is then improved by formulating a two-stage problem using Benders decomposition, which shortens the solution time. This problem is divided into a long-term investment master problem and a short-term operation subproblem and is solved iteratively until convergence is reached. The Benders decomposition problem is applied to real data from the Illinois Institute of Technology (IIT), and it gives the ideal mix of distributed energy resources, selected from an initial set, for different uncertainty scenarios. This proves the usefulness of the optimization technique, which can also be applied to other microgrids and data. The solutions obtained for the different scenarios are explained and analyzed; they show the feasibility of microgrid implementation and identify the scenarios most favorable to implementing a microgrid successfully. Reliability is a term highly linked to the microgrid concept and one of the most important reasons for microgrid development. An analysis of reliability importance is therefore implemented, using the importance index of interruption cost, in order to measure and quantify the reliability improvement that developing microgrids brings to the system.
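In outline, the decomposition described above has the generic two-stage stochastic-programming form (schematic notation of my own; the thesis's exact variables and constraints are not reproduced here):

```latex
\min_{x \in \{0,1\}^n} \; c^{\top}x + \sum_{s} p_s\, Q(x, s),
\qquad
Q(x, s) = \min_{y \ge 0} \bigl\{\, q_s^{\top} y \;:\; W y \ge h_s - T_s x \,\bigr\},
```

where x holds the long-term DER investment decisions, s indexes the uncertainty scenarios with probabilities p_s, and Q(x, s) is the short-term operation cost. Benders decomposition solves a master problem over x and adds cuts derived from the subproblems' dual solutions until the two stages agree, which is the iterative convergence the abstract mentions.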
33

BALDISSONE, GABRIELE. "Process Intensification Vs. Reliability". Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2556157.

Full text available
Abstract:
Over the centuries the equipment used by the process industry has changed little: it has been perfected, but never substantially redesigned. Indeed, the type of chemical reactor currently in use, the stirred tank, works in the same way as one built in 1800; materials, control systems and safety systems have changed, but the basic engineering has remained the same. In recent years new equipment has been proposed that performs the same functions as the existing equipment while occupying less space, requiring less power and operating more safely. The changes required in a plant to achieve the above-mentioned objectives are called Process Intensification, which can be described as follows: "Any chemical engineering development that leads to a substantially smaller, cleaner, safer and more energy efficient technology is process intensification". From the Process Intensification point of view, one can mention the development of new equipment such as Spinning Disk Reactors and Heat Exchange (HEX) Reactors, characterized by a remarkable technological jump with respect to existing equipment: designers began to exploit previously neglected physical phenomena, such as centrifugal force in the spinning disk reactor, or to combine several unit operations into one piece of equipment, as in Reverse-Flow Reactors and Reactive Distillation. These recent developments certainly provide more compact and cleaner plants, but there is more uncertainty about their ability to actually increase safety. The use of more complex equipment, for example with moving parts or more intense sources of energy, can even introduce safety problems not seen in traditional plants, and can modify the reliability of the system. Under the definition of Process Intensification it is possible to identify different kinds of improvements to plants: in order to analyze the effects of these improvements on safety and reliability, we assessed the reliability of a traditional plant and of an intensified plant and compared the results. The process analyzed is a plant for VOC (Volatile Organic Compound) abatement in a stream of inert gas. The traditional system is based on a fixed bed reactor; the intensified plant uses a Reverse-Flow Reactor. The selected plants were first subjected to a traditional safety analysis, using an operability study followed by the extraction and quantification of the fault trees. During the analysis, we realized that the traditional methods (HAZOP and FT) worked well when applied to conventional systems, which reach a steady state, but were less suitable for modern plants, which operate in a transient regime. After the traditional safety analysis, we therefore proceeded with an Integrated Dynamic Decision Analysis, which allows a more detailed evaluation of the behaviour of non-stationary plants in case of failure. From the application of the methodology to this specific case, some general conclusions have been drawn.
34

Masiello, Gregory L. "Reliability the life cycle driver : an examination of reliability management culture and practices". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FMasiello.pdf.

Full text available
35

Kallan, Michael A. "Characterizing reliability for a Faculty Climate Survey: Estimation model dependencies and reliability generalization". Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/280288.

Full text available
Abstract:
Methods. Four reliability estimation models were employed to obtain estimates for faculty appointment and gender group measures derived from four questionnaire scales of a Faculty Climate Survey. Faculty responses were analyzed via (a) coefficient alpha, (b) IRT-Rasch, (c) IRT-Unfolding, and (d) CFA methods. Estimates and their components were compared across groups within each scale and within each group across scales to determine differences among estimation models and to uniquely characterize those differences. Scale dimensionality was assessed per scale per group using CFA. Secondary analyses included: (a) independent and dependent-group tests to determine the statistical significance of coefficient alpha differences; (b) bootstrapping simulation to determine the effect of sample size on estimates; and (c) analysis of variance to determine whether attitudinal differences existed between appointment, gender, or appointment-by-gender groups. Results. (1) Reliability estimation models identified important differences between appointment and gender group estimates for scale measures and among scale estimates for each group's set of scale measures. (2) Models were not equally sensitive in detecting differences, either between groups or among scales per group. (3) Alpha and CFA estimates did not always function as lower and upper bounds of an expected estimate range: 30% of alpha-CFA range "endpoints" were underestimates of observed ranges. (4) IRT-based estimates were generally located between alpha and CFA estimates, closer to the alpha than to the CFA estimates. (5) IRT-Unfolding estimates were frequently, but not always, greater than IRT-Rasch estimates: 30% were smaller. (6) Alpha and CFA estimation components did not provide comparable item-level information; thus, alpha and CFA plans for characterizing and improving scales differed. (7) IRT-Rasch and IRT-Unfolding estimation components did not provide comparable person-measure information, thereby informing the observed differences in IRT-based estimates. (8) Sample size had an effect on CFA estimation: samples of N = 50 achieved the highest estimates, while samples of N = 500 best reproduced the original estimates and components. (9) Modeling error via CFA made meaningful contributions to understanding scale functioning. (10) ANOVA findings were potentially modifiable (e.g., effect sizes) in light of the obtained reliability estimates. Conclusion. Reliability estimates have group, measure, and model dependencies that influence the size and nature of obtained estimates and must be accounted for when estimates are interpreted.
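Coefficient alpha, the first of the four estimators compared above, is a one-line computation. A minimal sketch with the standard formula and simulated (not survey) data:

```python
import numpy as np

def cronbach_alpha(X):
    """X: (respondents x items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))            # one common factor
items = factor + rng.normal(size=(200, 8))    # 8 noisy indicators
print(round(cronbach_alpha(items), 3))        # ~0.89 for this setup
```

The IRT and CFA estimates the dissertation compares require fitting a measurement model rather than a closed-form expression, which is precisely why their components carry different diagnostic information.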
36

Wang, Jia. "Reliability analysis and reliability-based optimal design of linear structures subjected to stochastic excitations /". View abstract or full-text, 2010. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202010%20WANG.

Full text available
37

Owens, Gethin Lloyd. "Design of a reliability methodology : modelling the influence of temperature on gate oxide reliability". Thesis, Durham University, 2007. http://etheses.dur.ac.uk/2695/.

Full text available
Abstract:
An Integrated Reliability Methodology (IRM) is presented that encompasses the changes that technology growth has brought with it and includes several new device degradation models. Each model is based on a physics-of-failure approach and includes the effects of temperature. At all stages the models are verified experimentally on modern deep sub-micron devices. The research provides the foundations of a tool which gives the user the opportunity to make appropriate trade-offs between performance and reliability, and which can be implemented in the early stages of product development.
38

Wang, Shuo. "The Reliability Paradox: When High Reliability Does not Signal Reliable Detection of Experimental Effects". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1556893720324442.

Full text available
39

Hamill, Margaret L. "Empirical analysis of software reliability". Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4562.

Full text available
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains vi, 62 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 58-60).
40

Mwanga, Alifas Yeko. "Reliability modelling of complex systems". Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-12142006-121528.

Full text available
41

Ankar, Marcus. "Reliability in code generating software". Thesis, University West, Department of Informatics and Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-588.

Full text available
42

Solver, Torbjörn. "Reliability in performance-based regulation". Licentiate thesis, KTH, School of Electrical Engineering (EES), 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-606.

Full text available
Abstract:
In reregulated and restructured electricity markets the production and retail of electricity are conducted in competitive markets; transmission and distribution, on the other hand, can be considered natural monopolies. The financial regulation of Distribution System Operators (DSOs) has in many countries, partly as a consequence of the restructuring of ownership, gone through a major switch in regulatory policy: from regimes in which the DSOs were allowed to charge their customers according to their actual cost plus some profit (cost-based regulation) to models in which the DSOs' performance is valued in order to set the allowable revenue (Performance-Based Regulation, PBR). In regulatory regimes that value performance, the direct link between cost and income is weakened or sometimes removed. This gives the regulated DSOs strong cost-cutting incentives, and there is consequently a risk of system reliability deteriorating as maintenance and investments are postponed to save costs. To balance this risk, the PBR framework is normally complemented with some kind of quality regulation (QR). How the PBR and QR frameworks are constructed determines the incentives that the DSO will act on and therefore influences the development of system reliability.

This thesis links the areas of distribution system reliability and performancebased regulation. First, the key incentive features within PBR, that includes the quality of supply, are identified using qualitative measures that involve analyses of applied regulatory regimes, and general regulatory policies. This results in a qualitative comparison of applied PBR models. Further, the qualitative results are quantified and analysed further using time sequential Monte Carlo simulations (MCS). The MCS enables detailed analysis of regulatory features, parameter settings and financial risk assessments. In addition, the applied PBRframeworks can be quantitatively compared. Finally, some focus have been put on the Swedish regulation and the tool developed for DSO regulation, the Network Performance Assessment Model (NPAM), what obstacles there might be and what consequences it might bring when in affect.

Styles: APA, Harvard, Vancouver, ISO, etc.
43

Kvalø, Tarjei Olsen. "Reliability of Ignition Source Isolation". Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10934.

Full text source
Abstract:

This thesis is a study of how the effectiveness of ignition source isolation can be estimated. This safety system works by isolating electrical equipment from power when flammable gas is detected on oil and gas installations. The main motivation was to improve the understanding of how effective this system actually is at reducing explosion risk in hazardous areas, as this could help operators and authorities form a more accurate risk picture. The main part of the work is the development and discussion of an ignition model that can be used to estimate this effectiveness. A detailed model, based on evaluating the failure modes of equipment, is presented first; suggestions are then made for how it could be simplified to be of practical use in risk analysis. The second part of the project was to gather enough data from industry sources to estimate key parameters in the model relating to the failure probability of Ex barriers and the ignition probability associated with common types of process equipment. Not enough data was found to support a quantification of these parameters, but results from a major maintenance project on an oil and gas installation in the Norwegian sector were reviewed and discussed. The method of systematically evaluating failure modes in order to determine risk could be useful in other applications, and the suggested way to proceed with this work is to continue gathering and analysing data to build up a credible set of failure probabilities for Ex barriers and common equipment types.
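To make the structure of such a model concrete, here is a minimal sketch (ours, not the thesis model; all probabilities are hypothetical placeholders) of how the effectiveness of ignition source isolation could be quantified from per-equipment ignition and isolation-failure probabilities:

equipment = {
    # name: (ignition probability given energized exposure to gas,
    #        probability the isolation system fails to de-energize it)
    "pump_motor":   (0.05, 0.02),
    "junction_box": (0.01, 0.02),
    "heater":       (0.10, 0.05),
}

def p_ignition(items, isolation_active):
    """P(at least one item ignites the gas cloud), assuming independence."""
    p_no_ignition = 1.0
    for p_ign, p_isol_fail in items.values():
        p_exposed = p_isol_fail if isolation_active else 1.0
        p_no_ignition *= 1.0 - p_ign * p_exposed
    return 1.0 - p_no_ignition

p_with = p_ignition(equipment, isolation_active=True)
p_without = p_ignition(equipment, isolation_active=False)
print(f"effectiveness = {1 - p_with / p_without:.3f}")

The thesis's point about data availability applies directly: the two probabilities per equipment item are exactly the parameters for which not enough industry data was found.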

Styles: APA, Harvard, Vancouver, ISO, etc.
44

Berntsen, Per Ivar Barth. "Structural reliability based position mooring". Doctoral thesis, Norwegian University of Science and Technology, Department of Marine Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2134.

Full text source
Abstract:

This thesis considers control of moored marine structures, referred to as position mooring. Moored marine structures can take on a number of different forms, and two applications are considered in this work, namely aquacultural farms and petroleum producing vessels. It is anticipated that future aquacultural farms will be significantly larger than the existing ones, and placed in much more exposed areas. Hence, there is a significant technology transfer potential between the two seemingly different fields of aquaculture and petroleum exploitation.

Today’s state-of-the-art positioning controllers use predetermined safety regions and gain scheduling to evaluate the thruster force necessary for the vessel to operate safely. This is a suboptimal solution: the operator is given a significant number of variables to consider, and the thrusters are run more than necessary. Moreover, a more conservative controller regime does not necessarily increase the overall reliability of the structure compared with a less conservative but better-designed controller.

Motivated by this, a new control methodology and strategy for position mooring is developed. Two controllers that use information about the reliability of the mooring system are implemented and tested, both via numerical simulations and model-scale experiments. The first controller uses a reliability criterion, based on the tension in the mooring system, as a pretuning device. A nonlinear function based on the energy contained in the system is included in the controller to ensure that the thrusters are run only when needed. The controller is an output-feedback controller, based on measurements of position and estimated values of the velocities and the slowly varying environmental loads. The second controller contains the reliability criterion intrinsically, so less pretuning is needed. The backstepping technique is applied during the design process, and the controller has global asymptotic stability properties.
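As an illustration of the kind of tension-based reliability criterion described (a minimal sketch under our own assumptions, not the controller from the thesis), a simple second-moment reliability index per mooring line might be computed and thresholded as follows:

def reliability_index(mean_tension, std_tension, breaking_strength):
    """Second-moment reliability index for one mooring line."""
    return (breaking_strength - mean_tension) / std_tension

lines = [  # (mean tension [kN], std of tension [kN], breaking strength [kN])
    (800.0, 120.0, 2000.0),
    (950.0, 150.0, 2000.0),
]
DELTA_MIN = 4.0  # assumed safety threshold
for i, (m, s, b) in enumerate(lines):
    delta = reliability_index(m, s, b)
    print(f"line {i}: delta = {delta:.2f}",
          "-> increase thrust" if delta < DELTA_MIN else "-> ok")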

Styles: APA, Harvard, Vancouver, ISO, etc.
45

Fugelseth, Lars, and Stian Frydenlund Lereng. "Code Coverage and Software Reliability". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9220.

Full text source
Abstract:

With ever-growing competition among software vendors in supplying customers with tailored, high-quality systems, emphasis is put on creating products that are well tested and reliable. Over the last decade and a half, numerous articles have been published that deal with code coverage and its effect, whether existent or not, on reliability. The last few years have also seen an increasing number of software tools for automating the collection and presentation of code coverage data for applications under test. In this report we aim to present available and frequently used measures of code coverage, their practical applications, and typical misconceptions about code coverage and its role in software development today. We then look at the notion of reliability in computer systems and the elements that constitute a software reliability model. With the basics of code coverage and reliability estimation in place, we try to assess the status of the relationship between code coverage and reliability, highlight the arguments for and against its existence, and briefly survey a few proposed models for connecting code coverage to reliability. Finally, we examine an open-source tool for automated code coverage analysis, focusing on its implementation of the coverage measures it supports, before assessing the feasibility of integrating a proposed approach for reliability estimation into this software utility.
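As a toy illustration of the two ingredients discussed, measuring coverage and linking it to reliability, here is a minimal sketch; the exponential defect-decay model and all parameters are our own assumptions, not a model from the report:

import math

executed = {12, 13, 14, 17, 20, 21}     # line numbers hit during testing
executable = set(range(12, 24))         # all executable lines in the unit
coverage = len(executed & executable) / len(executable)

# Assumed illustrative model: residual defects decay exponentially with
# statement coverage, N(c) = N0 * exp(-a * c); reliability improves as
# residual defects are removed.
N0, a = 50, 3.0                         # hypothetical parameters
residual = N0 * math.exp(-a * coverage)
print(f"coverage = {coverage:.0%}, estimated residual defects = {residual:.1f}")

Whether any such functional link holds in practice is precisely the contested question the report surveys.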

Styles: APA, Harvard, Vancouver, ISO, etc.
46

Lenz, Malte, and Johan Rhodin. "Reliability calculations for complex systems". Thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69952.

Full text source
Abstract:
Functionality for efficient computation of properties of system lifetimes was developed, based on the Mathematica framework. The model of such a system consists of a system structure and the components' independent lifetime distributions. The components are assumed to be non-repairable. A very general implementation was created, allowing any of a large number of lifetime distributions from Mathematica to be used for the component distributions. All system structures with a monotone increasing structure function can be used. Special effort has been made to compute results quickly when the exponential distribution is used for the component distributions. Standby systems have also been modelled in similar generality; both warm and cold standby components are supported. During development, a large collection of examples was used to test functionality and efficiency, and a number of these examples are presented. The implementation was evaluated on large real-world system examples and was found to be efficient. New results are presented for standby systems, especially for the case of mixed warm and cold standby components.
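For concreteness, here is a minimal sketch of this kind of computation, in Python rather than Mathematica and with invented failure rates: system reliability from a monotone structure function and independent exponential component lifetimes, evaluated by summing over all component states.

import itertools, math

rates = [0.001, 0.002, 0.0015]          # per-hour failure rates (invented)

def phi(x):
    """Structure function: component 1 in series with parallel pair (2, 3)."""
    return x[0] * max(x[1], x[2])

def system_reliability(t):
    """Sum over all component states, weighting by state probability."""
    r = [math.exp(-lam * t) for lam in rates]  # P(component i works at time t)
    total = 0.0
    for x in itertools.product((0, 1), repeat=len(rates)):
        p = math.prod(ri if xi else 1 - ri for xi, ri in zip(x, r))
        total += phi(x) * p
    return total

print(system_reliability(1000.0))

State enumeration is exponential in the number of components; the efficiency the abstract emphasises comes from exploiting structure, which this sketch deliberately omits.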
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Solver, Torbjörn. "Reliability in performance-based regulation". Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-606.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Sasse, Guido Theodor. "Reliability engineering in RF CMOS". Enschede : University of Twente [Host], 2008. http://doc.utwente.nl/59032.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Wittchen, Hans-Ulrich, Cecilia Ahmoi Essau, Heidemarie Hecht, Wolfgang Teder, and Hildegard Pfister. "Reliability of life event assessments". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-103810.

Full text source
Abstract:
This paper presents the findings of two independent studies which examined the test-retest reliability and the fall-off effects of the Munich Life Event List (MEL). The MEL is a three-step interview procedure for assessing life incidents which focuses on recognition processes rather than free recall. In the reliability study, test-retest coefficients of the MEL, based on a sample of 42 subjects, were quite stable over a 6-week interval. Stability for severe incidents appeared to be higher than for less severe ones. In the fall-off study, a total fall-off rate of 30% was noted for all incidents reported retrospectively over an 8-year period. A more detailed analysis revealed an average monthly fall-off of 0.36%. Fall-off was greater for non-severe and positive incidents than for severe incidents; this was particularly evident in the symptomatic groups. Non-symptomatic males reported a higher overall number of life incidents than females, partly owing to more frequent reporting of severe incidents. The findings of the fall-off study do not support the common belief that the reliability of life incident reports is much worse when the assessment period extends over several years rather than the traditional 6-month period.
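As a rough consistency check (our arithmetic, not the paper's): if the average monthly fall-off of 0.36% compounds multiplicatively over the 8-year (96-month) window, the cumulative fall-off is 1 − (1 − 0.0036)^96 ≈ 0.29, in line with the reported total of about 30%.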
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Naumann, Michael. "MEMS reliability in shock environments". Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-117360.

Full text source
Abstract:
This thesis presents a method for estimating and improving the reliability of microelectromechanical systems (MEMS) with respect to shock-induced failure mechanisms as early as the design phase of new products. The approach addresses both fracture- and adhesion-related failure mechanisms and involves two main steps. First, system models of the micromechanical systems under investigation are created, which allow the shock response and the resulting loads, in terms of deflections, deformations, and impact forces, to be computed. In a second step, the technology intended for fabrication is systematically characterized with respect to the occurrence of both shock-induced failure mechanisms and their dependence on various environmental conditions and operating parameters. The data obtained from this process characterization are used to derive process-specific failure criteria against which the previously computed loads can be assessed. In this way, it can be estimated to what extent the reliability of the micromechanical structures under consideration is affected and by which measures it can be improved.
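As an illustration of the first step described, here is a lumped-parameter sketch (ours, not the thesis implementation; all parameters are hypothetical) of a MEMS proof mass under a half-sine base shock, with the peak deflection checked against an assumed process-specific fracture criterion:

import math

m = 1e-9                              # proof mass [kg]
f0 = 20_000.0                         # resonance frequency [Hz]
k = m * (2 * math.pi * f0) ** 2       # spring constant [N/m]
c = 2 * 0.02 * math.sqrt(k * m)       # 2% damping ratio
A, T = 1500 * 9.81, 0.5e-3            # 1500 g half-sine shock, 0.5 ms duration

x = v = 0.0                           # deflection relative to the base
dt, t, peak = 1e-7, 0.0, 0.0
while t < 10 * T:
    a_base = A * math.sin(math.pi * t / T) if t < T else 0.0
    acc = (-k * x - c * v) / m - a_base   # relative-coordinate equation of motion
    v += acc * dt                         # semi-implicit Euler step
    x += v * dt
    peak = max(peak, abs(x))
    t += dt

x_fracture = 10e-6                    # assumed failure criterion: 10 um deflection
print(f"peak deflection {peak * 1e6:.2f} um ->",
      "fail" if peak > x_fracture else "ok")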
Styles: APA, Harvard, Vancouver, ISO, etc.