To view the other types of publications on this topic, follow this link: Reliability (Engineering) Mathematical models.

Dissertations on the topic "Reliability (Engineering) Mathematical models"

Explore the top 50 dissertations for research on the topic "Reliability (Engineering) Mathematical models".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of specialist fields and build your bibliography correctly.

1

Lu, Jin 1959. „Degradation processes and related reliability models“. Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39952.

Full text of the source
Annotation:
Reliability characteristics of new devices are usually demonstrated by life testing. When lifetime data are sparse, as is often the case with highly reliable devices, expensive devices, and devices for which accelerated life testing is not feasible, reliability models that are based on a combination of degradation and lifetime data represent an important practical approach. This thesis presents reliability models based on the combination of degradation and lifetime data or degradation data alone, with and without the presence of covariates. Statistical inference methods associated with the models are also developed.
The degradation process is assumed to follow a Wiener process. Failure is defined as the first passage of this process to a fixed barrier. The degradation data of a surviving item are described by a truncated Wiener process and lifetimes follow an inverse Gaussian distribution. Models are developed for three types of data structures that are often encountered in reliability studies: terminal point data (a combination of degradation and lifetime data) and mixed data (an extended case of terminal point data); conditional degradation data; and covariate data.
Maximum likelihood estimators (MLEs) are derived for the parameters of each model. Inferences about the parameters are based on asymptotic properties of the MLEs and on the likelihood ratio method. An analysis of deviance is presented and approximate pivotal quantities are derived for the drift and variance parameters. Predictive density functions for the lifetime and the future degradation level of either a surviving item or a new item are obtained using empirical Bayes methods. Case examples are given to illustrate the applications of the models.
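The first-passage construction described above translates directly into a small simulation. The sketch below (all numerical values are assumed for illustration, not taken from the thesis) simulates Wiener degradation paths, records first passage to a fixed barrier, and checks the resulting lifetimes against the inverse Gaussian distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Assumed illustration values: drift, diffusion, failure barrier.
mu, sigma, barrier = 0.5, 1.0, 10.0
dt, n_paths, n_steps = 0.01, 2000, 20000

first_passage = np.full(n_paths, np.nan)
for i in range(n_paths):
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    path = np.cumsum(increments)             # Wiener degradation path
    idx = np.argmax(path >= barrier)         # first index at/above the barrier
    if path[idx] >= barrier:                 # guard: argmax is 0 if never crossed
        first_passage[i] = (idx + 1) * dt

observed = first_passage[~np.isnan(first_passage)]

# Theory: the first passage of a positive-drift Wiener process to a fixed
# barrier is inverse Gaussian with mean barrier/mu and shape (barrier/sigma)**2.
ig_mean, ig_shape = barrier / mu, (barrier / sigma) ** 2
print("empirical mean lifetime:", observed.mean())     # ~20
print("inverse Gaussian mean:  ", ig_mean)
# scipy parameterises invgauss as (mu=mean/shape, loc=0, scale=shape)
print(stats.kstest(observed, "invgauss", args=(ig_mean / ig_shape, 0, ig_shape)))
```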
APA, Harvard, Vancouver, ISO and other citation styles
2

Jiang, Siyuan. „Mixed Weibull distributions in reliability engineering: Statistical models for the lifetime of units with multiple modes of failure“. Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185481.

Full text of the source
Annotation:
The finite mixed Weibull distribution is an appropriate distribution for modeling the lifetime of units having more than one possible failure cause. Due to the lack of a systematic statistical procedure for fitting the distribution to a data set, it has not been widely used in lifetime data analyses. Many areas of this subject have been studied in this research. The following are the findings and contributions. Through a change of variable, the 5 parameters in a two-Weibull mixture can be reduced to 3. A parameter vector (p₁, η, β) defines a family of two-Weibull mixtures which have common characteristics. Numerous probability plots are investigated on Weibull probability paper (WPP). For a given p₁, the η-β plane is partitioned into seven regions, labeled A through F and S. Region S represents the two-Weibull mixtures whose cdf curves are very close to a straight line. Regions A through F represent six typical shapes of the cdf curves on WPP, respectively. The two-Weibull mixtures in one region have similar characteristics. Three important features of the two-Weibull mixture with well-separated subpopulations are proved. Two existing methods for the graphical estimation of the parameters are discussed, and one is recommended over the other. The EM algorithm is successfully applied to solve the MLE for mixed Weibull distributions when m, the number of subpopulations in a mixture, is known. The algorithms for complete, censored, grouped and suspended samples with non-postmortem and postmortem failures are developed accordingly. The developed algorithms are powerful, efficient, and insensitive to the initial guesses. Extensive Monte Carlo simulations are performed. The distributions of the MLE of the parameters and of the reliability of a two-Weibull mixture are studied. The MLEs of the parameters are sensitive to the degree of separation of the two subpopulation pdfs, but the MLE of the reliability is not. The generalized likelihood ratio (GLR) test is used to determine m. Under H₀: m=1 and H₁: m=m₁>1, the GLR ζ is independent of the parameters in the distribution under H₀. The distributions of ζ or -2ln(ζ) with n=50, 100 and 150 are obtained through Monte Carlo simulations. Compared with the chi-square distribution, they fall in between χ²(4) and χ²(6), and they are very close to χ²(5). A FORTRAN computer program is developed to conduct simulation of the GLR test for 1 ≤ m₀ < m₁ ≤ 5.
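As a rough sketch of the EM approach described above for a two-Weibull mixture with m known (all data, component values, and starting points are synthetic; the thesis's algorithms also cover censored, grouped, and suspended samples, which this sketch does not):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Synthetic data from two Weibull subpopulations (assumed parameter values).
t = np.concatenate([rng.weibull(1.5, 600) * 100.0,    # eta=100, beta=1.5
                    rng.weibull(3.0, 400) * 500.0])   # eta=500, beta=3.0

def weibull_pdf(t, eta, beta):
    z = (t / eta) ** beta
    return (beta / t) * z * np.exp(-z)

def m_step(t, w):
    """Weighted Weibull MLE: solve the profile score equation for beta, then eta."""
    logt = np.log(t)
    def score(beta):
        tb = t ** beta
        return (np.sum(w * tb * logt) / np.sum(w * tb)
                - np.sum(w * logt) / np.sum(w) - 1.0 / beta)
    beta = brentq(score, 0.05, 20.0)
    eta = (np.sum(w * t ** beta) / np.sum(w)) ** (1.0 / beta)
    return eta, beta

# EM iterations for the two-component mixture.
p = 0.5
params = [(np.median(t) / 2, 1.0), (np.median(t) * 2, 2.0)]   # crude start
for _ in range(200):
    f1 = p * weibull_pdf(t, *params[0])
    f2 = (1 - p) * weibull_pdf(t, *params[1])
    r = f1 / (f1 + f2)                  # E-step: responsibilities
    p = r.mean()                        # M-step: mixing proportion
    params = [m_step(t, r), m_step(t, 1 - r)]

print(f"p1={p:.3f}, (eta1, beta1)={params[0]}, (eta2, beta2)={params[1]}")
```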
APA, Harvard, Vancouver, ISO and other citation styles
3

Hashemolhosseini, Sepehr. „Algorithmic component and system reliability analysis of truss structures“. Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85710.

Full text of the source
Annotation:
Thesis (MScEng)-- Stellenbosch University, 2013.
ENGLISH ABSTRACT: Most of the parameters involved in the design and analysis of structures are of a stochastic nature. It is, therefore, of paramount importance to be able to perform a fully stochastic analysis of structures, at both component and system level, to take into account the uncertainties involved in structural analysis and design. In practice, on the contrary, the (computerised) analysis of structures is based on a deterministic analysis which fails to address the randomness of design and analysis parameters. This means that an investigation of algorithmic methodologies for component and system reliability analysis can help pave the way towards the implementation of fully stochastic analysis of structures in a computer environment. This study is focused on algorithm development for component and system reliability analysis based on the various proposed methodologies. Truss structures were selected for this purpose due to their simplicity as well as their wide use in industry. Nevertheless, the algorithms developed in this study can be used for other types of structures, such as moment-resisting frames, with some simple modifications. For a component-level reliability analysis of structures, different methods such as First Order Reliability Methods (FORM) and simulation methods are proposed. However, implementation of these methods for statically indeterminate structures is complex due to the implicit relation between the response of the structural system and the load effect. As a result, the algorithm developed for the purpose of component reliability analysis should be based on the concepts of Stochastic Finite Element Methods (SFEM), where a proper link between the finite element analysis of the structure and the reliability analysis methodology is ensured. In this study various algorithms are developed based on the FORM method, Monte Carlo simulation, and the Response Surface Method (RSM). Using the FORM method, two methodologies are considered: one is based on the development of a finite element code where the required alterations are made to the FEM code, and the other is based on the use of a commercial FEM package. Different simulation methods are also implemented: Direct Monte Carlo Simulation (DMCS), Latin Hypercube Sampling Monte Carlo (LHCSMC), and Updated Latin Hypercube Sampling Monte Carlo (ULHCSMC). Moreover, RSM is used together with simulation methods. Throughout the thesis, the efficiency of these methods was investigated. A Fully Stochastic Finite Element Method (FSFEM) with alterations to the finite element code seems the fastest approach, since the linking between the FEM package and the reliability analysis is avoided. Simulation methods can also be effectively used for reliability evaluation, where ULHCSMC seemed to be the most efficient method, followed by LHCSMC and DMCS. The response surface method is the least straightforward method for an algorithmic component reliability analysis; however, it is useful for system reliability evaluation. For a system-level reliability analysis two methods were considered: the β-unzipping method and the branch-and-bound method. The β-unzipping method is based on a level-wise system reliability evaluation where the structure is modelled at different damage levels according to its degree of redundancy. In each level, so-called unzipping intervals are defined for the identification of the critical elements.
The branch-and-bound method is based on the identification of different failure paths of the structure by the expansion of the structural failure tree. The evaluation of the damaged states for both methods is the same. Furthermore, both methods lead to the development of a parallel-series model for the structural system. The only difference between the two methods is in the search approach used for failure sequence identification. It was shown that the β-unzipping method provides a better algorithmic approach for evaluating the system reliability compared to the branch-and-bound method. Nevertheless, the branch-and-bound method is more robust in the identification of structural failure sequences. One possible way to increase the efficiency of the β-unzipping method is to define bigger unzipping intervals in each level, which is possible through a computerised analysis. For such an analysis four major modules are required: a general intact structure module, a damaged structure module, a reliability analysis module, and a system reliability module. In this thesis different computer programs were developed for both system and component reliability analysis based on the developed algorithms. The computer programs are presented in the appendices of the thesis.
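A toy comparison of two of the simulation routes evaluated in the thesis, Direct Monte Carlo and Latin hypercube sampling, on an assumed linear limit state rather than the thesis's truss models:

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(4)

# Assumed component limit state: member fails when resistance R <= load effect S.
def g(R, S):
    return R - S

muR, sdR, muS, sdS = 250.0, 25.0, 180.0, 20.0
n = 20000

# Direct Monte Carlo Simulation (DMCS)
R = rng.normal(muR, sdR, n)
S = rng.normal(muS, sdS, n)
pf_dmcs = np.mean(g(R, S) <= 0)

# Latin hypercube sampling (LHCSMC): stratified uniforms mapped through
# the inverse normal CDF.
u = qmc.LatinHypercube(d=2, seed=4).random(n)
R = norm.ppf(u[:, 0], muR, sdR)
S = norm.ppf(u[:, 1], muS, sdS)
pf_lhs = np.mean(g(R, S) <= 0)

# Exact answer for this linear-normal case, for reference.
pf_exact = norm.cdf(-(muR - muS) / np.hypot(sdR, sdS))
print(f"DMCS: {pf_dmcs:.5f}   LHS: {pf_lhs:.5f}   exact: {pf_exact:.5f}")
```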
APA, Harvard, Vancouver, ISO and other citation styles
4

LEE, SEUNG JOO. „RELIABILITY-BASED OPTIMAL STRUCTURAL AND MECHANICAL DESIGN“. Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184136.

Full text of the source
Annotation:
Structural reliability technology provides analytical tools for the management of uncertainty in all relevant design factors in structural and mechanical systems. Generally, the goal of analysis is to compute probabilities of failure in structural components or systems having single or multiple failure modes. Alternatively, modern optimization methods provide efficient numerical algorithms for locating optima, particularly in large-scale systems having prescribed deterministic constraints. The optimization procedure can accommodate random variables either directly in its objective function or as one of the primary constraints. The combination of elementary optimization and probabilistic design techniques is the subject of this study. Presented herein is a general strategy for optimization when the design factors are random variables and some or all of the constraints are probability statements. A literature review has indicated that optimization technology in a reliability context has not been fully explored for the general case of nonlinear performance functions and nonnormal variates associated with multiple failure modes. This research focuses upon development of the theory to address this general problem. Because the analysis algorithms are complicated, a computer code, program RELOPT, is constructed to automate the analysis. The objective function to be minimized is arbitrary, but would generally be the total expected lifetime cost, including all initial costs as well as all costs associated with failure. Uncertainty is assumed to be possible in all design factors (including the factors to be determined), and they are modeled as random variables. In general, all of the constraints can be probability statements. The generalized reduced gradient (GRG) method was used for the optimization calculations. Options for point probability calculations are first-order reliability analysis using the Rackwitz-Fiessler (R-F) method or advanced reliability analysis using Wu/FPI. For system reliability analysis either the first-order Cornell bounds or the second-order Ditlevsen bounds can be specified. Several examples are presented to illustrate the full range of capabilities of RELOPT. The program is validated by checking against independent and exact solutions. An example is provided which demonstrates that the cost of running RELOPT can become substantial as the size of the problem increases.
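For reference, the first-order Cornell bounds mentioned for system reliability reduce, for a series system, to a one-line computation; the component failure probabilities below are assumed for illustration:

```python
import numpy as np

# Cornell's first-order bounds for a series system: the system failure
# probability lies between max_i p_i (fully dependent components) and
# sum_i p_i (the small-probability upper bound).
p = np.array([1e-3, 5e-4, 2e-3])     # assumed component failure probabilities
lower = p.max()
upper = min(p.sum(), 1.0)
print(f"series-system failure probability in [{lower:.2e}, {upper:.2e}]")
```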
APA, Harvard, Vancouver, ISO and other citation styles
5

Jiang, Yu, and 姜宇. „Reliability-based transit assignment : formulations, solution methods, and network design applications“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/207991.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
6

Kim, Injoong. „Development of a knowledge model for the computer-aided design for reliability of electronic packaging systems“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22708.

Full text of the source
Annotation:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2008.
Committee Co-Chair: Peak, Russell; Committee Co-Chair: Sitaraman, Suresh; Committee Member: Paredis, Christiaan; Committee Member: Pucha, Raghuram; Committee Member: Wong, C.
APA, Harvard, Vancouver, ISO and other citation styles
7

O'Reilly, Małgorzata Marzena. „Necessary conditions for the variant optimal design of linear consecutive systems“. Title page, contents and summary only, 2001. http://web4.library.adelaide.edu.au/theses/09PH/09pho668.pdf.

Full text of the source
Annotation:
"October 2001." Bibliography: leaves 99-103. Establishes several sets of conditioning relating to the variant optimal deign of linear consecutive-k-out-of-n systems and includes a review of existing research in the theory of variant optimal design of linear consecutive-k-out-of-n systems.
APA, Harvard, Vancouver, ISO and other citation styles
8

Malada, Awelani. „Stochastic reliability modelling for complex systems“. Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-10182006-170927.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
9

Torng, Tony Yi. „Reliability analysis of maintained structural system vulnerable to fatigue and fracture“. Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184955.

Full text of the source
Annotation:
Metallic structures dominated by tensile loads are vulnerable to fatigue and fracture. Fatigue is produced by oscillatory loads. Quasi-static brittle or ductile fracture can result from a "large" load in the random sequence. Moreover, a fatigue or fracture failure in a member of a redundant structure produces impulsive redistributed loads on the intact members. These transient loads could produce a sequence of failures resulting in progressive collapse of the system. Fatigue and fracture design factors are subject to considerable uncertainty. Therefore, a probabilistic approach, which includes a system reliability assessment, is appropriate for design purposes. But system reliability can be improved by a maintenance program of periodic inspection with repair and/or replacement of damaged members. However, a maintenance program can be expensive. The ultimate goal of the engineer is to specify a design, inspection, and repair strategy to minimize life cycle costs. The fatigue/fracture reliability and maintainability (FRM) process for a redundant structure can be a complicated random process. The structural model considers series, parallel, and parallel/series systems of elements. Applied to the system are fatigue loads including mean stress, an extreme load, as well as impulsive loads in parallel member systems. The failure modes are fatigue, brittle and ductile fracture. A refined fatigue model is employed which includes both the crack initiation and propagation phases. The FRM process cannot be solved easily using recently developed advanced structural reliability techniques. A "hybrid" simulation method which combines modified importance sampling (MIS) with inflated stress extrapolation (ISE) is proposed. MIS and ISE methods are developed and demonstrated using numerous examples which include series, parallel and series/parallel systems. Not only reasonable estimates of the probability of system failure but also an estimate of the distribution of time to system failure can be obtained. The time to failure distribution can be used to estimate the reliability function, hazard function, conditional reliability given survival at any time, etc. The demonstration cases illustrate how the reliability of a system having given material properties is influenced by the number of series and parallel elements, stress level, mean stress, and various inspection/repair policies.
APA, Harvard, Vancouver, ISO and other citation styles
10

Yim, Ka-wing, and 嚴家榮. „A reliability-based land use and transportation optimization model“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B34618879.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
11

Silva, Odair Jacinto da 1967. „Sensibilidade a variações de perfil operacional de dois modelos de confiabilidade de software baseados em cobertura“. [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259616.

Full text of the source
Annotation:
Advisors: Mario Jino, Adalberto Nobiato Crespo
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Several published studies indicate that the predictive ability of software reliability models that use test coverage information observed during testing is better than the predictive ability of models based on the time domain. They have therefore been proposed by researchers as an alternative to time-domain models. However, to reach a conclusion about the superiority of this class of models it is necessary to evaluate their sensitivity to variations in the operational profile. A desirable quality of software reliability models is that their predictive ability is not affected by variations in the operational profile of a program. This dissertation analyzes, by means of an experiment, the sensitivity of two software reliability models based on code coverage information: the "Binomial Model Based on Coverage" and the "Infinite Failure Model Based on Coverage". The experiment applies the models to failure data observed during the execution of a program under three statistically distinct operational profiles. Additionally, six traditional software reliability models were used to estimate reliability from the same failure data. The models selected were: Musa-Okumoto, Musa Basic, Littlewood-Verrall Linear, Quadratic Littlewood-Verrall, Jelinski-Moranda, and Geometric. The results show that the predictive ability of the "Binomial Model Based on Coverage" and the "Infinite Failure Model Based on Coverage" is not affected by varying the operational profile of the software. The same result was not observed in the software reliability models based on the time domain; that is, changing the operational profile influences the predictive ability of these models. One observed result, for example, is that none of the traditional models could be used to estimate the software reliability from the failure data set generated by one of the operational profiles.
Master's degree
Computer Engineering
Master in Electrical Engineering
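The abstract names Musa-Okumoto among the time-domain models; as a hedged illustration, the following sketch fits its logarithmic Poisson mean value function to synthetic failure times by maximum likelihood (data and starting values are invented):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic cumulative failure times (hours); invented data for illustration.
t = np.array([3., 8., 19., 32., 57., 89., 140., 210., 300., 420., 590., 800.])
T = t[-1]   # observation ends at the last failure, for simplicity

# Musa-Okumoto logarithmic Poisson NHPP:
#   mean value function  mu(t) = ln(lam0 * theta * t + 1) / theta
#   intensity            w(t)  = lam0 / (lam0 * theta * t + 1)
# NHPP log-likelihood: sum_i log w(t_i) - mu(T)
def nll(log_params):
    lam0, theta = np.exp(log_params)        # log scale keeps both positive
    w = lam0 / (lam0 * theta * t + 1.0)
    mu_T = np.log(lam0 * theta * T + 1.0) / theta
    return -(np.sum(np.log(w)) - mu_T)

res = minimize(nll, x0=np.log([0.5, 0.05]), method="Nelder-Mead")
lam0, theta = np.exp(res.x)
print(f"lambda0 = {lam0:.4f} failures/hour, theta = {theta:.4f}")
print("fitted expected failures by T:", np.log(lam0 * theta * T + 1.0) / theta)
```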
APA, Harvard, Vancouver, ISO and other citation styles
12

Blakely, Scott. „Probabilistic Analysis for Reliable Logic Circuits“. PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1860.

Full text of the source
Annotation:
Continued aggressive scaling of electronic technology poses obstacles for maintaining circuit reliability. To this end, analysis of reliability is of increasing importance. Large numbers of inputs and gates, and correlations between failures, render such analysis computationally complex. This paper presents an accurate framework for reliability analysis of logic circuits which inherently handles reconvergent fan-out without additional complexity. Combinational circuits are modeled stochastically as Discrete-Time Markov Chains, where the propagation of node logic levels and error probability distributions through the circuitry is used to determine error probabilities at nodes in the circuit. Model construction is scalable, as it is done on a gate-by-gate basis. The stochastic nature of the model allows various properties of the circuit to be analyzed formally by means of steady-state properties. Formally verifying the properties against the model can circumvent strenuous simulations while exhaustively checking all possible scenarios for the given properties. Small combinational circuits are used to explain model construction, properties are presented for analysis of the system, further example circuits are demonstrated, and the accuracy of the method is verified against an existing simulation method.
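A toy version of the gate-by-gate error-probability propagation idea (gate errors are assumed independent here; the thesis's Markov-chain model additionally handles the correlations created by reconvergent fan-out, which this sketch does not):

```python
# Gate error probability (symmetric output flip); assumed value.
EPS = 0.01

def noisy(p_one):
    """P(output = 1) after the gate's own output may flip with probability EPS."""
    return p_one * (1 - EPS) + (1 - p_one) * EPS

def AND(pa, pb):
    return noisy(pa * pb)

def OR(pa, pb):
    return noisy(pa + pb - pa * pb)

def NOT(pa):
    return noisy(1 - pa)

# Example: f = (a AND b) OR (NOT c), with deterministic inputs a = b = c = 1.
# The ideal output is 1; the deviation below is the accumulated gate error.
p_out = OR(AND(1.0, 1.0), NOT(1.0))
print("P(output = 1):", p_out)      # ~0.98 with EPS = 0.01
```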
APA, Harvard, Vancouver, ISO and other citation styles
13

Nowicki, David R. „Reliability allocation and apportionment : addressing redundancy and life-cycle cost /“. Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-08042009-040416/.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
14

Seward, Lori Welte. „A multiple stress, multiple component stress screening cost model“. Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41578.

Full text of the source
Annotation:

Environmental stress screening is used to enhance reliability by decreasing the number of failures experienced during customer use. It is suggested that added benefit can be gained by applying multiple stresses rather than a single stress, as is done presently. A further modification is to apply the stress at the assembly level, accelerating different types of components at the same time. Different component acceleration effects must then be considered.

The problem these modifications present is how to choose the appropriate stress levels and the time duration of the stress screen. A cost model is developed that trades off the cost of a field failure with the cost of applying a multiple stress, multiple component stress screen. The objective is to minimize this cost function in order to find an economical stress regimen.

The problem is solved using the software package GINO. The interesting result is that if a stress is used at all during the stress screen, the maximum amount of stress is the economic choice. Either the cost of stressing is low enough to justify the use of a stress, in which case the maximum amount of stress is used, or the cost is too high and the stress is not used at all.


Master of Science
APA, Harvard, Vancouver, ISO and other citation styles
15

Park, Byung-Goo. „A system-level testability allocation model /“. free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842588.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
16

Eusuff, M. Muzaffar. „Optimisation of an operating policy for variable speed pumps using genetic algorithms“. Title page, contents and abstract only, 1995. http://web4.library.adelaide.edu.au/theses/09ENS/09ense91.pdf.

Full text of the source
Annotation:
Undertaken in conjunction with JUMP (Joint Universities Masters Programme in Hydrology and Water Resources). Bibliography: leaves 76-83. Establishes a methodology using genetic algorithms to find the optimum operating policy for variable speed pumps in a water supply network over a period of 24 hours.
APA, Harvard, Vancouver, ISO and other citation styles
17

Cross, Richard J. (Richard John). „Inference and Updating of Probabilistic Structural Life Prediction Models“. Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19828.

Full text of the source
Annotation:
Aerospace design requirements mandate acceptable levels of structural failure risk. Probabilistic fatigue models enable estimation of the likelihood of fatigue failure. A key step in the development of these models is the accurate inference of the probability distributions for dominant parameters. Since data sets for these inferences are of limited size, the fatigue model parameter distributions are themselves uncertain. A hierarchical Bayesian approach is adopted to account for the uncertainties in both the parameters and their distribution. Variables specifying the distribution of the fatigue model parameters are cast as hyperparameters whose uncertainty is modeled with a hyperprior distribution. Bayes' rule is used to determine the posterior hyperparameter distribution, given available data, thus specifying the probabilistic model. The Bayesian formulation provides an additional advantage by allowing the posterior distribution to be updated as new data becomes available through inspections. By updating the probabilistic model, uncertainty in the hyperparameters can be reduced, and the appropriate level of conservatism can be achieved. In this work, techniques for Bayesian inference and updating of probabilistic fatigue models for metallic components are developed. Both safe-life and damage-tolerant methods are considered. Uncertainty in damage rates, crack growth behavior, damage, and initial flaws are quantified. Efficient computational techniques are developed to perform the inference and updating analyses. The developed capabilities are demonstrated through a series of case studies.
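As a much-simplified illustration of the updating step (the thesis uses hierarchical priors over fatigue-model parameters; here a conjugate Gamma-Poisson pair stands in so that Bayes' rule gives a closed-form update):

```python
from scipy import stats

# Assumed Gamma prior on a damage-initiation rate (events per 1000 flight
# hours): mean alpha/beta = 0.5, deliberately diffuse.
alpha, beta = 2.0, 4.0

# Inspection data (invented): 3 cracks found over 10 units of exposure.
events, exposure = 3, 10.0

# Conjugate update: posterior is Gamma(alpha + events, beta + exposure).
alpha, beta = alpha + events, beta + exposure
post = stats.gamma(alpha, scale=1.0 / beta)
print(f"posterior mean rate: {post.mean():.3f}")
print(f"95% credible interval: {post.ppf([0.025, 0.975]).round(3)}")

# As new inspections arrive, the same rule is applied again; this is the
# "updating as new data becomes available" step of the abstract.
alpha, beta = alpha + 1, beta + 5.0     # one more event in 5 more units
print(f"updated mean rate: {alpha / beta:.3f}")
```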
APA, Harvard, Vancouver, ISO and other citation styles
18

Chen, Qing. „Reliability-based structural design: a case of aircraft floor grid layout optimization“. Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39630.

Full text of the source
Annotation:
In this thesis, several Reliability-Based Design Optimization (RBDO) methods and algorithms for airplane floor grid layout optimization are proposed. A general RBDO process is proposed and validated with an example. Copulas, as a mathematical device for modeling correlations between random variables, are introduced to discover these correlations and to produce correlated data samples for Monte Carlo simulations. Based on the Hasofer-Lind (HL) method, a correlated HL method is proposed to evaluate the reliability index under correlation. As an alternative way of computing the reliability index, it is interpreted as the solution of an optimization problem, and two nonlinear programming algorithms are introduced to evaluate it. To evaluate the reliability index by Monte Carlo simulation in a time-efficient way, a kriging-based surrogate model is proposed and compared to the original model in terms of computing time. Since the reliability constraint obtained by MCS in RBDO models does not have an analytical form, a kriging-based response surface is built. Kriging-based response surface models are usually piecewise functions that do not have a uniform expression over the design space; however, most optimization algorithms require a uniform expression for constraints. To solve this problem, a heuristic gradient-based direct search algorithm is proposed. These methods and algorithms, together with the general RBDO process, are applied to the layout optimization of an aircraft floor grid structural design.
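The interpretation of the reliability index as an optimization problem, mentioned in the abstract, can be sketched in a few lines; the limit state and distributions below are assumed for illustration, and scipy's SLSQP stands in for the thesis's two nonlinear programming algorithms:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed limit state: g = R - S with R ~ N(200, 20) and S ~ N(150, 10).
mu = np.array([200.0, 150.0])
sd = np.array([20.0, 10.0])

def g(u):
    x = mu + sd * u          # map standard normal space -> physical space
    return x[0] - x[1]       # failure surface R - S = 0

# Reliability index as an optimization problem: beta = min ||u|| s.t. g(u) = 0.
res = minimize(lambda u: u @ u, x0=np.array([-0.5, 0.5]),
               constraints=[{"type": "eq", "fun": g}], method="SLSQP")
beta = float(np.sqrt(res.fun))
print("design point:", mu + sd * res.x)
print("beta:", beta)                        # analytic: 50/sqrt(500) ~ 2.236
print("first-order Pf:", norm.cdf(-beta))   # ~0.013
```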
APA, Harvard, Vancouver, ISO and other citation styles
19

Wang, Ni. „Statistical Learning in Logistics and Manufacturing Systems“. Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11457.

Full text of the source
Annotation:
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels. In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing top-down multi-level spatial models which improve the estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics. In Chapter III, we propose methods for modeling system service reliability in a supply chain which may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models entertaining various scenarios of DC location designs and DC capacity constraints. Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the properties of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products. The key contributions in Chapter V are to propose a new method to analyze heavily censored accelerated life testing data, and to study its asymptotic properties. This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of the asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods in the situation of mis-specified regression models.
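For the power law process studied in Chapter IV, the standard time-truncated maximum likelihood estimates are closed-form; a minimal sketch with invented failure times for a single repairable system:

```python
import numpy as np

# Invented failure times of one repairable system, observed until T
# (time truncation); intensity u(t) = lam * beta * t**(beta - 1).
t = np.array([4., 10., 21., 33., 48., 70., 99., 130., 170., 215.])
T = 250.0

n = len(t)
beta_hat = n / np.sum(np.log(T / t))        # closed-form MLE, time-truncated case
lam_hat = n / T ** beta_hat
print(f"beta = {beta_hat:.3f}  (<1: reliability growth, >1: deterioration)")
print(f"lambda = {lam_hat:.5f}; expected failures by T: "
      f"{lam_hat * T ** beta_hat:.1f}")      # reproduces n by construction
```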
APA, Harvard, Vancouver, ISO and other citation styles
20

Sorooshian, Soroosh, and Vijai Kumar Gupta. „Improving the Reliability of Compartmental Models: Case of Conceptual Hydrologic Rainfall-Runoff Models“. Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1986. http://hdl.handle.net/10150/614011.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
21

Clavareau, Julien. „Modélisation des stratégies de remplacement de composant et de systèmes soumis à l'obsolescence technologique“. Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210482.

Full text of the source
Annotation:
This work falls within the field of dependability studies.

Dependability has progressively become an integral part of the performance assessment of industrial systems. Equipment failures, the production losses that follow from them, and the maintenance of installations have a major economic impact on companies. It is therefore essential for a manager to be able to estimate the operating costs of the company coherently and realistically, taking into account in particular the reliability characteristics of the equipment in use, as well as the costs induced, among other things, by system downtime and by the restoration of component performance after failure.

The work carried out in this doctorate concentrates on one particular aspect of dependability, namely equipment replacement policies based on the reliability of the systems the equipment makes up. The research starts from the following observation: although the literature devoted to replacement policies is abundant, it generally rests on the implicit assumption that the new equipment under consideration has the same characteristics and performance as the components being replaced had when new.

Technological reality is often quite different, whatever the industrial discipline considered. New equipment regularly becomes available on the market; it performs the same functions as the older components a company uses, but offers better performance, for example in terms of failure rate, energy consumption, or "intelligence" (the ability to transmit information about its state of deterioration).

Moreover, it can become increasingly difficult to obtain components of the older generation to replace those that have been taken out of service. This situation is generally called technological obsolescence.

The aim of this work is to extend and deepen, within the framework of dependability, the lines of thought opened by the articles presented in the state-of-the-art section, in order to define and model replacement strategies for equipment subject to technological obsolescence. The goal is to propose a model, bridging the more economic and the more reliability-oriented approaches, that makes it possible to define and evaluate the effectiveness, in the broad sense, of the various strategies for replacing obsolete units. The effectiveness of a strategy can be measured against several, sometimes contradictory, criteria. Among these are, obviously, the average total cost generated by the replacement strategy, the only criterion considered in the articles cited in Chapter 2, but also the way these costs are spread over time throughout the strategy, the variability of these costs around their mean, and the satisfaction of certain conditions, such as having replaced all the units of one generation with units of another generation before a given date, or respecting certain constraints on replacement times.

To evaluate the various strategies, the first step is to define a realistic model of the performance of the units considered, and in particular of their failure probability law. Given the direct link between the failure probability of a piece of equipment and the maintenance policy applied to it, notably the frequency of preventive maintenance, its effect, the effect of repairs after failure, and the criteria for replacing the equipment, a complete model must include a mathematical description of the effects of the interventions performed on the equipment. We will see that the desire to describe these effects correctly led us to propose an extension of the effective-age models usually used in the literature.

Once the internal model of the units is defined, we develop the replacement model for obsolete equipment proper.

Building on the notion of the K strategy proposed in previous work, we show how to adapt this strategy to a model in which intervention times are not negligible and the number of maintenance crews is limited. We also show how to take into account, within this K strategy, both the constraints of managing a budget, which in general requires spreading costs over time, and the desire to move from one generation of units to the next within a limited time, two conditions that may be contradictory.

Another problem encountered when dealing with technological obsolescence is the obsolescence model to adopt. How the risk of obsolescence is managed depends strongly on how technologies are expected to evolve, and in particular on the pace of that evolution. Depending on whether the probable time of appearance of a new generation is shorter or longer than the lifetime of the components, the solutions envisaged will differ, as illustrated in two specific numerical applications.

Chapter 12 shows how to approach the problem when the time interval between successive generations is much shorter than the lifetime of the equipment, and Chapter 13 how to treat it when the interval between two generations is of the same order of magnitude as the lifetime of the equipment considered.

The text is structured as follows: after a first part setting out the context of this work, the second part describes the internal model of the units as used in the various applications. The third part describes the replacement strategies and the various applications treated. The final part concludes with some comments on the results obtained and a discussion of research perspectives in the field.
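A minimal sketch of the effective-age repair models the abstract refers to (a Kijima Type I variant with an assumed Weibull baseline; the thesis proposes an extension of this family, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def next_failure(v, eta=100.0, beta=2.0):
    """Sample operating time to the next failure given virtual age v.

    Uses inverse-transform sampling of the conditional Weibull survival:
    S(x | v) = exp((v/eta)**beta - ((v + x)/eta)**beta).
    """
    u = rng.random()
    return (v ** beta - eta ** beta * np.log(u)) ** (1.0 / beta) - v

q = 0.5          # repair effectiveness: 0 = as good as new, 1 = as bad as old
v, t, history = 0.0, 0.0, []
for _ in range(8):
    x = next_failure(v)      # operating time until the next failure
    t += x
    history.append(round(t, 1))
    v = v + q * x            # Kijima I: virtual age grows by q * sojourn time
print("failure calendar times:", history)
```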


Doctorate in Engineering Sciences
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO and other citation styles
22

Brunelle, Russell Dedric. „Customer-centered reliability measures for flexible multistate reliability models /“. Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/10691.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
23

Zhu, Wenjin. „Maintenance of monitored systems with multiple deterioration mechanisms in dynamic environments : application to wind turbines“. Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0005/document.

Full text of the source
Annotation:
The thesis contributes to stochastic maintenance modeling of single- or multi-component deteriorating systems with several failure modes evolving in a dynamic environment. On the one hand, failure process modeling is addressed; on the other hand, the thesis proposes maintenance decision rules taking into account the available on-line monitoring information (system state, deterioration level, environmental conditions, ...) and develops mathematical models to measure the performance of these decision rules. In the framework of single-component systems, the proposed deterioration and failure models take into account several deterioration causes (shocks and wear) and also the impact of environmental conditions on the deterioration. For multi-component systems, competing risk models are considered, and the dependencies and the impact of the environmental conditions are also studied. The proposed maintenance models are adapted to the deterioration models and make it possible to consider different deterioration causes and to analyze the impact of monitoring on the performance of the maintenance policies. For each case, the interest and applicability of the models are analyzed through the example of wind turbine and wind farm maintenance.
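A hedged sketch of the kind of deterioration model described above, combining gradual wear with a stream of shocks (all rates and the failure threshold are assumed; the thesis's models are richer and include environmental dependence):

```python
import numpy as np

rng = np.random.default_rng(11)

dt, horizon, threshold = 0.1, 200.0, 30.0
wear_shape, wear_scale = 0.2, 1.0      # gamma-process wear per unit time
shock_rate, shock_mean = 0.02, 4.0     # shock arrivals per unit time, mean damage

def lifetime():
    """Simulate one unit: gamma wear plus Poisson shocks, fail at threshold."""
    level, t = 0.0, 0.0
    while t < horizon:
        level += rng.gamma(wear_shape * dt, wear_scale)   # gradual wear increment
        if rng.random() < shock_rate * dt:                # shock arrival
            level += rng.exponential(shock_mean)          # shock damage
        t += dt
        if level >= threshold:
            return t
    return np.inf   # survived the observation horizon

samples = np.array([lifetime() for _ in range(1000)])
failed = np.isfinite(samples)
print(f"P(failure before {horizon}): {failed.mean():.3f}")
print(f"mean time to failure (failed units): {samples[failed].mean():.1f}")
```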
APA, Harvard, Vancouver, ISO and other citation styles
24

Mattos, Carlos Eduardo Lourenço. „Metodologia de ensaio de fluência em cabos de transmissão de energia elétrica“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1360.

Full text of the source
Annotation:
Overhead transmission line construction projects have a great environmental impact and require large financial investments. This research aims, through laboratory tests, to provide a systematic tool for improving the method of determining the creep of power conductors and OPGW (Optical Ground Wire) cables that has been used in Brazil for the last thirty years, and to analyze the effects on the construction of overhead transmission lines. It also aims to provide greater reliability to the electric power transmission system, since the design, construction, operation and maintenance of transmission lines depend on the mechanical performance parameters of the conductors. In conclusion, the proposed test methodology provides more reliable final results than the current standardized procedure used in Brazil, and its use in overhead power transmission line projects can reduce construction costs, increase the ampacity of existing lines, and reduce the risks to which people are subject when exposed to the electric and electromagnetic fields generated by transmission lines.
APA, Harvard, Vancouver, ISO and other citation styles
25

Rodriguez, obando Diego Jair. „From Deterioration Modeling to Remaining Useful Life Control : a comprehensive framework for post-prognosis decision-making applied to friction drive systems“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT086/document.

Full text of the source
Annotation:
Remaining Useful Lifetime (RUL) can be simply defined as a prediction of the remaining time that a system is able to perform its intended function, from the current time to the final failure. This predicted time mostly depends on the state of deterioration of the system components and their expected future operating conditions. Thus, RUL prediction is an uncertain process, and its control is not a trivial task. In general, the purpose of predicting the RUL is to influence decision-making for the system. In this dissertation, a comprehensive framework for controlling the RUL is presented. Model uncertainties as well as system disturbances are considered in the proposed framework. Issues such as uncertainty treatment and the inclusion of RUL objectives in the control strategy are studied, from modeling to a final global control architecture. It is shown that the RUL can be predicted from a suitable estimation of the deterioration and from hypotheses on the future operating conditions. Friction drive systems are used to illustrate the usefulness of the aforementioned global architecture. For this kind of system, friction is the source of motion and at the same time the source of deterioration. This double characteristic of friction is a motivation for controlling the deterioration of the system automatically, by keeping a trade-off between motion requirements and desired RUL values. In this thesis, a new control-oriented model for friction drive systems, which includes a dynamical model of the deterioration, is proposed. The amount of deterioration is considered a function of the energy dissipated at the contact surface during mechanical power transmission. An approach is proposed to estimate the current deterioration condition of a friction drive system. The approach is based on an Extended Kalman Filter (EKF) which uses an augmented model including the mechanical dynamics and the deterioration dynamics. At every time instant, the EKF also provides intervals which include the actual deterioration value with a given probability. A new architecture for controlling the RUL is proposed, which includes: a deterioration condition monitoring system (for instance the proposed EKF), a system operating condition estimator, a RUL controller, and a RUL actuation principle. The operating condition estimator is based on the assumption that it is possible to quantify certain characteristics of the motion requirements, for instance the duty cycle of the motor torques. The RUL controller uses a cost function that weights the motion requirements and the desired RUL values to modify a varying-parameter filter, used here as the RUL actuating principle. The RUL actuating principle is based on a modification of the demanded torques coming from a possible motion control system. Preliminary results show that it is possible to control the RUL according to the proposed theoretical framework.
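A minimal, assumption-laden sketch of the central idea: augment a toy mechanical state with a deterioration state driven by dissipated friction energy, and estimate both with an Extended Kalman Filter (dynamics, noise levels, and parameter values below are invented, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(5)

dt, steps = 0.01, 2000
q_wear = 1e-4            # assumed deterioration per unit dissipated energy

def f(x, u):
    """State transition for x = [speed, deterioration] (toy friction drive)."""
    speed, d = x
    friction = 0.5 * speed                           # friction force (toy law)
    new_speed = speed + dt * (u - friction)
    new_d = d + q_wear * friction * speed * dt       # energy dissipated at contact
    return np.array([new_speed, new_d])

def F_jac(x, u):
    """Jacobian of f, used for the EKF linearization."""
    speed, _ = x
    return np.array([[1 - 0.5 * dt, 0.0],
                     [q_wear * dt * speed, 1.0]])    # d(dissipation)/d(speed)

H = np.array([[1.0, 0.0]])    # only the speed is measured
Q = np.diag([1e-6, 1e-10])    # process noise covariance (assumed)
R = np.array([[1e-3]])        # measurement noise covariance (assumed)

x_true = np.array([1.0, 0.0])
x_est, P = np.array([1.0, 0.0]), np.eye(2) * 1e-2
for k in range(steps):
    u = 1.0 + 0.5 * np.sin(0.01 * k)                 # demanded torque profile
    x_true = f(x_true, u) + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    # EKF predict
    Fk = F_jac(x_est, u)
    x_est, P = f(x_est, u), Fk @ P @ Fk.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"true deterioration:      {x_true[1]:.5f}")
print(f"estimated deterioration: {x_est[1]:.5f} +/- {2 * np.sqrt(P[1, 1]):.5f}")
```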
26

Lin, Daming. „Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint /“. Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13999618.

27

林達明 and Daming Lin. „Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

28

Gong, Zitong. „Calibration of expensive computer models using engineering reliability methods“. Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028587/.

Annotation:
The prediction ability of complex computer models (also known as simulators) relies on how well they are calibrated to experimental data. History Matching (HM) is a form of model calibration for computationally expensive models. HM sequentially cuts down the input space to find the fitting input domain that provides a reasonable match between model output and experimental data. A considerable number of simulator runs are required for typical model calibration. Hence, HM involves Bayesian emulation to reduce the cost of running the original model. Despite this, the generation of samples from the reduced domain at every iteration has remained an open and complex problem: current research has shown that the fitting input domain can be disconnected, with nontrivial topology, or be orders of magnitude smaller than the original input space. Analogous to a failure set in the context of engineering reliability analysis, this work proposes to use Subset Simulation - a widely used technique in engineering reliability computations and rare event simulation - to generate samples on the reduced input domain. Unlike Direct Monte Carlo, Subset Simulation progressively decomposes a rare event, which has a very small probability of occurrence, into sequential less rare nested events. The original Subset Simulation uses a Modified Metropolis algorithm to generate the conditional samples that belong to intermediate less rare events. This work also considers different Markov Chain Monte Carlo algorithms and compares their performance in the context of expensive model calibration. Numerical examples are provided to show the potential of the embedded Subset Simulation sampling schemes for HM. The 'climb-cruise engine matching' illustrates that the proposed HM using Subset Simulation can be applied to realistic engineering problems. Considering further improvements of the proposed method, a classification method is used to ensure that the emulation on each disconnected region gets updated. Uncertainty quantification of expert-estimated correlation matrices helps to identify a mathematically valid (positive semi-definite) correlation matrix between resulting inputs and observations. Further research is required to explicitly address the model discrepancy as well as to take the correlation between model outputs into account.
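The following sketch illustrates the embedding of Subset Simulation into History Matching on a toy two-input "simulator": intermediate implausibility thresholds are lowered level by level, and conditional samples are generated by a simple Metropolis move. The simulator, cutoff and tuning constants are invented for illustration.

```python
# Sketch: Subset Simulation used to sample a small non-implausible region,
# as in History Matching. Toy simulator and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(1)
z_obs, sigma = 1.3, 0.05

def implausibility(x):
    y = np.sin(3 * x[0]) * np.cos(2 * x[1]) + x[0] * x[1]   # toy "simulator"
    return abs(y - z_obs) / sigma

N, p0, target = 1000, 0.1, 3.0         # samples/level, level probability, HM cutoff
X = rng.uniform(-2, 2, size=(N, 2))    # level 0: direct Monte Carlo
I = np.apply_along_axis(implausibility, 1, X)

while True:
    c = max(np.quantile(I, p0), target)    # intermediate (less rare) threshold
    seeds = X[I <= c]
    if c <= target:
        break
    # regenerate N conditional samples from the seeds with a random-walk
    # Metropolis move that stays inside the event {I <= c}
    X_new, I_new = [], []
    for _ in range(N):
        x = seeds[rng.integers(len(seeds))]
        x_prop = x + rng.normal(0, 0.2, size=2)
        if np.all(np.abs(x_prop) <= 2) and implausibility(x_prop) <= c:
            x = x_prop
        X_new.append(x)
        I_new.append(implausibility(x))
    X, I = np.array(X_new), np.array(I_new)

print(f"{len(seeds)} non-implausible samples, e.g. {seeds[0]}")
```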
29

Higgins, Andrew. „Optimisation of train schedules to minimise transit time and maximise reliability“. Thesis, Queensland University of Technology, 1996. https://eprints.qut.edu.au/107082/1/T%28S%29%20118%20Optimisation%20of%20train%20schedules%20to%20minimise%20transit%20time%20and%20maximise%20reliability.pdf.

Annotation:
The overall performance of a train schedule is measured in terms of the mean and variance of train lateness (reliability) as well as the travel time of individual trains. Reliability is a critical performance measure for urban and non-urban rail passenger services as well as for rail freight transportation. This thesis deals with the scheduling of trains on single-track corridors so as to minimise train trip times and maximise the reliability of train arrival times. A method to quantify the risk of delay associated with each train, each track segment, and the schedule as a whole is put forward and used as the reliability component of the constrained optimisation model. Beyond schedule optimisation, the risk-of-delay model can be applied to the prioritisation of investment projects designed to improve timetable reliability: track, terminal and rolling stock projects can be compared in terms of their likely impact on timetable reliability. The thesis also describes a number of solution techniques for the scheduling problem. New lower bounds for the branch and bound technique are presented which allow reasonably sized train scheduling problems to be solved efficiently. Three heuristic techniques are applied to the train scheduling problem: a local search heuristic with an improved neighbourhood structure, genetic algorithms with an efficient string representation, and tabu search. The heuristics and the branch and bound technique are compared in terms of the number of calculations and solution quality. The branch and bound technique with the best lower bound outperformed genetic algorithms and tabu search for all except the largest problems.
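As a loose sketch of the kind of objective involved, the snippet below combines total trip time with an expected-lateness (risk of delay) term per train and per track segment. All figures are invented, and the real model is a constrained optimisation over meet/pass decisions, not this toy sum.

```python
# Sketch: weighted schedule cost = trip time + expected lateness (risk of delay).
segments = ["A-B", "B-C", "C-D"]
trains = {
    # train: (trip time per segment in min, prob. of delay per segment, mean delay min)
    "freight_1":   ([30, 42, 35], [0.10, 0.25, 0.15], 12.0),
    "passenger_1": ([18, 25, 20], [0.05, 0.10, 0.08], 6.0),
}

def schedule_cost(w_time=1.0, w_risk=2.0):
    cost = 0.0
    for name, (times, p_delay, mean_delay) in trains.items():
        trip = sum(times)
        risk = sum(p * mean_delay for p in p_delay)   # expected lateness, minutes
        cost += w_time * trip + w_risk * risk
    return cost

print(f"weighted schedule cost: {schedule_cost():.1f}")
```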
30

Vu, Hai Canh. „Stratégies de regroupement pour la maintenance des systèmes à composants multiples avec structure complexe“. Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0008/document.

Annotation:
In recent decades, with the strong development of the global economy and of new technologies, the structure of industrial systems has become more and more complex. It can be a combination of elementary structures such as series structures, parallel structures, series-parallel structures, etc. In the literature, most work has focused on developing grouping strategies for series structures. This assumption is often restrictive and limits the application of these strategies in practice. The main objective of this thesis is therefore to develop dynamic and stationary grouping strategies for the maintenance of multi-component systems with complex structure. These strategies are developed for age-based models with non-negligible maintenance durations. In addition, dynamic conditions (dynamic contexts) such as maintenance opportunities and changes of the structure are considered and integrated into the maintenance scheduling. Our studies show the necessity, and the difficulty, of taking the complex structure into account in maintenance decisions. Numerical examples confirm the advantages of our maintenance strategies in comparison with other existing strategies in the literature.
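A minimal sketch of the economic-dependence argument behind grouping: executing two maintenance actions together saves a shared set-up cost, at the price of shifting each action away from its individually optimal date. The quadratic penalty and all numbers are illustrative assumptions, not the thesis model.

```python
# Sketch: group two maintenance actions if the shared set-up saving S
# exceeds the penalty for moving each action off its optimal date.
S = 500.0                                      # set-up cost saved by grouping
actions = {"comp_A": 100.0, "comp_B": 130.0}   # individually optimal dates (days)
penalty_rate = 0.8                             # cost per (day shift)^2

def grouping_gain(t_group):
    penalty = sum(penalty_rate * (t_group - t_opt) ** 2
                  for t_opt in actions.values())
    return S - penalty

# for a quadratic penalty the best common date is the mean of optimal dates
t_star = sum(actions.values()) / len(actions)
gain = grouping_gain(t_star)
print(f"group at day {t_star:.0f}: gain = {gain:.0f} -> "
      f"{'group' if gain > 0 else 'keep separate'}")
```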
31

Fagundo, Arturo. „Hierarchical modeling for reliability analysis using Markov models“. Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35417.

32

Reller, Susan R. „Reliability diagnostic strategies for series systems under imperfect testing“. Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45926.

Annotation:
An expected cost model was developed for failure detection in series systems under imperfect testing. Type I and type II error probabilities are included and single-pass sample paths are required. The model accounts for the expected costs of testing components, false positive termination, and no-defect-found outcomes. Based on the model, a heuristic was developed to construct the cost minimizing testing sequence. The heuristic algorithm utilizes elementary arithmetic computations and has been successfully applied to a variety of problems. Furthermore, the algorithm appears to be globally convergent. Choice of a starting solution affects the rate of convergence, and guidelines for selecting the starting solution were discussed. Implementation of the heuristic was illustrated by numerical example.
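A rough sketch of the kind of expected-cost comparison described: for a given single-pass test sequence over a series system, it accumulates testing costs, false-positive termination costs and a no-defect-found cost under type I error a and type II error b. The cost structure is a simplification and all numbers are invented.

```python
# Sketch: expected cost of a single-pass diagnostic sequence under
# imperfect testing; brute-force search over orderings for illustration.
from itertools import permutations

comps = {  # name: (prob. this component is the failed one, test cost)
    "c1": (0.5, 10.0), "c2": (0.3, 4.0), "c3": (0.2, 1.0),
}
a, b = 0.05, 0.10          # P(flag a good comp), P(miss a bad comp)
C_FALSE_STOP, C_NO_FIND = 200.0, 300.0

def expected_cost(order):
    cost, p_reach = 0.0, 1.0   # p_reach: prob. the sequence is still running
    for name in order:
        p_bad, c_test = comps[name]
        cost += p_reach * c_test
        # sequence stops here if the test flags this component,
        # either correctly (p_bad*(1-b)) or falsely ((1-p_bad)*a)
        cost += p_reach * (1 - p_bad) * a * C_FALSE_STOP
        p_reach *= 1 - (p_bad * (1 - b) + (1 - p_bad) * a)
    return cost + p_reach * C_NO_FIND   # ran through all tests, nothing flagged

best = min(permutations(comps), key=expected_cost)
print(best, round(expected_cost(best), 2))
```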
Master of Science
33

Er, Kim Hua. „Analysis of the reliability disparity and reliability growth analysis of a combat system using AMSAA extended reliability growth models“. Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/1788.

Annotation:
The first part of this thesis aims to identify and analyze which aspects of the MIL-HDBK-217 prediction model cause the large variation between predicted and field reliability. The key finding of the literature research is that the main reason for the inaccuracy is that the constant failure rate assumption used in MIL-HDBK-217 is usually not applicable. Secondly, even when the constant failure rate assumption is applicable, the disparity may still exist in the presence of design- and quality-related problems in new systems. A possible solution is to apply reliability growth testing (RGT) to new systems during the development phase in an attempt to remove these design deficiencies so that the system's reliability grows and approaches the predicted value. In view of the importance of RGT in minimizing the disparity, this thesis provides a detailed application of the AMSAA Extended Reliability Growth Models to the reliability growth analysis of a combat system. It shows how program managers can analyze test data using commercial software to estimate the demonstrated system reliability and the increase in reliability due to delayed fixes.
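For reference, the standard Crow/AMSAA computation that such growth analyses build on fits the expected cumulative failures N(t) = lambda * t**beta to test data. A minimal sketch with invented failure times, using the time-truncated maximum likelihood formulas:

```python
# Sketch: Crow/AMSAA reliability growth, time-truncated MLE.
import math

t_fail = [4.2, 10.1, 19.8, 46.0, 75.3, 120.5, 200.0, 310.2]  # cumulative test hours
T = 400.0                                                     # total test time

n = len(t_fail)
beta = n / sum(math.log(T / t) for t in t_fail)  # shape MLE
lam = n / T ** beta                              # scale MLE
mtbf_now = 1.0 / (lam * beta * T ** (beta - 1))  # instantaneous MTBF at time T

print(f"beta = {beta:.2f} (< 1 indicates reliability growth)")
print(f"demonstrated instantaneous MTBF at {T:.0f} h: {mtbf_now:.1f} h")
```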
34

Ko, Chun-Hung. „Systems reliability analysis of bridge superstructures“. Thesis, Queensland University of Technology, 1995.

35

Dermentzoudis, Marinos. „Establishment of models and data tracking for small UAV reliability“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FDermentzoudis.pdf.

Annotation:
Thesis (M.S. in Operations Research and M.S. in Systems Engineering)--Naval Postgraduate School, June 2004.
Thesis advisor(s): David Olwell. Includes bibliographical references (p. 217-224). Also available online.
36

Li, Heping. „Condition-based maintenance policies for multi-component systems considering stochastic dependences“. Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0030/document.

Annotation:
Nowadays, industrial systems contain numerous components and have become more and more complex, both in their logical structure and in the various dependences (economic, stochastic and structural) between components. These dependences have an impact on maintenance optimization as well as on reliability analysis. Condition-based maintenance, which manages maintenance activities based on information collected through monitoring, has gained a lot of attention in recent years, but stochastic dependences are rarely used in the decision-making process. This thesis is therefore devoted to proposing condition-based maintenance policies which take advantage of both economic and stochastic dependences for multi-component systems. In terms of economic dependence, the proposed maintenance policies are designed to be maximally effective in providing opportunities for maintenance grouping. A decision rule is established that permits maintenance grouping with different inspection periods. Stochastic dependence due to a common degradation part is modelled by Lévy and nested Lévy copulas. Condition-based maintenance policies with a non-periodic inspection scheme are proposed to make use of stochastic dependence. Our studies show the necessity of taking both economic and stochastic dependences into account in maintenance decisions. Numerical experiments confirm the advantages of our maintenance policies compared with other existing policies in the literature.
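The "common degradation part" idea can be caricatured as below: two components whose gamma-process degradation shares a common increment, with periodic inspection and an opportunistic rule that maintains a nearly failed component whenever the other one triggers an intervention. Thresholds and rates are illustrative, and the thesis models the dependence with Lévy copulas rather than this additive shortcut.

```python
# Sketch: dependent degradation via a shared gamma increment, plus an
# opportunistic grouping rule at inspection times. Perfect maintenance.
import numpy as np

rng = np.random.default_rng(2)
dt, L, L_opp = 1.0, 10.0, 8.0   # inspection period, failure / opportunistic thresholds

def gamma_inc(rate):            # gamma-process increment over one period
    return rng.gamma(shape=rate * dt, scale=1.0)

x = np.zeros(2)                 # degradation levels of the two components
for t in range(1, 200):
    common = gamma_inc(0.05)                      # shared degradation source
    x += common + np.array([gamma_inc(0.03), gamma_inc(0.04)])
    due = x >= L
    if due.any():                                 # intervention opportunity
        due |= x >= L_opp                         # group nearly-failed components
        x[due] = 0.0                              # maintain (as good as new)
        print(f"t={t}: maintained components {np.where(due)[0].tolist()}")
```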
37

Brophy, Dennis J. O'Leary James D. „Software evaluation for developing software reliability engineering and metrics models /“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA361889.

Annotation:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, March 1999.
"March 1999". Thesis advisor(s): Norman F. Schneidewind, Douglas Brinkley. Includes bibliographical references (p. 59-60). Also available online.
38

Brophy, Dennis J., und James D. O'Leary. „Software evaluation for developing software reliability engineering and metrics models“. Thesis, Monterey, California ; Naval Postgraduate School, 1999. http://hdl.handle.net/10945/13581.

Annotation:
Today's software is extremely complex, often comprising millions of lines of instructions, and programs are expected to operate smoothly on a wide variety of platforms. There are continuous attempts to assess the reliability of a software package and to predict the reliability of software under development. The quantitative aspects of these assessments deal with evaluating, characterizing and predicting how well software will operate. Experience has shown that it is extremely difficult to build something as large and complex as modern software and predict with any accuracy how it is going to behave in the field. This thesis proposes an integrated system to predict software reliability for mission-critical systems. This is accomplished by developing a flexible DBMS to track failures and by integrating the DBMS with statistical analysis programs and software reliability prediction tools that perform the calculations and display trend analyses. The thesis further proposes a software metrics model for fault prediction based on metrics extracted from the code.
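As an indication of the kind of computation a reliability prediction tool performs, the sketch below fits the classical Goel-Okumoto NHPP model m(t) = a * (1 - exp(-b*t)) to an invented failure log by maximum likelihood. The thesis does not prescribe this particular model; it is shown as a representative example.

```python
# Sketch: Goel-Okumoto software reliability growth fit on failure times.
import math
from scipy.optimize import brentq

t_fail = [25, 55, 70, 95, 140, 180, 260, 340, 450, 600]  # exec. hours at each failure
T, n, S = 700.0, len(t_fail), sum(t_fail)                 # observation window

def mle_eq(b):   # d(loglik)/db = 0 after substituting the MLE for a
    return n / b - S - n * T * math.exp(-b * T) / (1 - math.exp(-b * T))

b = brentq(mle_eq, 1e-6, 1.0)           # root-find the shape parameter
a = n / (1 - math.exp(-b * T))          # expected total number of faults
print(f"a = {a:.1f} total expected faults, b = {b:.5f}")
print(f"expected residual faults: {a - n:.1f}")
```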
39

Nguyen, Kim Anh. „Développement de stratégies de maintenance prévisionnelle de systèmes multi-composants avec structure complexe“. Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0027/document.

Annotation:
Today, industrial systems are becoming more and more complex. The complexity is due partly to system structures that cannot be reduced to the classic reliability structures (series, parallel, series-parallel, etc.), and partly to the presence of components with gradual degradation phenomena that can be monitored. This leads to the main purpose of this thesis: the development of predictive maintenance strategies for complex multi-component systems. The proposed policies provide maintenance grouping strategies that take advantage of the economic dependence between components. The predictive reliability of components, together with importance measures that take into account the structure of the system and the economic dependence, is used to construct the grouping decision rules. Moreover, a joint decision rule for maintenance and spare parts provisioning is also studied. All of the conducted studies show the value of considering the predictive reliability of components, the economic dependences and the complex structure of the system in maintenance and spare parts provisioning decisions. The advantage of the developed strategies is confirmed by comparison with other existing strategies in the literature.
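A minimal sketch of a structure-aware importance measure of the kind such grouping rules can build on: the Birnbaum importance I_B(i) = R(p | p_i = 1) - R(p | p_i = 0), computed for a small series-parallel system. The structure function and numbers are illustrative.

```python
# Sketch: Birnbaum importance for component 0 in series with parallel pair (1, 2).
def sys_reliability(p):
    # R = p0 * (1 - (1 - p1)(1 - p2))
    return p[0] * (1 - (1 - p[1]) * (1 - p[2]))

p = [0.95, 0.80, 0.70]
for i in range(3):
    hi, lo = p.copy(), p.copy()
    hi[i], lo[i] = 1.0, 0.0
    print(f"component {i}: I_B = {sys_reliability(hi) - sys_reliability(lo):.3f}")
```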
40

Ly, Cuong. „Reliability study of the Callide power station electrical system“. Thesis, Queensland University of Technology, 1997. https://eprints.qut.edu.au/36023/1/36023_Ly_1997.pdf.

Annotation:
The reliable operation of the electrical system at Callide Power Station is of extreme importance to the normal everyday running of the Station. The electrical system configuration and hardware are inherently very reliable. However, failure of components such as circuit breakers, switchboards and transformers would directly or indirectly affect the Station's capability to generate at full load capacity and hence maximise revenue. This study has applied the principles of reliability and used a reliability software package to analyse the electrical system at Callide Power Station. The study analyses other possible system configurations that could increase the reliability of the Station, and identifies priority maintenance for load points displaying high reliability indices. An analysis was also done of the impact of unusual system configurations such as Boiler Feed Pump motor start-ups. Using the results of the study, an appropriate level of maintenance was suggested for the current Callide electrical system configuration, and recommendations were made on the replacement of some 415 V circuit breaker tripping toggles.
41

Manning, Charles Roger 1956. „Infiltration parameters for mathematical models of furrow irrigation“. Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/278286.

Annotation:
The effort to improve furrow irrigation design and management by use of mathematical models is hampered by the difficulty of obtaining infiltration parameters that adequately describe the infiltration process in furrows. This difficulty is related to the effect on infiltration of the variability of wetted width of a furrow with depth. Detailed field measurements of twelve furrow irrigations were used to develop infiltration parameters based on three different assumptions regarding the variation of wetted width with depth. These infiltration parameters were used as input into a mathematical model of furrow irrigation, SRFR. Comparison of measured advance times, water surface elevations and volume of water infiltrated with these values computed by SRFR indicates that SRFR gives consistent results based on the input parameters.
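Furrow models of the SRFR family typically describe infiltration with Kostiakov-type functions; as an assumption-laden sketch, the snippet below fits Z = k * tau**a to invented intake data by log-linear least squares, one way infiltration parameters could be derived from field measurements.

```python
# Sketch: fit Kostiakov infiltration parameters k and a from intake data.
import numpy as np

tau = np.array([10, 30, 60, 120, 240], dtype=float)  # opportunity time, min
Z = np.array([0.011, 0.021, 0.032, 0.048, 0.071])    # infiltrated volume, m^3/m

# log-linearise: ln Z = ln k + a * ln tau
A = np.vstack([np.ones_like(tau), np.log(tau)]).T
(ln_k, a), *_ = np.linalg.lstsq(A, np.log(Z), rcond=None)
k = np.exp(ln_k)
print(f"k = {k:.4f}, a = {a:.3f}")
print(f"predicted Z at 180 min: {k * 180 ** a:.3f} m^3/m")
```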
42

Terciyanli, Erman. „Alternative Mathematical Models For Revenue Management Problems“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610711/index.pdf.

Annotation:
In this study, the seat inventory control problem is considered for airline networks from the perspective of a risk-averse decision maker. In the revenue management literature, it is generally assumed that decision makers are risk-neutral, so the expected revenue is maximized without taking variability or any other risk factor into account. A risk-sensitive approach, on the other hand, provides more information about the behavior of the revenue. The risk measure considered in this study is the probability that revenue falls below a predetermined threshold level. In the risk-neutral case, while the expected revenue is maximized, the probability of revenue falling below such a level might be high. We propose three mathematical models to incorporate this risk measure. The optimal allocations obtained by these models are evaluated numerically in simulation studies for example problems, using expected revenue, coefficient of variation, load factor and probability of poor performance as performance measures. The simulation results show that the proposed models can decrease the variability of the revenue considerably; in other words, the probability of revenue falling below the threshold level is decreased. Moreover, expected revenue can even be increased in some scenarios by using the proposed models. The approach is especially proposed for small-scale airlines, because the risk of obtaining revenue below the threshold level is greater for them than for large-scale airlines.
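The risk measure can be made concrete with a small simulation: estimate P(revenue < threshold) for a two-fare, single-leg problem under different protection levels. Demand distributions, fares and the threshold are invented; the thesis works with network-level mathematical programming models rather than this toy.

```python
# Sketch: Monte Carlo estimate of the threshold-risk measure for seat allocation.
import numpy as np

rng = np.random.default_rng(3)
C, f_hi, f_lo, thresh = 100, 300.0, 120.0, 14000.0

def simulate(protect_hi, n=20000):
    d_lo = rng.poisson(90, n)                 # low-fare demand books first
    d_hi = rng.poisson(40, n)
    sold_lo = np.minimum(d_lo, C - protect_hi)
    sold_hi = np.minimum(d_hi, C - sold_lo)
    rev = f_lo * sold_lo + f_hi * sold_hi
    return rev.mean(), (rev < thresh).mean()  # expected revenue, threshold risk

for y in (20, 35, 50):                        # protection levels for the high fare
    m, risk = simulate(y)
    print(f"protect {y:2d} seats: E[rev] = {m:8.0f}, "
          f"P(rev < {thresh:.0f}) = {risk:.3f}")
```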
43

Khosravi-Dehkordi, Iman. „Load flow feasibility under extreme contingencies“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100252.

Annotation:
This thesis examines the problem of load flow feasibility, in other words, the conditions under which a power network characterized by the load flow equations has a steady-state solution. In this thesis, we are particularly interested in load flow feasibility in the presence of extreme contingencies such as the outage of several transmission lines.
Denoting the load flow equations by z = f(x) where z is the vector of specified injections (the real and reactive bus demands, the specified real power bus generations and the specified bus voltage levels), the question addressed is whether there exists a real solution x to z = f(x), where x is the vector of unknown bus voltage magnitudes at load buses and unknown bus voltage phase angles at all buses but the reference bus. Attacking this problem via conventional load flow algorithms has a major drawback, principally the fact that such algorithms do not converge when the load flow injections z define or are close to defining an infeasible load flow. In such cases, lack of convergence may be due to load flow infeasibility or simply to the ill-conditioning of the load flow Jacobian matrix.
This thesis therefore makes use of the method of supporting hyperplanes to characterize the load flow feasibility region, defined as the set the injections z for which there exists a real solution x to the load flow equations. Supporting hyperplanes allow us to calculate the so-called load flow feasibility margin, which determines whether a given injection is feasible or not as well as measuring how close the injection is to the feasibility boundary. This requires solving a generalized eigenvalue problem and a corresponding optimization for the closest feasible boundary point to the given injection.
The effect of extreme network contingencies on the feasibility of a given injection is examined for two main cases: those contingencies that affect the feasibility region such as line outages and those that change the given injection itself such as an increase in VAR demand or the loss of a generator. The results show that the hyperplane method is a powerful tool for analyzing the effect of extreme contingencies on the feasibility of a power network.
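A two-bus toy version of the feasibility question makes the idea tangible: the transfer P = (V1*V2/X)*sin(theta) has a real solution theta only while |P| stays below V1*V2/X, and the distance to that boundary plays the role of the feasibility margin. Values are illustrative.

```python
# Sketch: feasibility and margin for a lossless two-bus power transfer.
import math

V1, V2, X = 1.0, 0.95, 0.4
P_max = V1 * V2 / X                      # feasibility boundary for this injection

for P_demand in (1.5, 2.0, 2.5):
    if abs(P_demand) <= P_max:
        theta = math.asin(P_demand / P_max)
        print(f"P = {P_demand}: feasible, theta = {math.degrees(theta):5.1f} deg, "
              f"margin = {P_max - abs(P_demand):.2f} pu")
    else:
        print(f"P = {P_demand}: infeasible (boundary at {P_max:.2f} pu)")
```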
44

Fathi, Aghdam Faranak. „Nanowire Growth Process Modeling and Reliability Models for Nanodevices“. Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612823.

Annotation:
Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier to its rapid growth is our inability to produce nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10%), which makes fabrication of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics covering both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of the resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model, considering the shadowing effect and the shared substrate diffusion area, to determine the optimal pitch that ensures minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays. This work is an early attempt to use a physical-statistical modeling approach to study selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to the unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches for nano-scale devices. One of the most important nano-devices is the transistor, which is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier to reliable circuit design at the nano-scale. Because of the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high-permittivity (high-k) dielectrics as an alternative to the widely used SiO₂ in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bilayer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate.
A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
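A sketch of the self-exciting mechanism in its simplest (time-only, unmarked) form, simulated with Ogata's thinning algorithm: each accepted defect raises the conditional intensity, which then decays. The thesis model additionally marks space and defect size; the rate constants here are invented.

```python
# Sketch: Hawkes-type defect arrivals via Ogata's thinning algorithm.
import numpy as np

rng = np.random.default_rng(4)
mu, alpha, beta, T = 0.05, 0.8, 1.5, 50.0   # base rate, jump, decay, horizon

def intensity(t, events):
    return mu + alpha * sum(np.exp(-beta * (t - s)) for s in events if s < t)

events, t = [], 0.0
while t < T:
    lam_bar = intensity(t, events) + alpha  # conservative upper bound on intensity
    t += rng.exponential(1.0 / lam_bar)     # candidate arrival time
    if t < T and rng.uniform() < intensity(t, events) / lam_bar:
        events.append(t)                    # accepted defect time

print(f"{len(events)} simulated defect arrivals over [0, {T:.0f}]")
```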
45

Cho, Young Jin. „Effects of decomposition level on the intrarater reliability of multiattribute alternative evaluation“. Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171537/.

46

Lewis, Doris Trinh 1957. „A ROBUST METHOD FOR USING MAINTAINABILITY COST MODELS (RELIABILITY, OPTIMIZATION, SENSITIVITY, UNCERTAINTY)“. Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/292098.

47

Garbuno, Inigo A. „Stochastic methods for emulation, calibration and reliability analysis of engineering models“. Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3026757/.

Annotation:
This dissertation examines the use of non-parametric Bayesian methods and advanced Monte Carlo algorithms for the emulation and reliability analysis of complex engineering computations. Firstly, the problem lies in reducing the computational cost of such models and in generating posterior samples for the Gaussian process (GP) hyperparameters. In a GP, as the flexibility of the mechanism used to induce correlations among training points increases, the number of hyperparameters increases as well. This leads to multimodal posterior distributions. Typical variants of MCMC samplers are not designed to overcome multimodality. Maximum posterior estimates of hyperparameters, on the other hand, do not guarantee a global optimiser. This presents a challenge when emulating expensive simulators in light of small data. Thus, new MCMC algorithms are presented which allow the use of fully Bayesian emulators by sampling from their respective multimodal posteriors. Secondly, in order for these complex models to be reliable, they need to be robustly calibrated to experimental data. History matching solves the calibration problem by discarding regions of the input parameter space, which allows one to determine which configurations are likely to replicate the observed data. In particular, the GP surrogate model's probabilistic statements are exploited, and the data assimilation process is improved. Thirdly, as sampling-based methods are increasingly used in engineering, variants of these sampling algorithms for other engineering tasks, namely reliability-based methods, are studied. Several new algorithms to solve these three fundamental problems are proposed, developed and tested on both illustrative examples and industrial-scale models.
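A toy illustration of why multimodality matters and what a tempered sampler does about it: two random-walk Metropolis chains at temperatures 1 and 5 explore a bimodal "posterior" and occasionally propose state swaps. The target density is illustrative, not a real GP hyperparameter posterior, and the thesis develops its own MCMC variants.

```python
# Sketch: two-temperature parallel tempering on a bimodal toy target.
import numpy as np

rng = np.random.default_rng(5)
log_post = lambda x: np.logaddexp(-0.5 * ((x + 3) / 0.5) ** 2,
                                  -0.5 * ((x - 3) / 0.5) ** 2)   # two modes
temps = [1.0, 5.0]
x = np.array([-3.0, 3.0])
samples = []
for it in range(20000):
    for i, Tmp in enumerate(temps):          # within-chain random-walk Metropolis
        prop = x[i] + rng.normal(0, 0.6)
        if np.log(rng.uniform()) < (log_post(prop) - log_post(x[i])) / Tmp:
            x[i] = prop
    if it % 10 == 0:                          # swap proposal between chains
        dl = (log_post(x[0]) - log_post(x[1])) * (1 / temps[1] - 1 / temps[0])
        if np.log(rng.uniform()) < dl:
            x = x[::-1].copy()
    samples.append(x[0])

s = np.array(samples[5000:])
print(f"mass near -3: {(s < 0).mean():.2f} (~0.5 means both modes are visited)")
```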
48

Viriththamulla, Gamage Indrajith. „Mathematical programming models and heuristics for standard modular design problem“. Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185431.

Annotation:
In this dissertation, we investigate the problem of designing standard modules which can be used in a wide variety of products. The basic problem is: given a set of parts and products, and a list of the number of each part required in each product, how do we group parts into modules and modules into products to minimize costs and satisfy requirements? The design of computers, electronic equipment, tool kits, emergency vehicles and standard military groupings are among the potential applications for this work. Several mathematical programming models for modular design are developed, and the advantages and weaknesses of each model are analyzed. We demonstrate the difficulties, due to nonconvexity, of applying global optimization methods to solve these mathematical models. We develop necessary and sufficient conditions for satisfying requirements exactly, and use these results in several heuristic methods. Three heuristic structures are considered: decomposition, sequential local search, and approximation. The decomposition approach extends previous work on modular design problems. Sequential local search uses a standard local solution routine (MINOS) and sequentially adds cuts on the objective function to the original model. The approximation approach uses a "least squares" relaxation to find upper and lower bounds on the objective of the optimal solution. Computational results are presented for all three approaches and suggest that the approximation approach performs better than the others with respect to speed and solution quality. We conclude the dissertation with a stochastic variation of the modular design problem and a solution heuristic. We discuss an approximation model for the continuous formulation, which is a geometric programming model, and develop a heuristic to solve this problem using monotonicity properties of the functions. Computational results are given and compared with an upper bound.
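The "least squares" relaxation can be caricatured as a matrix factorization: given the parts-by-products requirements matrix N, seek nonnegative A (parts per module) and B (modules per product) with N approximately equal to AB. Alternating least squares with clipping, as below, is a crude stand-in for the dissertation's heuristics; the data are invented.

```python
# Sketch: nonnegative factorization N ~ A @ B as a modular-design relaxation.
import numpy as np

rng = np.random.default_rng(6)
N = np.array([[2, 2, 0, 0],      # part-by-product requirements (illustrative)
              [1, 1, 0, 0],
              [0, 0, 3, 3],
              [1, 1, 3, 3]], dtype=float)
m = 2                            # number of modules to design

A = rng.uniform(0, 1, (N.shape[0], m))
for _ in range(200):
    B = np.clip(np.linalg.lstsq(A, N, rcond=None)[0], 0, None)
    A = np.clip(np.linalg.lstsq(B.T, N.T, rcond=None)[0].T, 0, None)

print("A (module contents):\n", A.round(2))
print("B (modules per product):\n", B.round(2))
print("residual:", round(np.linalg.norm(N - A @ B), 3))
```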
49

Akileh, Aiman R. „Elastic-plastic analysis of axisymmetrically loaded isotropic circular and annular plates undergoing large deflections“. PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3559.

Annotation:
The concept of load analogy is used in the elastic and elastic-plastic analysis of isotropic circular and annular plates undergoing moderately large deflection. The effects of the nonlinear terms of lateral displacement and of the plastic strains are treated as additional fictitious lateral loads, edge moments, and in-plane forces acting on the plate. The solution of an elastic or elastic-plastic Von Karman type plate is hence reduced to a set of two equivalent elastic plate problems with small displacements, namely a plane problem in elasticity and a linear elastic plate bending problem. The finite element method is employed to solve the plane stress problem. The large deflection solutions are then obtained by utilizing the solutions of the linear bending problems through an iterative numerical scheme. The flow theory of plasticity, incorporating a Von Mises layer yield criterion and the Prandtl-Reuss associated flow rule for strain hardening materials, is employed in this approach.
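The load-analogy iteration can be sketched in one dimension: move the nonlinear term to the right-hand side as a fictitious load and repeat a linear small-deflection solve until the deflection converges. The beam-like equation w'' = -q + c*w**3 below is only an analogue of the plate equations, with invented constants.

```python
# Sketch: fixed-point "fictitious load" iteration on a 1-D analogue,
# w'' = -q + c*w**3 on [0, 1] with w(0) = w(1) = 0.
import numpy as np

n, q, c = 101, 1.0, 50.0
h = 1.0 / (n - 1)
# tridiagonal second-difference operator with Dirichlet boundary conditions
D = (np.diag(np.full(n - 2, -2.0)) + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2

w = np.zeros(n - 2)
for it in range(100):
    rhs = -q + c * w**3            # nonlinear term moved over as fictitious load
    w_new = np.linalg.solve(D, rhs)  # linear small-deflection solve
    if np.max(np.abs(w_new - w)) < 1e-10:
        break
    w = w_new
print(f"converged in {it} iterations, midspan deflection = {w[(n - 2) // 2]:.4f}")
```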
50

Bolduc, Laura Christine. „Probabilistic models and reliability analysis of scour depth around bridge piers“. [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1764.
