Dissertations on the topic "Physical-statistical model of reliability"


Consult the top 40 dissertations for your research on the topic "Physical-statistical model of reliability".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Kim, Seong W. "Bayesian model selection using intrinsic priors for commonly used models in reliability and survival analysis /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9841159.

2

Brownstein, Naomi. "Estimation and the Stress-Strength Model." Honors in the Major Thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1160.

Bachelors
Sciences
Mathematics
3

Bakouros, Y. L. "Offshore pipeline reliability prediction : An assessment of the breakdown characteristics of offshore pipelines and the development of a statistical technique to improve their reliability prediction with particular reference." Thesis, University of Bradford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233657.

4

Moberg, Pontus, and Filip Svensson. "Cost Optimisation through Statistical Quality Control : A case study on the plastic industry." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21922.

Abstract:
Background. Shewhart was the first to describe, in 1924, the possibilities that come with having a statistically robust process. Since his discovery, the importance of a robust process has become more apparent, together with the consequences of an unstable one. A firm with a manufacturing process that is out of statistical control tends to waste money, increase risks, and deliver uncertain quality to its customers. The framework of Statistical Quality Control has been developed since its founding, and today it is a well-established tool used in several industries with successful results. When it was first conceived, complicated calculations had to be performed manually. With digitalisation, the quality tools can be used in real time, providing high-precision information on the quality of the product. Despite this, not all firms or industries have adopted these tools. The costs that occur in relation to quality, either as a consequence of maintaining good quality or arising from poor quality, are called the Cost of Quality. These are often displayed through one of several available cost models. In this thesis, we have created a cost model heavily inspired by the P-A-F model. Several earlier studies have shown noticeable results by using SPC, COQ or a combination of both.

Objectives. The objective of this study is to determine whether cost optimisation can be achieved through SQC implementation. The cost optimisation is a consequence of an unstable process and the new way of thinking that comes with SQC. Further, the study aims to explore the relationship between cost optimisation and SQC, adding a layer of complexity and understanding to the spread of statistical quality tools and their importance for several industries. This contributes to tightening the bonds between production economics, statistical tools and quality management even further.

Methods. This study made use of two closely related methodologies, combining SPC with Cost of Quality. The combination of the two was intended to demonstrate a possible cost reduction through stabilising the process. The cost reduction was displayed using an optimisation model based on the P-A-F model (Prevention, Appraisal, External Failure and Internal Failure), further developed by adding a fifth parameter for optimising materials (OM). Regarding whether the process was in control, we focused on the thickness of the PVC floor: 1008 data points over three weeks were retrieved from the production line, and by analysing these, a conclusion on whether the process was in control could be drawn.

Results. None of the three examined weeks was found to be in statistical control, and therefore neither was the total sample. Under the assumption that the firm achieves 100% statistical control over its production process, a possible cost reduction of 874 416 SEK yearly was found.

Conclusions. This study has shown that by focusing on stabilising the production process and achieving control over quality-related costs, significant yearly savings can be achieved. Furthermore, an annual cost reduction was found by optimising the usage of materials, relocating the assurance of thickness variation from post-production to during production.
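As an illustration of the Shewhart-type monitoring the thesis builds on, the following is a minimal sketch (not the authors' code) of individuals and moving-range control limits; the thickness values below are hypothetical placeholders, not the 1008 measurements analysed in the study.

```python
import numpy as np

# Hypothetical PVC-thickness measurements (mm); the thesis used 1008 points
# collected over three weeks, which are not reproduced here.
x = np.array([2.03, 1.98, 2.01, 2.05, 1.97, 2.02, 2.00, 1.99, 2.04, 2.01])

# Individuals / moving-range (I-MR) Shewhart chart, a common choice when
# items are measured one at a time.
mr = np.abs(np.diff(x))                 # moving ranges of consecutive points
x_bar, mr_bar = x.mean(), mr.mean()
d2 = 1.128                              # control-chart constant for n = 2
sigma_hat = mr_bar / d2                 # estimated short-term standard deviation

ucl_x, lcl_x = x_bar + 3 * sigma_hat, x_bar - 3 * sigma_hat
ucl_mr = 3.267 * mr_bar                 # D4 constant for n = 2

out_of_control = (x > ucl_x) | (x < lcl_x)
print(f"X-chart limits: [{lcl_x:.3f}, {ucl_x:.3f}]  MR UCL: {ucl_mr:.3f}")
print("points outside limits:", np.flatnonzero(out_of_control))
```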
5

Matthews, Robert. "The Reliability and Validity of a Simulated Airway Model that Quantifies Physical Forces Exerted During Endotracheal Intubation in a Clinically Demanding Scenario." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2466.

Abstract:
The main purpose of this research was the development of an experimental model that allows for the assessment of pressure and, thereby, the forces associated with interventions related to airway management. The foundation of this research was to develop, and assess the validity and reliability of, a method of quantifying the force experienced by a patient during airway management. Following IRB approval and the development of a unique simulation model that employs transducers situated in key anatomical locations to determine forces, a multivariate profile analysis with a covariate of experience, using a MANCOVA approach, was conducted. The statistical design consisted of 102 subjects testing the dependent measure of pressure for the following techniques: fiberoptic intubation, the Fastrach™ LMA, the #3 C-Mac video laryngoscope, and the Trachlight®. Independent variables analyzed were practitioner types: emergency medicine physicians, certified registered nurse anesthetists, and anesthesiologists, all tested over five locations: Chicago, Las Vegas, Atlanta, Seattle, and Boston, with a covariate of experience. Analysis demonstrated no difference in force attributed to the location, the airway provider or their interactions. This was contrasted by the finding that 81% of the variance in pressure scores was due to differences in airway techniques. The mannequin was also able to discern a subpopulation within techniques, which lends to its validity. The mannequin performed consistently, producing reproducible findings after setup and dismantling over time and locations. This would seem to form the basis of a valid and reliable tool for this and future research.
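For readers unfamiliar with this kind of variance decomposition, here is a hedged, simplified sketch of how the share of pressure variance attributable to technique could be estimated with an ANCOVA-type model in statsmodels; the thesis itself used a multivariate profile analysis (MANCOVA) with provider and location factors, and all data and column names below are invented.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented long-format data: one mean pressure score per attempt.
df = pd.DataFrame({
    "pressure":  [12.1, 11.8, 13.0, 25.4, 24.9, 26.1,
                  18.2, 17.5, 19.0, 30.1, 29.8, 31.2],
    "technique": ["fiberoptic"] * 3 + ["fastrach"] * 3 +
                 ["cmac"] * 3 + ["trachlight"] * 3,
    "experience": [5, 12, 3, 20, 8, 6, 15, 9, 11, 4, 18, 7],  # years (covariate)
})

# ANCOVA-style model; provider type and location are omitted here for brevity.
fit = ols("pressure ~ C(technique) + experience", data=df).fit()
anova = sm.stats.anova_lm(fit, typ=2)
print(anova)

# Partial eta-squared: share of variance attributable to technique.
ss_tech = anova.loc["C(technique)", "sum_sq"]
ss_resid = anova.loc["Residual", "sum_sq"]
print("partial eta^2 (technique):", round(ss_tech / (ss_tech + ss_resid), 3))
```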
6

林達明 and Daming Lin. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

7

Carstens, Wiehahn Alwyn. "Regression analysis of caterpillar 793D haul truck engine failure data and through-life diagnostic information using the proportional hazards model." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20333.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Physical Asset Management (PAM) is becoming a greater concern for companies in industry today. The widely accepted British Standards Institution specification for optimized management of physical assets and infrastructure is PAS55. According to PAS55, PAM is the "systematic and co-ordinated activities and practices through which an organization optimally manages its physical assets, and their associated performance, risks and expenditures over their life cycle for the purpose of achieving its organizational strategic plan". One key performance area of PAM is Asset Care Plans (ACP). These plans are maintenance strategies which improve or ensure acceptable asset reliability and performance during its useful life. Maintenance strategies such as Condition Based Maintenance (CBM) act upon Condition Monitoring (CM) data, disregarding the previous failure histories of an asset. Other maintenance strategies, such as Usage Based Maintenance (UBM), are based on previous failure histories and do not consider CM data. Regression models make use of both CM data and previous failure histories to develop a model which represents the underlying failure behaviour of the asset under study. These models can be of high value in ACP development due to the fact that Residual Useful Life (RUL) can be estimated and/or the long-term life cycle cost can be optimized. The objective of this thesis was to model historical failure data and CM data well enough so that the RUL or the optimal preventive maintenance instant can be estimated. These estimates were used in decision models to develop maintenance schedules, i.e. ACPs. Several regression models were evaluated to determine the most suitable model to achieve the objectives of this thesis. The model found to be most suitable for this research project was the Proportional Hazards Model (PHM). A comprehensive investigation of the PHM was undertaken, focussing on the mathematics and the practical implementation thereof. Data obtained from the South African mining industry was modelled with the Weibull PHM. It was found that the developed model produced estimates which were accurate representations of reality. These findings provide an exciting basis for the development of future Weibull PHMs that could result in huge maintenance cost savings and reduced failure occurrences.
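A minimal sketch of the Weibull proportional hazards model the thesis applies, assuming the condition-monitoring covariates are held constant over the prediction horizon; the shape, scale and covariate weights below are hypothetical, not the values estimated from the Caterpillar 793D data.

```python
import numpy as np

def weibull_ph_hazard(t, z, beta, eta, gamma):
    """Weibull PHM hazard: h(t, z) = (beta/eta) * (t/eta)**(beta - 1) * exp(gamma . z)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0) * np.exp(np.dot(gamma, z))

def weibull_ph_reliability(t, z, beta, eta, gamma):
    """R(t, z) = exp(-(t/eta)**beta * exp(gamma . z)), valid when the
    condition-monitoring covariates z stay constant over [0, t]."""
    return np.exp(-((t / eta) ** beta) * np.exp(np.dot(gamma, z)))

# Hypothetical values: shape, scale (hours) and weights for two condition
# indicators (e.g. iron ppm and soot index in an oil sample).
beta, eta = 2.1, 12_000.0
gamma = np.array([0.015, 0.40])
z = np.array([25.0, 1.2])            # latest condition-monitoring readings

for t in (2_000.0, 4_000.0, 8_000.0):
    print(f"t = {t:>6.0f} h  hazard = {weibull_ph_hazard(t, z, beta, eta, gamma):.2e}"
          f"  reliability = {weibull_ph_reliability(t, z, beta, eta, gamma):.3f}")
```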
8

Lin, Daming. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13999618.

9

Freeman, Laura J. "Statistical Methods for Reliability Data from Designed Experiments." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/37729.

Abstract:
Product reliability is an important characteristic for all manufacturers, engineers and consumers. Industrial statisticians have been planning experiments for years to improve product quality and reliability. However, experts in the field of reliability rarely have expertise in design of experiments (DOE) and in the implications that experimental protocols have for data analysis. Additionally, statisticians who focus on DOE rarely work with reliability data. As a result, analysis methods for lifetime data from experimental designs that are more complex than a completely randomized design are extremely limited. This dissertation provides two new analysis methods for reliability data from life tests. We focus on data from a sub-sampling experimental design. The new analysis methods are illustrated on a popular reliability data set which contains sub-sampling. Monte Carlo simulation studies evaluate the capabilities of the new modeling methods and highlight the principles of experimental design in a reliability context. The dissertation provides multiple methods for statistical inference for the new analysis methods. Finally, implications for the reliability field are discussed, especially for future applications of the new analysis methods.
Ph. D.
10

Jiang, Siyuan. "Mixed Weibull distributions in reliability engineering: Statistical models for the lifetime of units with multiple modes of failure." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185481.

Abstract:
The finite mixed Weibull distribution is an appropriate distribution for modeling the lifetime of units having more than one possible failure cause. Due to the lack of a systematic statistical procedure for fitting the distribution to a data set, it has not been widely used in lifetime data analyses. Many areas of this subject have been studied in this research; the findings and contributions follow. Through a change of variable, the 5 parameters in a two-Weibull mixture can be reduced to 3. A parameter vector (p₁, η, β) defines a family of two-Weibull mixtures which have common characteristics. Numerous probability plots are investigated on Weibull probability paper (WPP). For a given p₁, the η-β plane is partitioned into seven regions, labeled A through F and S. Region S represents the two-Weibull mixtures whose cdf curves are very close to a straight line. Regions A through F represent six typical shapes of the cdf curves on WPP, respectively. The two-Weibull mixtures in one region have similar characteristics. Three important features of the two-Weibull mixture with well-separated subpopulations are proved. Two existing methods for the graphical estimation of the parameters are discussed, and one is recommended over the other. The EM algorithm is successfully applied to solve the MLE for mixed Weibull distributions when m, the number of subpopulations in a mixture, is known. The algorithms for complete, censored, grouped and suspended samples with non-postmortem and postmortem failures are developed accordingly. The developed algorithms are powerful and efficient, and they are insensitive to the initial guesses. Extensive Monte Carlo simulations are performed. The distributions of the MLE of the parameters and of the reliability of a two-Weibull mixture are studied. The MLEs of the parameters are sensitive to the degree of separation of the two subpopulation pdfs, but the MLE of the reliability is not. The generalized likelihood ratio (GLR) test is used to determine m. Under H₀: m = 1 and H₁: m = m₁ > 1, the GLR ζ is independent of the parameters in the distribution of H₀. The distributions of ζ or −2 ln(ζ) with n = 50, 100 and 150 are obtained through Monte Carlo simulations. Compared with the chi-square distribution, they fall between χ²(4) and χ²(6), and they are very close to χ²(5). A FORTRAN computer program is developed to conduct simulation of the GLR test for 1 ≤ m₀ < m₁ ≤ 5.
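To make the Weibull-probability-paper discussion concrete, here is a small sketch of a two-component Weibull mixture cdf and its WPP coordinates; the mixing weight and subpopulation parameters are illustrative only, not values from the dissertation.

```python
import numpy as np

def weibull_cdf(t, eta, beta):
    return 1.0 - np.exp(-(t / eta) ** beta)

def mixture_cdf(t, p1, eta1, beta1, eta2, beta2):
    """cdf of a two-component Weibull mixture (p1 = weight of subpopulation 1)."""
    return p1 * weibull_cdf(t, eta1, beta1) + (1.0 - p1) * weibull_cdf(t, eta2, beta2)

# Hypothetical, well-separated subpopulations (e.g. infant-mortality and
# wear-out failure modes).
p1, eta1, beta1 = 0.3, 50.0, 0.8
eta2, beta2 = 800.0, 3.0

t = np.logspace(0, 3.5, 200)
F = mixture_cdf(t, p1, eta1, beta1, eta2, beta2)

# Weibull probability paper coordinates: a single Weibull population plots as a
# straight line, while a mixture bends; the region classification in the
# abstract is based on the shape of this curve.
x_wpp = np.log(t)
y_wpp = np.log(-np.log(1.0 - F))
print(np.column_stack([x_wpp, y_wpp])[:5])
```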
11

Romero, Alfredo A. "Statistical Adequacy and Reliability of Inference in Regression-like Models." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/27782.

Abstract:
Using theoretical relations as a source of econometric specifications might lead a researcher to models that do not adequately capture the statistical regularities in the data and do not faithfully represent the phenomenon of interest. In addition, the researcher is unable to disentangle the statistical and substantive sources of error and is thus incapable of using the statistical evidence to assess whether the theory, and not the statistical model, is wrong. The Probabilistic Reduction Approach puts forward a modeling strategy in which theory can confront data without compromising the credibility of either one of them. This approach explicitly derives testable assumptions that, along with the standardized residuals, help the researcher assess the precision and reliability of statistical models via misspecification testing. It is argued that only when the statistical source of error is ruled out can the researcher reconcile the theory and the data and establish the theoretical and/or external validity of econometric models. Through this approach, we derive the properties of Beta regression-like models, appropriate when the researcher deals with rates and proportions or any other random variable with finite support, and of Lognormal models, appropriate when the researcher deals with nonnegative data and especially important for the estimation of demand elasticities.
Ph. D.
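A hedged sketch of residual-based misspecification testing in this spirit: fit a deliberately misspecified linear model to simulated data and probe the residuals for non-normality, heteroskedasticity and functional-form problems. The data are placeholders, and the tests shown are generic examples rather than the specific battery used in the dissertation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, linear_reset
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + 0.08 * x**2 + rng.normal(0, 1, 200)   # true model is quadratic

# Deliberately misspecified linear model.
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
resid = fit.resid

jb_stat, jb_p, _, _ = jarque_bera(resid)          # normality of residuals
bp_stat, bp_p, _, _ = het_breuschpagan(resid, X)  # homoskedasticity
reset = linear_reset(fit, power=2, use_f=True)    # functional-form (RESET) test

print(f"Jarque-Bera p = {jb_p:.3f}, Breusch-Pagan p = {bp_p:.3f}, "
      f"RESET p = {reset.pvalue:.3f}")
```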
12

Михайлів, Ярослав Андрійович. "Аналіз достовірності вихідної інформації, розрахункових моделей та методів оцінки надійності розподільних мереж". Master's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/35759.

Abstract:
Relevance of the topic. Reliable supply of electricity to consumers is one of the most important problems in the design and operation of power supply systems for cities, industrial enterprises and individual facilities. Requirements for reliable electricity supply are defined by the relevant regulatory documents and must be strictly observed. Forced supply interruptions cause substantial damage both to consumers and to electric network companies. At every level of the power industry, ensuring the reliability of supply to consumers has always been an important scientific and technical problem, to which numerous works of scientists and of research and design organizations (KPI, TSU, MEI, etc.) have been devoted. The main directions of research are: obtaining, systematizing and processing statistical information and assessing its reliability; developing adequate calculation models for evaluating and optimizing the reliability of elements, circuits and the power supply system as a whole; defining effective system reliability indicators, methods for calculating them and making optimal decisions; and economic indicators of the losses incurred by consumers and supply organizations due to undersupplied electricity. A general analysis of the current operation of electrical networks shows that their technical condition is unsatisfactory: equipment ageing is progressing and the reliability indicators of network elements and supply systems are correspondingly decreasing. Moreover, the growing complexity of network structures and the appearance of new network elements require further development of the theory for assessing and improving the reliability of energy supply. Accordingly, a decision-making methodology is needed for the construction, reconstruction and operation stages of distribution electric networks. When assessing the reliability of power supply to consumers, the indicators usually considered are the probability of a supply interruption, the random amount of electricity not supplied to consumers (observed after actual events or predicted by calculation), and the actual or predicted losses of consumers or of the supply organization. The analysis shows that the damage rates of distribution network elements and the losses borne by consumers almost always depend on the specific conditions. Moreover, even for the same operating conditions, the yearly integrated damage rates of network elements can vary severalfold, i.e. by more than 100 percent. This means that, for each system, failure statistics must be analysed to identify the real factors influencing the initial design indicators (line lengths, circuit solutions, number of nodes, etc.). Likewise, when determining losses (actual or predicted), their strong dependence on the season, the time of day and, to a large extent, the duration of the supply interruption must be taken into account. The analysis of statistical information shows significant instability and non-stationarity of the indicators used in building calculation models and assessing the reliability of schemes. A systematic approach to developing more efficient models and methods for assessing the reliability of distribution networks is therefore more relevant than ever.

Purpose and objectives of the study. The purpose of the work is to develop a methodology for assessing the reliability of the initial reliability parameters of distribution networks, which are determined from limited amounts of failure statistics, and the influence of the adopted calculation models on the computed network reliability indicators. The following tasks were addressed: analysis of information on the operation of distribution electric networks; assessment of the reliability of the initial reliability indicators, with identification and consideration of influential factors and of the distribution laws of random variables; selection and comparison of calculation models for assessing the reliability of 6-10 kV distribution networks on the basis of the collected failure statistics; and a sequence for implementing a systematic approach to the analysis of statistical information and the assessment of power supply reliability.

Object of research: distribution networks of urban power supply systems. Subject of research: mathematical models and methods for assessing the reliability of power supply systems, taking into account the operating conditions and the amount of available input information.

Research methods. The research is based on the following methods: nonlinear programming (a discrete coordinate-descent method for decisions on optimizing the network's open points); probability theory (to estimate the effect of input data errors on the accuracy of power-loss calculations and on the expected undersupply of electricity to consumers when computing reliability indicators); mathematical statistics (to build distribution histograms from failure statistics, to determine distribution laws and their parameters, and to describe curves relating the possible calculation error to the volume of statistical data on reliability indicators); and the Monte Carlo method of statistical trials (to determine the impact of input data errors on decisions aimed at minimizing the undersupply of electricity to consumers).

Elements of scientific novelty of the obtained results. 1. A comprehensive approach to estimating input data errors and their impact on the calculation models, namely on their reliability, is implemented. 2. A methodology is proposed for assessing the influence of the reliability of input information on the calculated reliability indicators of distribution networks when measures and methods for refining the estimated indicators are applied. 3. Statistical distributions of failure statistics are smoothed, and calculation models that take individual factors into account are compared. 4. The influence of the adopted calculation models on the results of optimizing distribution network operating modes is evaluated, based on minimizing power losses and taking into account the reliability of supplying consumers with electricity.

Practical value of the results. The master's thesis provides scientific results of value to electric network companies for collecting and systematizing information and processing it to obtain the parameters of calculation models. This significantly increases the reliability of the input information and of the calculation models, and hence of the calculated reliability indicators of distribution networks.
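To illustrate the Monte Carlo error-propagation idea described in the methods, the following is a simplified sketch that propagates uncertainty in the section failure rates of a hypothetical radial feeder into the expected energy not supplied; every number is a placeholder, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical radial 10 kV feeder: three sections in series supplying one load.
length_km   = np.array([1.2, 2.5, 0.8])   # section lengths
lam_per_km  = 0.15                        # failures / km / year (point estimate)
lam_rel_err = 0.30                        # +/-30 % uncertainty from sparse statistics
repair_h    = 4.0                         # mean outage duration, hours
load_mw     = 2.0                         # average load

n_sim = 100_000
# Sample section failure rates with multiplicative error to mimic limited data.
lam_sections = lam_per_km * length_km * (1.0 + lam_rel_err * rng.standard_normal((n_sim, 3)))
lam_sections = np.clip(lam_sections, 0.0, None)

outage_h_per_year = lam_sections.sum(axis=1) * repair_h   # series system
ens_mwh = outage_h_per_year * load_mw                      # energy not supplied

print(f"ENS mean  = {ens_mwh.mean():.2f} MWh/year")
print(f"ENS 5-95% = [{np.percentile(ens_mwh, 5):.2f}, {np.percentile(ens_mwh, 95):.2f}] MWh/year")
```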
13

McKinnon, Mika. "Landslide runout: statistical analysis of physical characteristics and model parameters." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/25835.

Abstract:
Landslides are treacherous, but risk management actions based on improved prediction of landslide runout can reduce casualties and damage. Forty rapid flow-like landslides of variable volume, entrainment, and composition are used to develop a volume-runout regression, which is compared to those established in previous research. The cases are analyzed to identify the most critical characteristics observable prior to failure which differentiate between events of high and low mobility. Mitigating long-runout flow-like landslides requires accurate hazard mapping, a task best accomplished through runout modelling. Current practice requires back-analyzing a set of cases consistent in scope with the target event, then applying the same rheology and parameters to forward modelling. This thesis determines which aspects of scope are most important to prioritize when selecting similar cases, as volume, movement type, morphology, and material have a more substantial influence on mobility than other physical characteristics. Statistical analysis of the performance of frictional and Voellmy rheologies over a range of parameters for the forty case studies provides the expected mean normalized runout and associated standard deviation, and recommendations for parameters to use in initial forward modelling of prospective events.
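A minimal sketch of a volume-runout regression on log-log axes, the kind of relation developed in the thesis; the inventory below is invented and does not reproduce the forty case histories.

```python
import numpy as np
from scipy import stats

# Hypothetical landslide inventory: volume (m^3) and runout length (m).
volume = np.array([1e4, 5e4, 2e5, 8e5, 3e6, 1e7, 5e7])
runout = np.array([450., 800., 1400., 2300., 3900., 6500., 11000.])

# Power-law relation runout = a * V**b  <=>  log10(runout) = log10(a) + b*log10(V).
res = stats.linregress(np.log10(volume), np.log10(runout))
a, b = 10 ** res.intercept, res.slope

print(f"runout ~ {a:.1f} * V^{b:.2f}   (R^2 = {res.rvalue**2:.3f})")
print("predicted runout for V = 1e6 m^3:", a * 1e6 ** b, "m")
```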
14

Мазурок, Наталия Степановна. "Физико-статистический метод определения надежности изделий твердотельной электроники". Doctoral thesis, Киев, 2013. https://ela.kpi.ua/handle/123456789/6457.

15

Bilisoly, Roger. "Estimating mesoscale ocean currents from drifter trajectories using a spatiotemporal Bayesian statistical model /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487949836205445.

16

Suzuki, Satoshi. "The Development of Embedded DRAM Statistical Quality Models at Test and Use Conditions." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/341.

Abstract:
Today, the use of embedded Dynamic Random Access Memory (eDRAM) is increasing in electronics that require large memories, such as gaming consoles and computer network routers. Unlike external DRAMs, eDRAMs are embedded inside ASICs for faster read and write operations. Until recently, eDRAM required high manufacturing costs; recent process technology developments have enabled the manufacturing of eDRAM at competitive cost. Unlike SRAM, eDRAM exhibits retention-time bit fails caused by defects and capacitor leakage current. A retention-time fail causes memory bits to lose their stored values before refresh. In addition, a small portion of the memory bits are known to fail at a random retention time. If all possible retention-time fail bits are detected and replaced at test conditions, which are more stringent than use conditions, there will be no additional fail bits during use. However, detecting all the retention-time fails takes a long time and also rejects bits that would not fail at the use condition. This research seeks to maximize the detection of eDRAM fail bits during test by determining effective test conditions, and to model the failure rate of eDRAM retention time at use conditions.
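As a hedged illustration (not the thesis model): if per-bit retention times are assumed lognormal and leakage follows a generic Arrhenius acceleration between test and use temperature, the expected number of fail bits at a given refresh interval can be sketched as follows. All parameters are hypothetical.

```python
import numpy as np
from scipy import stats

K_B = 8.617e-5                       # Boltzmann constant, eV/K

def accel_factor(ea_ev, t_use_k, t_test_k):
    """Arrhenius acceleration of leakage between use and test temperature."""
    return np.exp(ea_ev / K_B * (1.0 / t_use_k - 1.0 / t_test_k))

# Hypothetical lognormal retention-time distribution at test temperature (85 C).
mu, sigma = np.log(10.0), 1.0        # median 10 s, in log-seconds
n_bits = 32 * 2**20                  # 32 Mbit macro
refresh = 64e-3                      # 64 ms refresh interval

p_fail_test = stats.lognorm.cdf(refresh, s=sigma, scale=np.exp(mu))

# At use temperature (55 C) leakage is slower, so retention times stretch by
# the acceleration factor; equivalently, the refresh interval "shrinks".
af = accel_factor(0.6, 55 + 273.15, 85 + 273.15)
p_fail_use = stats.lognorm.cdf(refresh / af, s=sigma, scale=np.exp(mu))

print(f"expected fail bits at test: {n_bits * p_fail_test:.2f}")
print(f"expected fail bits at use : {n_bits * p_fail_use:.2e}  (AF = {af:.2f})")
```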
17

Rydén, Patrik. "Statistical analysis and simulation methods related to load-sharing models." Doctoral thesis, Umeå universitet, Matematisk statistik, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46772.

Abstract:
We consider the problem of estimating the reliability of bundles constructed of several fibres, given a particular kind of censored data. The bundles consist of several fibres which have their own independent identically distributed failure stresses (i.e. the forces that destroy the fibres). The force applied to a bundle is distributed between the fibres in the bundle, according to a load-sharing model. A bundle with these properties is an example of a load-sharing system. Ropes constructed of twisted threads, composite materials constructed of parallel carbon fibres, and suspension cables constructed of steel wires are all examples of load-sharing systems. In particular, we consider bundles where load-sharing is described by either the Equal load-sharing model or the more general Local load-sharing model. In order to estimate the cumulative distribution function of failure stresses of bundles, we need some observed data. This data is obtained either by testing bundles or by testing individual fibres. In this thesis, we develop several theoretical testing methods for both fibres and bundles, and related methods of statistical inference. Non-parametric and parametric estimators of the cumulative distribution functions of failure stresses of fibres and bundles are obtained from different kinds of observed data. It is proved that most of these estimators are consistent, and that some are strongly consistent estimators. We show that resampling, in this case random sampling with replacement from statistically independent portions of data, can be used to assess the accuracy of these estimators. Several numerical examples illustrate the behavior of the obtained estimators. These examples suggest that the obtained estimators usually perform well when the number of observations is moderate.
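A small Monte Carlo sketch of the equal load-sharing bundle described here, using Daniels' classical rule that a bundle of fibres with ordered strengths X(1) ≤ ... ≤ X(n) carries at most max over k of (n − k + 1)·X(k); the Weibull fibre-strength parameters are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)

def els_bundle_strength(fibre_strengths):
    """Total load a bundle can carry under equal load-sharing (Daniels' rule):
    with ordered strengths X(1) <= ... <= X(n), the surviving fibres share the
    load equally, so the bundle strength is max_k (n - k + 1) * X(k)."""
    x = np.sort(fibre_strengths)
    n = x.size
    return np.max((n - np.arange(n)) * x)   # multipliers n, n-1, ..., 1

# Hypothetical Weibull fibre strengths (shape 5, scale 1).
n_fibres, n_bundles = 50, 20_000
strengths = rng.weibull(5.0, size=(n_bundles, n_fibres))

bundle = np.array([els_bundle_strength(s) for s in strengths])
print("mean bundle strength per fibre:", bundle.mean() / n_fibres)
print("5th percentile per fibre      :", np.percentile(bundle, 5) / n_fibres)
```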
18

Wang, Ni. "Statistical Learning in Logistics and Manufacturing Systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11457.

Abstract:
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels. In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing top-down multi-level spatial models which improve the estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics. In Chapter III, we propose methods for modeling system service reliability in a supply chain, which may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models entertaining various scenarios of DC location designs and DC capacity constraints. Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the properties of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products. The key contributions in Chapter V are to propose a new method for analyzing heavily censored accelerated life testing data, and to study its asymptotic properties. This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods in the situation of mis-specified regression models.
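To make the Chapter IV material concrete, here is a hedged sketch of the power law (Crow-AMSAA) process: simulate failure times with cumulative intensity m(t) = λt^β and recover the parameters with the standard time-truncated maximum likelihood estimators; the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_plp(lam, beta, t_end):
    """Failure times of a power law (Crow-AMSAA) process with cumulative
    intensity m(t) = lam * t**beta, observed on (0, t_end]."""
    n = rng.poisson(lam * t_end ** beta)
    u = np.sort(rng.uniform(size=n))
    # Given N = n, the failure times are order statistics with cdf (t/T)**beta.
    return t_end * u ** (1.0 / beta)

def plp_mle(times, t_end):
    """Time-truncated MLEs: beta_hat = n / sum(ln(T/t_i)), lam_hat = n / T**beta_hat."""
    n = len(times)
    beta_hat = n / np.sum(np.log(t_end / np.asarray(times)))
    lam_hat = n / t_end ** beta_hat
    return lam_hat, beta_hat

t = simulate_plp(lam=0.02, beta=1.3, t_end=1000.0)   # hypothetical parameters
print("observed failures:", len(t))
print("MLEs (lam, beta) :", plp_mle(t, 1000.0))
```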
19

Masiulaitytė, Inga. "Regression and degradation models in reliability theory and survival analysis." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100527_134956-15325.

Abstract:
In this doctoral thesis, redundant systems and degradation models are considered. To ensure high reliability of important elements of a system, stand-by units can be used; these units are switched in and operate in place of the failed main unit. The stand-by units can function in different regimes: "hot", "cold" or "warm" reserve. In the thesis, systems with "warm" stand-by units are analyzed. Hypotheses of smooth commuting are formulated and goodness-of-fit tests for these hypotheses are constructed. Nonparametric and parametric point and interval estimation procedures are given. Modeling and statistical estimation of the reliability of systems from failure time and degradation data are also considered; the degradation models treated are fairly general and describe the failure intensity of elements as a function of both the applied loads and the degradation level, which in turn is modeled using stochastic processes.
20

Cooper, Stephen-Mark. "Statistical methods for resolving issues relevant to test and measurement reliability and validity in variables related to sport performance and physical fitness." Thesis, Cardiff Metropolitan University, 2006. http://hdl.handle.net/10369/7393.

Abstract:
Sport performance is the result of a complex and challenging blend of many factors. Sport coaches and National Governing Bodies (NGBs) of sport have begun to recognise that the most efficacious way of preparing athletes for competition is one based upon proven scientific methods and not upon trial-and-error judgements. Such a response flies in the face of much of the coaching folklore that has been passed down through the generations. Indeed, it is not so long ago that most sport coaches would treat the idea of support from a sport scientist with abject cynicism. Today, however, it is far more commonplace for individual athletes and teams of athletes who aspire towards achieving superior optimal performances, their coaches and NGB advisors, to seek an input from sport scientists so that these athletes can achieve their full potential. The complex blend of component factors necessary for successful sport performance is activity specific, and this has led to the demand for the provision of assessment batteries that have proven specificity within the context of a particular sport. In addition, sport scientists require testing protocols to be duplicated, and comparable data to be obtained, when athletes are tested in different laboratories by different scientists and at different times throughout a preparatory and competitive season (MacDougall and Wenger, 1991). Even when scientists revert to data collection in the field, mainly because of convenience, the information gathered about athletes might be less consistent, but it might well be more specific, and some key decisions can often be made upon it. Clearly, athletes, their coaches, their NGB advisers and the sport scientists that support them each have concerns over performance enhancement and optimisation. Additionally, sport scientists themselves might well have a personal research agenda. It has to be acknowledged, therefore, that all of these stakeholders have an interest in the quality of the data collected and that these data should be relevant, consistent and accurate.
21

Kurtar, Ahmet Kursat. "Uncertainty Models For Vector Based Functional Curves And Assessing The Reliability Of G-band." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12608054/index.pdf.

Abstract:
This study concerns uncertainty modelling for vector features in geographic information systems (GIS). It has two main objectives, both related to the band models used for uncertainty modelling. The first is an assessment of the accuracy of the G-Band model, which is the latest and most complex uncertainty handling model for vector features. Simulations and tests are applied to examine the reliability of the accuracy of G-Band in comparison with Chrisman's epsilon band model, the most frequently used of the band models. The tests are realized with two cases, lines digitized by people and lines created randomly with a Gaussian random number generator algorithm, so the results can be examined in two different ways. The second aim of this thesis is the development of band models for functional special curves. These functional curves are based on mathematical models, and their specifications are defined in the structure of the Geography Markup Language (GML) of the Open GIS Consortium (OGC). They are the arc, arc string, clothoid and cubic spline. Uncertainty for the arc by three coordinates, the arc string and the cubic spline is modelled by G-Band; the arc by center point and the clothoid are modelled by the epsilon band. In this thesis, a commercial GIS API, GeoKIT, is used to create the band geometries of the functional curves, to perform simulations and tests for the comparison, and to present the developed functionality as a desktop application. Band geometries are developed within the structure of the API model, which enables this functionality. Matlab 2006a is used for technical computing, to calculate the multivariate normal cumulative distribution function (mvncdf) used in the analyses and simulations.
22

Zhang, Xi. "Physical and statistical analysis of functional process variables for process control in semiconductor manufacturing." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003176.

23

Tobón, Gómez Catalina. "Three-dimensional statistical shape models for multimodal cardiac image analysis." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/37473.

Abstract:
Cardiovascular diseases (CVDs) are the major cause of death in the Western world. The desire to prevent and treat CVDs has triggered a rapid development of medical imaging systems. As a consequence, the amount of imaging data collected in health care institutions has increased considerably. This fact has raised the need for automated analysis tools to support diagnosis with reliable and reproducible image interpretation. The interpretation task requires to translate raw imaging data into quantitative parameters, which are considered relevant to classify the patient’s cardiac condition. To achieve this task, statistical shape model approaches have found favoritism given the 3D (or 3D+t) nature of cardiovascular imaging datasets. By deforming the statistical shape model to image data from a patient, the heart can be analyzed in a more holistic way. Currently, the field of cardiovascular imaging is constituted by different modalities. Each modality exploits distinct physical phenomena, which allows us to observe the cardiac organ from different angles. Clinicians collect all these pieces of information to form an integrated mental model. The mental model includes anatomical and functional information to display a full picture of the patient’s heart. It is highly desirable to transform this mental model into a computational model able to integrate the information in a comprehensive manner. Generating such a model is not simply a visualization challenge. It requires having a methodology able to extract relevant quantitative parameters by applying the same principle. This assures that the measurements are directly comparable. Such a methodology should be able to: 1) accurately segment the cardiac cavities from multimodal datasets, 2) provide a unified frame of reference to integrate multiple information sources, and 3) aid the classification of a patient’s cardiac condition. This thesis builds upon the idea that statistical shape models, in particular Active Shape Models, are a robust and accurate approach with the potential to incorporate all these requirements. In order to handle multiple image modalities, we separate the statistical shape information from the appearance information. We obtain the statistical shape information from a high resolution modality and include the appearance information by simulating the physics of acquisition of other modalities. The contributions of this thesis can be summarized as: 1) a generic method to automatically construct intensity models for Active Shape Models based on simulating the physics of acquisition of the given imaging modality, 2) the first extension of a Magnetic Resonance Imaging (MRI) simulator tailored to produce realistic cardiac images, and 3) a novel automatic intensity model and reliability training strategy applied to cardiac MRI studies. Each of these contributions represents an article published or submitted to a peer-review archival journal.
24

Yajima, Ayako. "Assessment of Soil Corrosion in Underground Pipelines via Statistical Inference." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1435602696.

25

MOMESSO, ROBERTA G. R. A. P. "Desenvolvimento e validação de um referencial metodológico para avaliação da cultura de segurança de organizações nucleares." reponame:Repositório Institucional do IPEN, 2017. http://repositorio.ipen.br:8080/xmlui/handle/123456789/28035.

Abstract:
Safety culture in the nuclear field is defined as the set of characteristics and attitudes of organizations and individuals which ensures that, as an overriding priority, issues related to nuclear protection and safety receive the attention warranted by their significance. To date, there are no validated instruments for assessing safety culture in the nuclear field. As a result, the outcomes of strategies defined to strengthen it, and the performance of improvement actions, are difficult to evaluate. The main objective of this work was to develop and validate an instrument for assessing the safety culture of nuclear organizations, using the Instituto de Pesquisas Energéticas e Nucleares as the research and data collection unit. The indicators and latent variables of the instrument were defined with reference to safety culture assessment models from the healthcare and nuclear fields. The initially proposed data collection instrument was evaluated by nuclear experts and then pre-tested with individuals from the surveyed population. The model was validated through structural equation modeling with the partial least squares method (Partial Least Squares Structural Equation Modeling, PLS-SEM) in the SmartPLS software. The final version of the instrument comprised forty indicators distributed over nine latent variables. The measurement model showed convergent validity, discriminant validity and reliability, and the structural model was statistically significant, demonstrating that the instrument adequately fulfilled all validation steps.
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Cross, Richard J. (Richard John). "Inference and Updating of Probabilistic Structural Life Prediction Models." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19828.

Повний текст джерела
Анотація:
Aerospace design requirements mandate acceptable levels of structural failure risk. Probabilistic fatigue models enable estimation of the likelihood of fatigue failure. A key step in the development of these models is the accurate inference of the probability distributions for dominant parameters. Since data sets for these inferences are of limited size, the fatigue model parameter distributions are themselves uncertain. A hierarchical Bayesian approach is adopted to account for the uncertainties in both the parameters and their distribution. Variables specifying the distribution of the fatigue model parameters are cast as hyperparameters whose uncertainty is modeled with a hyperprior distribution. Bayes' rule is used to determine the posterior hyperparameter distribution, given available data, thus specifying the probabilistic model. The Bayesian formulation provides an additional advantage by allowing the posterior distribution to be updated as new data becomes available through inspections. By updating the probabilistic model, uncertainty in the hyperparameters can be reduced, and the appropriate level of conservatism can be achieved. In this work, techniques for Bayesian inference and updating of probabilistic fatigue models for metallic components are developed. Both safe-life and damage-tolerant methods are considered. Uncertainty in damage rates, crack growth behavior, damage, and initial flaws are quantified. Efficient computational techniques are developed to perform the inference and updating analyses. The developed capabilities are demonstrated through a series of case studies.
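A minimal illustration of the Bayesian updating idea described above: a grid-based posterior over a failure-rate parameter is computed with Bayes' rule and then re-updated as new inspection data arrive. The exponential life model, the prior and the numerical values are assumptions for illustration only; the thesis works with richer fatigue and crack-growth models and with hyperprior distributions.

    import numpy as np
    from scipy import stats

    # grid over the unknown failure-rate parameter (per flight hour; illustrative values)
    lam = np.linspace(1e-5, 5e-3, 2000)
    d_lam = lam[1] - lam[0]
    prior = stats.lognorm(s=1.0, scale=1e-3).pdf(lam)   # vague prior belief
    prior /= prior.sum() * d_lam

    def update(density, lives):
        """Bayes' rule on a grid: posterior is proportional to prior times the
        likelihood of the observed lives (an exponential life model here)."""
        loglik = np.zeros_like(lam)
        for t in lives:
            loglik += stats.expon(scale=1.0 / lam).logpdf(t)
        post = density * np.exp(loglik - loglik.max())
        return post / (post.sum() * d_lam)

    post = update(prior, lives=[1800.0, 2600.0, 3100.0])   # initial test data
    post = update(post, lives=[2900.0])                    # update as inspection data arrive
    print("posterior mean failure rate:", (lam * post).sum() * d_lam)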
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Koneru, Narendra. "Quantitative analysis of domain testing effectiveness." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0404101-011933/unrestricted/koneru0427.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Zelenke, Brian Christopher. "An empirical statistical model relating winds and ocean surface currents : implications for short-term current forecasts." Thesis, Connect to the title online, 2005. http://hdl.handle.net/1957/2166.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Brooks, Jeremy. "A Singular Perturbation Approach to the Fitzhugh-Nagumo PDE for Modeling Cardiac Action Potentials." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/honors/152.

Повний текст джерела
Анотація:
The study of cardiac action potentials has many medical applications. Dr. Dennis Noble first used mathematical models to study cardiac action potentials in the 1960s. We begin our study of cardiac action potentials with one form of the Fitzhugh-Nagumo partial differential equation. We use the non-classical method to produce a closed-form solution for the decoupled Fitzhugh-Nagumo equation. Using voltage recording data of action potentials in a cardiac myocyte and in Purkinje fibers, we estimate parameter values for the closed-form solution with standard linear and non-linear regression methods. Results are limited, thus leading us to perturb the solution to obtain a better fit. We turn to singular perturbation theory to justify our pole-based approach. Finally, we test our model on independent action potential data sets to evaluate it and to draw conclusions on how it can be applied.
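As an illustration of the parameter-estimation step, the sketch below fits a closed-form action-potential-like waveform to noisy voltage samples with nonlinear least squares (scipy's curve_fit). The waveform, parameter names and synthetic data are stand-ins, not the non-classical solution derived in the thesis.

    import numpy as np
    from scipy.optimize import curve_fit

    def action_potential(t, v_rest, v_amp, k_up, k_down, t0):
        """Hypothetical closed-form waveform: a fast sigmoidal upstroke
        multiplied by an exponential repolarisation."""
        upstroke = 1.0 / (1.0 + np.exp(-k_up * (t - t0)))
        repolarisation = np.exp(-k_down * np.clip(t - t0, 0.0, None))
        return v_rest + v_amp * upstroke * repolarisation

    # synthetic "recorded" voltages standing in for myocyte/Purkinje data
    t = np.linspace(0.0, 400.0, 400)                       # ms
    true = action_potential(t, -85.0, 120.0, 0.5, 0.01, 50.0)
    rng = np.random.default_rng(1)
    v_obs = true + rng.normal(scale=2.0, size=t.size)

    p0 = [-80.0, 100.0, 0.3, 0.02, 40.0]                   # starting guesses
    params, _ = curve_fit(action_potential, t, v_obs, p0=p0)
    print("fitted parameters:", np.round(params, 4))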
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Masiulaitytė, Inga. "Regresiniai ir degradaciniai modeliai patikimumo teorijoje ir išgyvenamumo analizėje." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100527_134940-11585.

Повний текст джерела
Анотація:
In this doctoral thesis, redundant systems and degradation models are considered. To ensure high reliability of important elements of a system, stand-by units can be used; these units are switched in and operate in place of the main unit when it fails. Stand-by units can function under different conditions: "hot", "cold" or "warm" reserving. The thesis analyzes systems with "warm" stand-by units. Hypotheses of smooth commuting of the stand-by unit are formulated, and goodness-of-fit tests for these hypotheses are constructed. Nonparametric and parametric point and interval estimation procedures are given. Fairly general degradation models are also considered, in which the failure intensity of a unit is described as a function of both the applied stresses and the degradation level, the latter being modeled by stochastic processes. Modeling and statistical estimation of system reliability from failure time and degradation data are considered.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Arendt, Christopher D. "Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems." Ft. Belvoir : Defense Technical Information Center, 2009. http://handle.dtic.mil/100.2/ADA499860.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Taoufik, Sanae. "Fiabilité et analyse de défaillance des tags RFID UHF passifs sous contraintes environnementales sévères." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR009/document.

Повний текст джерела
Анотація:
In recent years, RFID (radio frequency identification) technology has developed strongly in many industrial applications, including the aeronautics and automotive sectors, where there is a strong demand for auto-identification systems operating in severe environments. In this context, the objective of this thesis is to study the effects of thermal storage on the reliability of passive UHF RFID tags. To achieve this, we adopted a consistent methodology. The first step of this methodology was to choose the tags under test: two types of tags, Web and Tageos, from two different manufacturers, were aged under accelerated tests at different high temperatures. The second step was to define the parameters of the aging tests and to characterize the aged tags. Using a dedicated measurement bench, the power reflected by each tag was measured after each aging phase as a function of the distance between the tag antenna and the RFID reader antenna, in order to determine the power loss caused by high-temperature storage. The reflected power decreases significantly after each aging phase, with different degradation dynamics across the aged tags; these dynamics depend on the test temperature and on the type of tag. The final step involved statistical and physical failure analysis. Clear differences in failure modes, mechanisms and failure times between the Web and Tageos tags were observed, and the Tageos tags appear to be more reliable than the Web tags. Failure analysis of the samples by optical microscopy and SEM revealed cracks in the metallic conductors of the antenna in part of the aged tags. In the other part of the tags, no antenna failure was observed, but clear deformations of the polymer matrix of the ACP were found, which modified the impedance matching between the RFIC and the antenna. Simulations using the COMSOL multiphysics software were implemented in order to reproduce the experimentally observed failure mechanisms, whether at the antenna or at the RFIC level. This thesis work has demonstrated the importance of studying the effects of high-temperature storage on the reliability of passive RFID tags: failures appeared faster and the tests cost considerably less than other types of accelerated aging tests.
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Clavareau, Julien. "Modélisation des stratégies de remplacement de composant et de systèmes soumis à l'obsolescence technologique." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210482.

Повний текст джерела
Анотація:
This work falls within the field of dependability studies.

Dependability has progressively become an integral part of evaluating the performance of industrial systems. Equipment failures, the resulting production losses, and plant maintenance have a major economic impact on companies. It is therefore essential for a manager to be able to estimate, in a coherent and realistic way, the operating costs of the company, taking into account in particular the reliability characteristics of the equipment in use, as well as the costs induced, among other things, by system downtime and by restoring the performance of its components after failure.

The work carried out in this doctorate focuses on one particular aspect of dependability, namely equipment replacement policies based on the reliability of the systems the equipment constitutes. The research starts from the following observation: although the literature devoted to replacement policies is abundant, it generally rests on the implicit assumption that the new equipment under consideration has the same characteristics and performance as the components being replaced had originally.

Technological reality is often quite different from this assumption, whatever the industrial field considered. New equipment regularly becomes available on the market; it performs the same functions as the older components used by a company, but with better performance, for example in terms of failure rate, energy consumption, or "intelligence" (the ability to transmit information about its state of deterioration).

Moreover, it can become increasingly difficult to obtain components of the older generation to replace those that have been withdrawn from service. This situation is generally referred to as technological obsolescence.

The aim of this work is to extend and deepen, within the framework of dependability, the lines of thought opened by the various articles presented in the state-of-the-art section, in order to define and model replacement strategies for equipment subject to technological obsolescence. The goal is to propose a model, bridging the more economic and the more reliability-oriented approaches, that makes it possible to define and evaluate the effectiveness, in a broad sense, of the different strategies for replacing obsolete units. The effectiveness of a strategy can be measured against several, sometimes contradictory, criteria. Among these are, obviously, the mean total cost generated by the replacement strategy, the only criterion considered in the articles cited in Chapter 2, but also the way these costs are distributed over time throughout the strategy, the variability of these costs around their mean, and the satisfaction of certain conditions, such as having replaced all the units of one generation with units of another generation before a given date, or respecting certain constraints on replacement times.

In order to evaluate the different strategies, the first step is to define a realistic model of the performance of the units considered, and in particular of their failure probability distribution. Given the direct link between the failure probability of a piece of equipment and the maintenance policy applied to it (notably the frequency of preventive maintenance, its effect, the effect of repairs after failure, and the criteria for replacing the equipment), a complete model must include a mathematical description of the effects of the interventions performed on the equipment. We will see that the desire to describe the effects of interventions correctly led us to propose an extension of the effective-age models usually used in the literature.

Once the internal model of the units is defined, we develop the replacement model for obsolete equipment itself.

Building on the notion of the K strategy proposed in previous work, we show how to adapt this K strategy to a model in which intervention times are not negligible and the number of maintenance teams is limited. We also show how to take into account, within this K strategy, on the one hand the constraints of budget management, which generally require spreading costs over time, and on the other hand the desire to move from one generation of units to the next within a limited time, these two conditions possibly being contradictory.

Another problem faced when dealing with technological obsolescence is the obsolescence model to adopt. How the risk of obsolescence is managed depends strongly on how technologies are expected to evolve and, in particular, on the pace of that evolution. Depending on whether the probable time of appearance of a new generation is considered to be shorter or longer than the lifetime of the components, the solutions envisaged will differ. This is illustrated in two specific numerical applications.

Chapter 12 considers the problem when the time interval between successive generations is much shorter than the lifetime of the equipment, and Chapter 13 addresses the case where the delay between two generations is of the same order of magnitude as the lifetime of the equipment considered.

The text is structured as follows: after a first part situating the context of this work, the second part describes the internal model of the units as used in the various applications. The third part describes the replacement strategies and the different applications treated. The last part concludes with some comments on the results obtained and a discussion of research perspectives in the field.


Doctorate in Engineering Sciences
info:eu-repo/semantics/nonPublished

Стилі APA, Harvard, Vancouver, ISO та ін.
34

Zhang, Xiao. "Confidence Intervals for Population Size in a Capture-Recapture Problem." Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etd/2022.

Повний текст джерела
Анотація:
In a single capture-recapture problem, two new Wilson methods for interval estimation of population size are derived. Classical Chapman interval, Wilson and Wilson-cc intervals are examined and compared in terms of their expected interval width and exact coverage properties in two models. The new approach performs better than the Chapman in each model. Bayesian analysis also gives a different way to estimate population size.
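As a rough illustration of the estimation problem this thesis addresses, the sketch below computes the classical Chapman point estimate of population size and its usual normal-approximation interval from hypothetical capture counts. The Wilson-type intervals derived in the thesis are a different construction and are not reproduced here.

    import math

    def chapman_interval(n1, n2, m, z=1.96):
        """Chapman estimator of population size from a single capture-recapture
        experiment (n1 first-sample captures, n2 second-sample captures, m marked
        recaptures) with a normal-approximation confidence interval."""
        n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
        half = z * math.sqrt(var)
        # the population can never be smaller than the number of distinct animals seen
        return n_hat, max(n1 + n2 - m, n_hat - half), n_hat + half

    n_hat, lo, hi = chapman_interval(n1=120, n2=150, m=30)
    print(f"N_hat = {n_hat:.1f}, 95% CI ~ ({lo:.1f}, {hi:.1f})")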
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Gazendam, Albert Dirk. "The design of physical and logical topologies for wide-area WDM optical networks." Diss., 2003. http://hdl.handle.net/2263/23541.

Повний текст джерела
Анотація:
The objective of this dissertation is to investigate the factors that influence the design of wide-area wavelength division multiplexing (WDM) optical networks. Wide-area networks are presented as communication networks capable of transporting voice and data communication over large geographical areas. These networks typically span a whole country, region or even continent. The rapid development and maturation of WDM technology over the last decade have been well received commercially and warrant the development of skills in the field of optical network design. The fundamental purpose of all communication networks and technologies is to satisfy the demand of end-users through the provisioning of capacity over shared and limited physical infrastructure. Consideration of the business aspects related to communications traffic and the grooming thereof is crucial to developing an understanding of customer requirements in terms of the selection and quality of services and applications. Extensive communication networks require complex management techniques that aim to ensure high levels of reliability and revenue generation. An integrated methodology is presented for the design of wide-area WDM optical networks. The methodology harnesses physical, logical, and virtual topologies together with routing and channel assignment (RCA) and clustering processes to enhance the objectivity of the design process. A novel approach, based on statistical clustering using the Ward linkage as a similarity metric, is introduced for solving the problem of determining the number and positions of the backbone nodes of a wide-area network, otherwise defined as the top-level hub nodes of the multi-level network model. The influence of the geographic distribution of network traffic, and the intra/inter-cluster traffic ratios, are taken into consideration through utilisation of modified gravity models and novel network node weighting.
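The clustering step described above can be prototyped in a few lines: the sketch below applies Ward-linkage agglomerative clustering to hypothetical node coordinates and places a hub at the traffic-weighted centroid of each cluster. The coordinates, traffic weights and number of clusters are illustrative assumptions, not the dissertation's data or its gravity-model weighting.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # hypothetical network nodes: (x, y) site coordinates plus a traffic weight
    rng = np.random.default_rng(7)
    coords = rng.uniform(0.0, 1000.0, size=(60, 2))     # km on a plane, illustrative
    traffic = rng.uniform(1.0, 10.0, size=60)           # relative demand per node

    # Ward-linkage agglomerative clustering on node positions
    Z = linkage(coords, method="ward")
    labels = fcluster(Z, t=6, criterion="maxclust")     # ask for 6 backbone clusters

    # place each backbone (hub) node at the traffic-weighted centroid of its cluster
    hubs = np.array([
        np.average(coords[labels == c], axis=0, weights=traffic[labels == c])
        for c in np.unique(labels)
    ])
    print("hub locations:\n", np.round(hubs, 1))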
Dissertation (MEng)--University of Pretoria, 2005.
Electrical, Electronic and Computer Engineering
unrestricted
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Park, Il Hyeok. "Latent growth models and reliability estimation of longitudinal physical performances." Thesis, 2002. http://hdl.handle.net/2429/13212.

Повний текст джерела
Анотація:
There are four purposes to this study. The first is to introduce Latent Growth Models (LGM) to Human Kinetics researchers. The second is to examine the merits and practical problems of LGM in the analysis of longitudinal physical performance data. The third purpose is to examine the developmental patterns of children's physical performances. The fourth purpose is to compare the capacity of the two most widely used longitudinal factor models, LGM and a quasi-simplex model, to accurately estimate reliability for longitudinal data under various conditions. In study 1, the first, second and third purposes of the study were accomplished, and in study 2, the fourth purpose was accomplished. In study 1, two longitudinal data sets were obtained; however, only one set was deemed appropriate for subsequent analyses. The data included seven physical performance variables, measured at five time points, from 210 children aged eight to twelve years, and five predictor variables of physical performances. The univariate LGM analyses revealed that the children's individual development over a 5-year period was adequately explained by either a Linear (jump-and-reach and sit-and-reach), Quadratic (flexed-arm hang), Cubic (standing long jump) or Unspecified Curve model (agility shuttle run, endurance shuttle run and 30-yard dash). The children improved in their physical performances between ages 8 and 12 except for flexibility, in which children's performance declined over time. Children showed considerable variations in the developmental rate and patterns of physical performances. Among the predictor variables, the test practice (the number of previous testing sessions) and age in months showed positive effects on the children's performance at the initial time point. A negative test practice effect on the development in physical performance was also found. The effect of other predictor variables varied for different performance variables. The multivariate analyses showed that the factor structure of three hypothesized factors, "Run", "Power" and "Motor Ability", holds at all five time points. However, only the change in the "Run" factor was adequately explained by the Unspecified Curve model. There were significant test practice, age, measured season and measured year effects on the performance at the initial time of testing, and significant test practice and measured year effects on the curve factor. The cross-validation procedure generally supported these findings. It was concluded that an LGM has several merits over traditional methods in the analysis of change in that an LGM provides an individual level of analysis, and thus allows one to test various research questions regarding the predictors of change, measurement error, and multivariate change. Additionally, it requires less strict statistical assumptions than traditional methods. Because of the merits of the LGM analysis used here, this study provided some interesting findings regarding children's development of physical performances, findings that were not detectable in previous studies because of the use of traditional statistical analyses. The difficulty in comparing non-nested models, and the unknown relationship between the change in indicator variables and the change in the factor in the analysis of the multivariate "curve-of-factors" model, were discussed as practical problems in the application of LGM. In study 2, several longitudinal developmental data sets with known parameters under various conditions were generated by computer. The conditions were varied by the magnitude of correlations between initial status and change, the magnitude of reliability, and the magnitude of correlated errors between time points. The data were analyzed using two models, an LGM and a simplex model, and the estimated reliability coefficients were compared. The simplex model overestimated the reliability in all conditions, while the LGM provided relatively accurate reliability estimates in almost all conditions. Neither the magnitude of correlation between the initial status and change nor the magnitude of reliability affected the reliability estimation, while the correlated errors led to an overestimation of reliability for both models. On the other hand, the magnitude of reliability showed a negative effect on the goodness-of-fit of the simplex model. It was concluded that an LGM, rather than the often-used simplex model, should be used for reliability analyses of longitudinal data.
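To illustrate the simulation design described above, the sketch below generates longitudinal data from a linear latent growth model with known parameters and computes the wave-specific reliability (true-score variance over observed-score variance) that the LGM and simplex models attempt to recover. Sample size, growth parameters and error variance are arbitrary choices, not those of the study.

    import numpy as np

    rng = np.random.default_rng(42)
    n, waves = 1000, 5
    times = np.arange(waves)

    # linear latent growth model: intercept and slope vary across children
    intercept = rng.normal(50.0, 8.0, size=n)
    slope = rng.normal(2.0, 1.0, size=n)
    true_scores = intercept[:, None] + slope[:, None] * times      # (n, waves)

    error_sd = 4.0
    observed = true_scores + rng.normal(0.0, error_sd, size=true_scores.shape)

    # reliability at each wave = true-score variance / observed-score variance;
    # this is the quantity the LGM and simplex models try to estimate
    reliability = true_scores.var(axis=0, ddof=1) / observed.var(axis=0, ddof=1)
    print("wave reliabilities:", np.round(reliability, 3))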
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Chipoyera, Honest Walter. "Inspection and replacement models for reliability and maintenance: filling in gaps." Thesis, 2017. http://hdl.handle.net/10539/23607.

Повний текст джерела
Анотація:
A thesis submitted in fulfillment of the requirements for the Degree of Doctor of Philosophy, School of Statistics and Actuarial Science, Faculty of Science University of Witwatersrand, Johannesburg. February 2017.
The work done in this thesis on finite planning horizon inspection models has demonstrated that, with the advent of powerful computers, it is now possible to easily find an optimal inspection schedule when the lifetime distribution is known. For the case of system time to failure following a uniform distribution, a result for the maximum number of inspections in the finite planning models has been derived. If the time to failure follows an exponential distribution, it has been noted that periodically carrying out inspections may not result in maximization of expected profit. For the Weibull family of distributions (of which the exponential distribution is a special case), evenly spreading the inspections over a given finite planning horizon may not lead to any serious loss of profit. The case of inspection models where inspections are of non-negligible duration has also been explored, as have the conditions necessary for inspections that are evenly spread over the entire planning horizon to be near-optimal when the system time to failure follows either a uniform or an exponential distribution. Finite and infinite planning horizon models where inspections are imperfect have also been investigated. Interesting observations on the impact of Type I and Type II errors in inspection have been made; these observations are listed on page 174. A clear and easy-to-implement road map on how to obtain an optimal inspection permutation in problems first discussed by Zuckerman (1989) and later reviewed by Qiu (1991), for both the undiscounted and discounted cases, has been given. The only challenge envisaged when a system has a large number of components is that of computer memory requirements, which nowadays is fast being overcome. In particular, it has been clearly demonstrated that the impact of repair times and per-unit-time repair costs on the optimal inspection permutation cannot be ignored. The ideas and procedures for determining optimal inspection permutations developed in this thesis will no doubt lead to substantial cost savings, especially for systems where the cost of inspecting components is high.
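A toy version of the inspection-scheduling problem discussed above: for an assumed lifetime distribution and a simple cost structure (a fixed cost per inspection plus a cost per hour a failure remains undetected), the sketch below evaluates evenly spaced inspection schedules over a finite horizon and picks the cheapest number of inspections by brute force. The cost figures and the exponential lifetime are assumptions; the thesis also treats uniform and Weibull lifetimes, non-negligible inspection durations and imperfect inspections.

    import numpy as np
    from scipy import stats

    T = 1000.0                       # planning horizon (hours)
    c_inspect, c_delay = 50.0, 2.0   # hypothetical costs: per inspection / per hour undetected
    life = stats.expon(scale=800.0)  # assumed lifetime distribution

    def expected_cost(n):
        """Expected cost of n evenly spaced inspections over [0, T]: inspection
        cost plus expected undetected-failure time (a failure is found at the
        next inspection, or at the end of the horizon)."""
        checks = np.linspace(T / n, T, n) if n > 0 else np.array([])
        t = np.linspace(0.0, T, 20001)
        dt = t[1] - t[0]
        f = life.pdf(t)
        detect = np.full_like(t, T)              # with no later inspection, found at T
        for c in checks[::-1]:
            detect[t <= c] = c                   # earliest inspection at or after the failure
        delay = ((detect - t) * f).sum() * dt
        return n * c_inspect + c_delay * delay

    costs = {n: expected_cost(n) for n in range(0, 31)}
    best = min(costs, key=costs.get)
    print("optimal number of inspections:", best, "expected cost:", round(costs[best], 1))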
XL2018
Стилі APA, Harvard, Vancouver, ISO та ін.
38

"Determination of Dominant Failure Modes Using Combined Experimental and Statistical Methods and Selection of Best Method to Calculate Degradation Rates." Master's thesis, 2014. http://hdl.handle.net/2286/R.I.26838.

Повний текст джерела
Анотація:
This is a two-part thesis. Part 1 determines the most dominant failure modes of field-aged photovoltaic (PV) modules using experimental data and statistical analysis, namely FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 5900 crystalline-Si glass/polymer modules fielded for 6 to 16 years in three different photovoltaic (PV) power plants with different mounting systems under the hot-dry desert climate of Arizona are evaluated. FMECA, a statistical reliability tool that uses the Risk Priority Number (RPN), is performed for each PV power plant to determine the dominant failure modes in the modules by ranking and prioritizing the modes. This study on PV power plants considers all the failure and degradation modes from both safety and performance perspectives, and thus comes to the conclusion that solder bond fatigue/failure with/without gridline/metallization contact fatigue/failure is the most dominant failure mode for these module types in the hot-dry desert climate of Arizona. Part 2 of this thesis determines the best method to compute degradation rates of PV modules. Three different PV systems were evaluated to compute degradation rates using four methods: I-V measurement, metered kWh, performance ratio (PR) and performance index (PI). The I-V method, being an ideal method for degradation rate computation, was compared to the results from the other three methods. The median degradation rates computed from the kWh method were within ±0.15% of the I-V measured degradation rates (0.9-1.37 %/year for the three models). Degradation rates from the PI method were within ±0.05% of the I-V measured rates for two systems, but the calculated degradation rate was remarkably different (±1%) from the I-V method for the third system. The degradation rate from the PR method was within ±0.16% of the I-V measured rate for only one system but was remarkably different (±1%) from the I-V measured rate for the other two systems. Thus, it was concluded that the metered raw kWh method is the best practical method, after the I-V method and the PI method (if ground-mounted POA insolation and other weather data are available), for degradation computation, as this method was found to be fairly accurate, easy, inexpensive, fast and convenient.
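Two small sketches of the calculations referred to above: a FMECA-style ranking by Risk Priority Number (severity times occurrence times detection), and a degradation rate obtained by a linear fit to normalized annual metered kWh. The severity/occurrence/detection scores and the kWh series are hypothetical, not the values measured at the Arizona plants.

    import numpy as np

    # Part 1: FMECA-style ranking by Risk Priority Number (RPN = S x O x D);
    # the scores below are hypothetical placeholders.
    modes = {
        "solder bond / gridline fatigue": (7, 6, 5),
        "encapsulant discoloration":      (4, 7, 3),
        "backsheet cracking":             (6, 3, 4),
    }
    rpn = {m: s * o * d for m, (s, o, d) in modes.items()}
    for mode, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
        print(f"{mode:32s} RPN = {score}")

    # Part 2: degradation rate (%/year) from normalized annual metered kWh via a
    # linear fit, in the spirit of the metered raw kWh method discussed above.
    years = np.arange(2006, 2014)
    kwh_norm = np.array([1.000, 0.989, 0.981, 0.970, 0.962, 0.949, 0.941, 0.930])  # illustrative
    slope, intercept = np.polyfit(years - years[0], kwh_norm, 1)
    print(f"degradation rate ~ {100 * -slope / intercept:.2f} %/year")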
Dissertation/Thesis
Masters Thesis Engineering 2014
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Rokneddin, Keivan. "RELIABILITY AND RISK ASSESSMENT OF NETWORKED URBAN INFRASTRUCTURE SYSTEMS UNDER NATURAL HAZARDS." Thesis, 2013. http://hdl.handle.net/1911/72034.

Повний текст джерела
Анотація:
Modern societies increasingly depend on the reliable functioning of urban infrastructure systems in the aftermath of natural disasters such as hurricane and earthquake events. Apart from sizable capital for maintenance and expansion, the reliable performance of infrastructure systems under extreme hazards also requires strategic planning and effective resource assignment. Hence, efficient system reliability and risk assessment methods are needed to provide insights to system stakeholders to understand infrastructure performance under different hazard scenarios and accordingly make informed decisions in response to them. Moreover, efficient assignment of limited financial and human resources for maintenance and retrofit actions requires new methods to identify critical system components under extreme events. Infrastructure systems such as highway bridge networks are spatially distributed systems with many linked components. Therefore, network models describing them as mathematical graphs with nodes and links naturally apply to study their performance. Owing to their complex topology, general system reliability methods are ineffective for evaluating the reliability of large infrastructure systems. This research develops computationally efficient methods, such as a modified Markov Chain Monte Carlo simulation algorithm for network reliability, and proposes a network reliability framework (BRAN: Bridge Reliability Assessment in Networks) that is applicable to large and complex highway bridge systems. Since the responses of system components to hazard scenario events are often correlated, the BRAN framework enables accounting for correlated component failure probabilities stemming from different correlation sources. Failure correlations from non-hazard sources are particularly emphasized, as they potentially have a significant impact on network reliability estimates, and yet they have often been ignored or only partially considered in the infrastructure system reliability literature. The developed network reliability framework is also used for probabilistic risk assessment, where network reliability is assigned as the network performance metric. Risk analysis studies may require a prohibitively large number of simulations for large and complex infrastructure systems, as they involve evaluating the network reliability for multiple hazard scenarios. This thesis addresses this challenge by developing network surrogate models with statistical learning tools such as random forests. The surrogate models can replace network reliability simulations in a risk analysis framework, and significantly reduce computation times. Therefore, the proposed approach provides an alternative to the established methods to enhance the computational efficiency of risk assessments, by developing a surrogate model of the complex system at hand rather than reducing the number of analyzed hazard scenarios by either hazard-consistent scenario generation or importance sampling. Nevertheless, the application of surrogate models can be combined with scenario reduction methods to further improve the analysis efficiency. To address the problem of prioritizing system components for maintenance and retrofit actions, two advanced metrics are developed in this research to rank the criticality of system components.
Both developed metrics combine system component fragilities with the topological characteristics of the network, and provide rankings which are either conditioned on specific hazard scenarios or probabilistic, based on the preference of infrastructure system stakeholders. Nevertheless, they both offer enhanced efficiency and practical applicability compared to the existing methods. The developed frameworks for network reliability evaluation, risk assessment, and component prioritization are intended to address important gaps in the state-of-the-art management and planning for infrastructure systems under natural hazards. Their application can enhance public safety by informing the decision making process for expansion, maintenance, and retrofit actions for infrastructure systems.
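A bare-bones version of the simulation idea behind the network reliability framework: each bridge fails independently with a scenario failure probability, origin-destination connectivity is checked by breadth-first search over the surviving edges, and Monte Carlo repetition gives the reliability estimate. The toy graph and failure probabilities are assumptions, and the independence simplification deliberately ignores the correlated failures that the BRAN framework is designed to capture.

    import random
    from collections import deque

    # hypothetical bridge network: edges are bridges, values are scenario failure
    # probabilities (e.g. from fragility curves); failures drawn independently here
    bridges = {
        ("A", "B"): 0.10, ("B", "C"): 0.05, ("A", "D"): 0.20,
        ("D", "C"): 0.15, ("B", "D"): 0.08, ("C", "E"): 0.12,
        ("D", "E"): 0.25,
    }

    def connected(up_edges, source, target):
        """Breadth-first search over the surviving bridges."""
        adj = {}
        for u, v in up_edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        seen, queue = {source}, deque([source])
        while queue:
            node = queue.popleft()
            if node == target:
                return True
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    def network_reliability(source, target, n_sims=20000, seed=3):
        rng = random.Random(seed)
        ok = 0
        for _ in range(n_sims):
            up = [e for e, p_fail in bridges.items() if rng.random() > p_fail]
            ok += connected(up, source, target)
        return ok / n_sims

    print("P(A still connected to E):", network_reliability("A", "E"))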
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Algee-Hewitt, Bridget Frances Beatrice. "If and How Many 'Races'? The Application of Mixture Modeling to World-Wide Human Craniometric Variation." 2011. http://trace.tennessee.edu/utk_graddiss/1165.

Повний текст джерела
Анотація:
Studies in human cranial variation are extensive and widely discussed. While skeletal biologists continue to focus on questions of biological distance and population history, group-specific knowledge is being increasingly used for human identification in medico-legal contexts. The importance of this research has often been overshadowed by both philosophic and methodological concerns. Many analyses have been constrained in their scope by the limited availability of representative samples and readily criticized for adopting statistical techniques that require user guidance and a priori information. A multi-part project is presented here that implements model-based clustering as an alternative approach for population studies using craniometric traits. This project also introduces the use of force-directed graphing and mixture-based supervised classification methods as statistically robust and practically useful techniques. This project considers three well-documented craniometric sources, whose samples collectively permit large-scale analyses and tests of population structure at a variety of partitions and for different goals. The craniofacial measurements drawn from the world-wide data sets collected by Howells and Hanihara permit rigorous tests for group differences and cryptic population structure. The inclusion of modern American samples from the Forensic Anthropology Data Bank allows for investigations into the importance of biosocial race and biogeographic ancestry in forensic anthropology. Demographic information from the United States Census Bureau is used to contextualize these samples within the range of the racial diversity represented in the American population-at-large. This project's findings support the presence of population structure, the utility of finite mixture methods for questions of biological classification, and the validity of supervised discrimination methods as reliable tools. They also attest to the importance of context for producing the most useful information on identity and affinity. These results suggest that a meaningful relationship between statistically inferred clusters and predefined groups does exist and that population-informative differences in cranial morphology can be detected with measured degrees of statistical certainty, even when true memberships are unknown. They imply, in turn, that the estimation of biogeographic ancestry and the identification of biosocial race in forensic anthropology can provide useful information for modern American casework that can be evidenced by scientific methods.
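As a small-scale analogue of the model-based clustering used in this project, the sketch below fits Gaussian mixture models with an increasing number of components to synthetic multivariate data and lets BIC choose the number of latent groups (scikit-learn). The data and the candidate range are illustrative, not the Howells, Hanihara or Forensic Anthropology Data Bank craniometric samples.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # stand-in for craniometric measurements: three overlapping groups in 4 variables
    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.normal(loc=mu, scale=1.0, size=(150, 4))
        for mu in ([0, 0, 0, 0], [2.5, 1.0, -1.0, 0.5], [-1.5, 2.0, 1.5, -0.5])
    ])

    # model-based clustering: fit mixtures with k = 1..6 components and let BIC
    # decide how many latent groups the data support
    bics = []
    for k in range(1, 7):
        gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
        bics.append(gmm.bic(X))
    best_k = int(np.argmin(bics)) + 1
    print("BIC-preferred number of clusters:", best_k)

    labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
    print("cluster sizes:", np.bincount(labels))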
Стилі APA, Harvard, Vancouver, ISO та ін.