Academic literature on the topic 'STEEP PROBABILITY'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'STEEP PROBABILITY.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "STEEP PROBABILITY"

1

Strauch, Ronda, Erkan Istanbulluoglu, Sai Siddhartha Nudurupati, Christina Bandaragoda, Nicole M. Gasparini, and Gregory E. Tucker. "A hydroclimatological approach to predicting regional landslide probability using Landlab." Earth Surface Dynamics 6, no. 1 (February 7, 2018): 49–75. http://dx.doi.org/10.5194/esurf-6-49-2018.

Abstract:
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km². The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
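To make the approach concrete, here is a minimal Python sketch of the kind of Monte Carlo calculation the abstract describes: sample uncertain soil and hydrologic parameters, evaluate the infinite-slope factor of safety with a steady-state wetness term, and count failures. Every distribution and constant below is an illustrative assumption, not a calibrated input of the Landlab component.

```python
import numpy as np

rng = np.random.default_rng(42)

def annual_landslide_probability(slope_rad, n=10_000):
    """Monte Carlo estimate of P(factor of safety < 1) for one grid cell."""
    g, rho_w = 9.81, 1000.0                      # gravity [m/s2], water density [kg/m3]
    phi = np.deg2rad(rng.normal(34.0, 3.0, n))   # internal friction angle
    C = rng.triangular(2e3, 5e3, 1e4, n)         # root + soil cohesion [Pa]
    rho_s = rng.normal(1800.0, 100.0, n)         # soil bulk density [kg/m3]
    H = rng.uniform(0.5, 2.0, n)                 # soil depth [m]
    T = rng.lognormal(np.log(25.0), 0.5, n)      # transmissivity [m2/day]
    R = rng.lognormal(np.log(0.04), 0.6, n)      # annual max daily recharge [m/day]
    a = 500.0                                    # specific catchment area [m]

    # Steady-state relative wetness, capped at full saturation.
    w = np.minimum(1.0, R * a / (T * np.sin(slope_rad)))

    # Infinite-slope factor of safety.
    num = C + (rho_s - w * rho_w) * g * H * np.cos(slope_rad) ** 2 * np.tan(phi)
    den = rho_s * g * H * np.sin(slope_rad) * np.cos(slope_rad)
    return np.mean(num / den < 1.0)

print(annual_landslide_probability(np.deg2rad(35.0)))
```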
2

Kwasniok, Frank. "Semiparametric maximum likelihood probability density estimation." PLOS ONE 16, no. 11 (November 9, 2021): e0259111. http://dx.doi.org/10.1371/journal.pone.0259111.

Abstract:
A comprehensive methodology for semiparametric probability density estimation is introduced and explored. The probability density is modelled by sequences of mostly regular or steep exponential families generated by flexible sets of basis functions, possibly including boundary terms. Parameters are estimated by global maximum likelihood without any roughness penalty. A statistically orthogonal formulation of the inference problem and a numerically stable and fast convex optimization algorithm for its solution are presented. Automatic model selection over the type and number of basis functions is performed with the Bayesian information criterion. The methodology can naturally be applied to densities supported on bounded, infinite or semi-infinite domains without boundary bias. Relationships to the truncated moment problem and the moment-constrained maximum entropy principle are discussed and a new theorem on the existence of solutions is contributed. The new technique compares very favourably to kernel density estimation, the diffusion estimator, finite mixture models and local likelihood density estimation across a diverse range of simulation and observation data sets. The semiparametric estimator combines a very small mean integrated squared error with a high degree of smoothness which allows for a robust and reliable detection of the modality of the probability density in terms of the number of modes and bumps.
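For readers who want the core idea in runnable form, the sketch below fits an exponential-family log-density with a plain polynomial basis by maximum likelihood and selects the number of basis functions by BIC. It is a deliberately reduced illustration: the paper's estimator uses richer basis sets, boundary terms, a statistically orthogonal formulation and a dedicated convex solver, none of which are reproduced here, and the normalization is approximated on a finite grid.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)                     # toy sample
grid = np.linspace(data.min() - 1.0, data.max() + 1.0, 2001)
dx = grid[1] - grid[0]

def neg_loglik(theta, x):
    # log-density: sum_k theta_k x^k - A(theta); A(theta) via grid quadrature
    powers = np.arange(1, len(theta) + 1)
    log_norm = logsumexp(np.power.outer(grid, powers) @ theta) + np.log(dx)
    return -(np.power.outer(x, powers) @ theta - log_norm).sum()

best = None
for n_basis in (2, 4, 6):                            # even top power -> decaying tails
    res = minimize(neg_loglik, np.zeros(n_basis), args=(data,),
                   method="Nelder-Mead")
    bic = 2.0 * res.fun + n_basis * np.log(len(data))
    if best is None or bic < best[0]:
        best = (bic, n_basis, res.x)

print("selected number of basis functions:", best[1])
```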
3

Hassairi, A., and A. Masmoudi. "Extension of the variance function of a steep exponential family." Journal of Multivariate Analysis 92, no. 2 (February 2005): 239–56. http://dx.doi.org/10.1016/j.jmva.2003.09.010.

4

Anees, M. T., K. Abdullah, M. N. M. Nawawi, N. A. N. Norulaini, M. I. Syakir, and A. K. M. Omar. "Soil erosion analysis by RUSLE and sediment yield models using remote sensing and GIS in Kelantan state, Peninsular Malaysia." Soil Research 56, no. 4 (2018): 356. http://dx.doi.org/10.1071/sr17193.

Abstract:
The present study used pixel-based soil erosion analysis through Revised Universal Soil Loss Equation (RUSLE) and a sediment yield model. The main motive of this study is to find soil erosion probability zones and accordingly prioritise watersheds using remote sensing and Geographic Information System (GIS) techniques in Kelantan state, Peninsular Malaysia. The catchment was divided into 82 watersheds and soil loss of the catchment was calculated. Soil loss and sediment yield were divided into five categories ranging from very low to very high. Maximum area of the very high soil-loss category was observed in uncultivated land and the maximum area of very low soil-loss category was in forest. Soil erosion probability zones were also divided into five categories in which 36.1% of the area experienced zero soil erosion and 20.1% and 17.8% represented very high and high probability zones respectively. The maximum very high and high probability zones were 61.6% and 28.5% of the watershed area respectively. Prioritisation was according to the area covered by very high and high soil erosion probability zones, which showed that out of 82 watersheds, two had the very high and high priority categories respectively. The overall results indicate that high rainfall and agricultural activities enhanced the soil erosion rate on steep slopes in the catchment. Pixel-based soil erosion analysis through remote sensing and GIS was a very effective tool in finding accurate causes of soil erosion. Furthermore, it was suggested that agricultural activities and deforestation should be stopped on steep slopes because of their contribution in increasing soil erosion.
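A minimal pixel-based RUSLE calculation of the kind the abstract describes is sketched below; the factor rasters and the class breaks are random placeholders standing in for the rainfall, soil, topographic and land-cover inputs of the study.

```python
import numpy as np

# Per-pixel RUSLE sketch: soil loss A = R * K * LS * C * P.
rng = np.random.default_rng(1)
shape = (400, 400)                       # toy raster, one cell per pixel
R = rng.uniform(8000, 16000, shape)      # rainfall erosivity [MJ mm / (ha h yr)]
K = rng.uniform(0.01, 0.05, shape)       # soil erodibility [t ha h / (ha MJ mm)]
LS = rng.uniform(0.1, 20.0, shape)       # slope length-steepness factor [-]
C = rng.uniform(0.001, 0.5, shape)       # cover-management factor [-]
P = rng.uniform(0.5, 1.0, shape)         # support-practice factor [-]

A = R * K * LS * C * P                   # soil loss [t / (ha yr)]

# Five severity classes from very low to very high (breaks are illustrative).
breaks = [10, 50, 100, 150]
labels = ["very low", "low", "moderate", "high", "very high"]
classes = np.digitize(A, breaks)
for i, lab in enumerate(labels):
    share = 100.0 * np.mean(classes == i)
    print(f"{lab:>9}: {share:5.1f} % of area")
```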
5

Tolaney, Sara M., Elizabeth Garrett-Mayer, Julia White, Victoria S. Blinder, Jared C. Foster, Laleh Amiri-Kordestani, E. Shelley Hwang, et al. "Updated Standardized Definitions for Efficacy End Points (STEEP) in Adjuvant Breast Cancer Clinical Trials: STEEP Version 2.0." Journal of Clinical Oncology 39, no. 24 (August 20, 2021): 2720–31. http://dx.doi.org/10.1200/jco.20.03613.

Abstract:
PURPOSE The Standardized Definitions for Efficacy End Points (STEEP) criteria, established in 2007, provide standardized definitions of adjuvant breast cancer clinical trial end points. Given the evolution of breast cancer clinical trials and improvements in outcomes, a panel of experts reviewed the STEEP criteria to determine whether modifications are needed. METHODS We conducted systematic searches of ClinicalTrials.gov for adjuvant systemic and local-regional therapy trials for breast cancer to investigate if the primary end points reported met STEEP criteria. On the basis of common STEEP deviations, we performed a series of simulations to evaluate the effect of excluding non–breast cancer deaths and new nonbreast primary cancers from the invasive disease–free survival end point. RESULTS Among 11 phase III breast cancer trials with primary efficacy end points, three had primary end points that followed STEEP criteria, four used STEEP definitions but not the corresponding end point names, and four used end points that were not included in the original STEEP manuscript. Simulation modeling demonstrated that inclusion of second nonbreast primary cancer can increase the probability of incorrect inferences, can decrease power to detect clinically relevant efficacy effects, and may mask differences in recurrence rates, especially when recurrence rates are low. CONCLUSION We recommend an additional end point, invasive breast cancer–free survival, which includes all invasive disease–free survival events except second nonbreast primary cancers. This end point should be considered for trials in which the toxicities of agents are well-known and where the risk of second primary cancer is small. Additionally, we provide end point recommendations for local therapy trials, low-risk populations, noninferiority trials, and trials incorporating patient-reported outcomes.
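A toy simulation makes the dilution argument concrete: adding second non-breast primary cancers, which the treatment does not affect, shrinks the apparent between-arm difference in event rates. The hazards below are invented and the setup is far simpler than the panel's simulations.

```python
import numpy as np

rng = np.random.default_rng(11)
n, years = 5000, 5.0
haz_recur = {"control": 0.030, "treated": 0.020}   # recurrence hazard per year
haz_second = 0.010                                 # second-primary hazard per year

for arm, hr in haz_recur.items():
    t_recur = rng.exponential(1.0 / hr, n)         # time to breast recurrence
    t_second = rng.exponential(1.0 / haz_second, n)  # time to 2nd non-breast primary
    idfs_event = np.minimum(t_recur, t_second) < years   # endpoint with 2nd primaries
    ibcfs_event = t_recur < years                        # breast events only
    print(f"{arm:8}: IDFS events {idfs_event.mean():.3f},"
          f"  IBCFS events {ibcfs_event.mean():.3f}")
```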
6

Zhang, Jing, Lei Wan, Yan-Ling Dong, and Li-Xin Xie. "Outcomes of different lines of keratoconus management in a tertiary eye center in north China." International Journal of Ophthalmology 15, no. 4 (April 18, 2022): 568–75. http://dx.doi.org/10.18240/ijo.2022.04.07.

Abstract:
AIM: To evaluate the treatment selections and outcomes of keratoconus and discuss the grading treatment of keratoconus. METHODS: Medical records of 1162 patients (1863 eyes) with keratoconus treated with rigid gas permeable (RGP), corneal collagen crosslinking, and keratoplasty were reviewed. The patients were grouped according to the CLEK Study. The advanced group was further divided into a <60 D group and >60 D group. The best-corrected visual acuity (BCVA) and topographic data before and after treatment were recorded. RESULTS: In the 761 eyes with steep K<52 D, nonsurgical management accounted for 83.4%, while in the 735 eyes with steep K>60 D, surgical management accounted for 90.6%. A total of 618 eyes had improved BCVA at the final follow-up point (>18mo, P<0.001). When steep K was <52 D, the BCVA in the RGP group was better than those with lamellar keratoplasty (LKP; P=0.028). When steep K was >52 D, the BCVA and topographic astigmatism outcomes showed no differences among the treatment groups. When steep K was >60 D, the BCVA in eyes treated with LKP was worse than those with steep K<60 D (P=0.025). The incidence of steep K progression in the RGP group was higher in advanced group (20.0% vs 10.8%, P=0.019). The probability of future keratoplasty in RGP was higher in advanced group (14.8% vs 7.0%, P=0.027). The incidence of steep K progression in the corneal collagen crosslinking (CXL) group was higher in advanced group (32.3% vs 8.5%, P=0.007). Multivariate logistic regression revealed the following related factors for treatment options: steep K [odds ratio (OR)=1.208, 95%CI: 1.052-1.387], TA (OR=1.171, 95%CI: 1.079-1.270), and TCT (OR=0.978, 95%CI: 0.971-0.984). The level of steep K, TA, and TCT all relates to the treatment choices of both keratoplasty and non-keratoplasty, while steep K provided the highest diagnostic accuracy (AUC=0.947, P<0.001). CONCLUSION: Steep K is an important grading treatment indicator. When steep K is <52 D, RGP lenses should be recommended. It is the best time for LKP when the steep K ranges from 52 to 60 D.
7

Tao, Hongliang, Guangli Xu, Jingwen Meng, Ronghe Ma, and Jiaxing Dong. "Stability Assessment of High and Steep Cutting Rock Slopes with the SSPC Method." Advances in Civil Engineering 2021 (April 20, 2021): 1–10. http://dx.doi.org/10.1155/2021/8889526.

Abstract:
The stability of high rock slopes has become a key engineering geological problem in the construction of important projects in mountainous areas. The original slope stability probability classification (SSPC) system, presented by Hack, has made obvious progress and been widely used in rock slope stability analysis. However, the selection and determination of some evaluation indexes in the original SSPC method are usually subjective, such as intact rock strength and weathering degree. In this study, the SSPC method based on geological data obtained in the prospecting tunnels was presented and applied. According to the field survey and exploration of the prospecting tunnels, the weathering degree of the slope rock mass was evaluated. The empirical equation for the maximum stable height of the slope was applied to the slope stability evaluation in the presented SSPC method. Then, the slope stability probability of numerous cutting slopes in the sandstone unit was evaluated using the presented system. Results of the Geostudio software based on the limited equilibrium analysis of the investigated slopes were compared with the results obtained by the SSPC method. The results indicate that the SSPC method is a useful tool for the stability prediction of high and steep rock slopes.
8

Tiwari, Ram Chandra, and Netra Prakash Bhandary. "Stochastic Finite Element Analysis of Root-Reinforcement Effects in Long and Steep Slopes." Geotechnics 3, no. 3 (August 23, 2023): 829–53. http://dx.doi.org/10.3390/geotechnics3030045.

Abstract:
This article introduces a novel numerical scheme within the finite element method (FEM) to study soil heterogeneity, specifically focusing on the root–soil matrix in fracture treatments. Material properties, such as Young’s modulus of elasticity, cohesion, and the friction angle, are considered as randomly distributed variables. To address the inherent uncertainty associated with these distributions, a Monte Carlo simulation is employed. By incorporating the uncertainties related to material properties, particularly the root component that contributes to soil heterogeneity, this article provides a reliable estimation of the factor of safety, failure surface, and slope deformation, all of which demonstrate a progressive behavior. The probability distribution curve for the factor of safety (FOS) reveals that an increase in the root area ratio (RAR) results in a narrower range and greater certainty in the population mean, indicating reduced material variation. Moreover, as the slope angle increases, the sample mean falls within a wider range of the probability density curve, indicating an enhanced level of material heterogeneity. This heterogeneity amplifies the level of uncertainty when predicting the factor of safety, highlighting the crucial importance of accurate information regarding heterogeneity to enhancing prediction accuracy.
9

Tromans, Peter S., and Luc Vanderschuren. "A Spectral Response Surface Method for Calculating Crest Elevation Statistics." Journal of Offshore Mechanics and Arctic Engineering 126, no. 1 (February 1, 2004): 51–53. http://dx.doi.org/10.1115/1.1641390.

Abstract:
The statistics of wave crest elevation in a random, directionally spread sea are calculated using a novel approach. The nonlinearity of steep waves is modelled to second order using Sharma and Dean kinematics and a spectral response surface method is used to deduce the crest elevation corresponding to a given probability of exceedance. The spectral response surface method works in the probability domain, making it several times faster than conventional time domain simulation of random waves. However, the results from the two methods show good agreement. As expected, nonlinearity makes extreme crests higher than the corresponding linear ones.
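For intuition about the role of second-order nonlinearity, the sketch below compares linear (Rayleigh) crest exceedance probabilities with a narrow-band second-order (Tayfun-type) correction. This is a stand-in illustration of why nonlinear crests are higher, not the spectral response surface method itself, and the sea-state parameters are arbitrary.

```python
import numpy as np

Hs = 12.0                       # significant wave height [m]
Tp = 14.0                       # peak period [s]
g = 9.81
k1 = (2 * np.pi / Tp) ** 2 / g  # deep-water characteristic wavenumber

h = np.linspace(0.5 * Hs, 1.5 * Hs, 6)          # candidate crest heights
p_linear = np.exp(-8.0 * h**2 / Hs**2)          # Rayleigh crest exceedance

a = (np.sqrt(1.0 + 2.0 * k1 * h) - 1.0) / k1    # linear amplitude giving crest h
p_second = np.exp(-8.0 * a**2 / Hs**2)          # Tayfun-type exceedance

for hi, pl, p2 in zip(h, p_linear, p_second):
    print(f"crest {hi:5.1f} m:  linear {pl:9.2e}   second-order {p2:9.2e}")
```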
10

Dahle, E. Aa, D. Myrhaug, and S. J. Dahl. "Probability of capsizing in steep and high waves from the side in open sea and coastal." Ocean Engineering 15, no. 2 (January 1988): 139–51. http://dx.doi.org/10.1016/0029-8018(88)90025-x.


Dissertations / Theses on the topic "STEEP PROBABILITY"

1

Seyedali, Seyed Mohamad. "Getting a Grip on Scrap : Applying Probability and Statistics in Analyzing Scrap and Steel Composition Data from Electrical Steel Production." Thesis, KTH, Materialvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-165413.

Abstract:
This study aims to better control the final composition of steel by gaining better knowledge of elements in the scrap, including copper, nickel, molybdenum, manganese, tin and chromium. The objective was approached by applying probability and statistical concepts such as the normal distribution, multiple linear regression, least squares and non-negative least squares. The study was performed on raw-materials data from Ovako Smedjebacken and Ovako Hofors, two steel production plants in Sweden. The data included, but were not limited to, the amounts of the different scrap types used in the charge, the total weight of the charge and the final composition of the produced steel. First, the normal distribution was used to characterize the variations of the alloying elements between the estimated and measured alloy contents. The data were then used to build a model for the distribution factor of the studied elements. An estimation of the alloy contents in each scrap type, given the final steel composition, was also carried out using probability and statistics. Finally, the results from the different approaches were compared.
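The non-negative least-squares step can be sketched for a single element as follows; the charge recipes and the 'true' copper contents are synthetic stand-ins for the Ovako production data.

```python
import numpy as np
from scipy.optimize import nnls

# Estimate the copper content of each scrap type from charge recipes and
# final steel analyses, constrained to non-negative contents.
rng = np.random.default_rng(7)
n_heats, n_scrap = 200, 5
true_cu = np.array([0.40, 0.25, 0.10, 0.05, 0.02])       # wt% Cu per scrap type

M = rng.uniform(5.0, 40.0, (n_heats, n_scrap))           # tonnes charged per type
frac = M / M.sum(axis=1, keepdims=True)                  # mass fractions per heat
cu_measured = frac @ true_cu + rng.normal(0.0, 0.01, n_heats)  # final analyses

cu_est, residual = nnls(frac, cu_measured)               # solve min ||frac x - cu||, x >= 0
print("estimated Cu content per scrap type [wt%]:", np.round(cu_est, 3))
```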
2

Reynolds, James Bernard. "Advanced analysis and reliability-based design of steel scaffolding systems." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11626.

Abstract:
This thesis presents a comprehensive investigation of the advanced analysis, reliability-based design and optimisation of steel support scaffolding systems. Support scaffolding systems are used to provide temporary support to timber formwork systems, reinforcement, concrete, workmen and equipment during the construction of permanent structures such as buildings and bridges. Stick-type steel scaffolds with cuplok joints are the focus of the thesis. This thesis includes the collection and statistical analysis of shore load effects occurring as a result of construction dead and live loads. A comprehensive series of U-head joint subassembly tests allowed the top rotational stiffness to be rationally quantified for advanced finite element modelling. Advanced finite element models are calibrated using data compiled in a previous investigation involving eighteen full-scale tests. This calibration exercise also provides statistical data for modelling error. Monte Carlo simulations using advanced analysis are performed to determine the statistical distributions of system strength for a range of geometric configurations of support scaffold systems. The research showed that system strength was governed mainly by jack extension at the top and bottom of the scaffolding system. By incorporating the load statistics and system strength statistics, the thesis determined the reliability of various steel scaffolding systems designed by the fundamental Load-Resistance-Factor-Design (LRFD) equation. The study further proposed a more efficient LRFD equation for steel scaffolding, based on an acceptable target reliability index.
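A bare-bones version of the reliability calculation behind such an LRFD check is sketched below: sample system strength and shore load effect, estimate the failure probability, and convert it to a reliability index. The lognormal parameters are placeholders, not the fitted statistics of the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1_000_000
R = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=n)  # system strength [kN]
S = rng.lognormal(mean=np.log(55.0), sigma=0.25, size=n)   # shore load effect [kN]

pf = np.mean(R < S)                    # probability of failure
beta = -norm.ppf(pf)                   # reliability index
print(f"P_f = {pf:.2e},  beta = {beta:.2f}")
```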
3

Salgar, Chaparro Silvia Juliana. "Understanding of Microbiologically Influenced Corrosion in Carbon Steel Pipelines: Towards Developing a Methodology to Assess Probability of Failure." Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/81959.

Abstract:
This dissertation evaluated critical aspects of microbiologically influenced corrosion (MIC) and generated valuable information for the understanding and management of this corrosion threat. The main outcomes include a better understanding of the preservation requirements for field samples to obtain accurate results; an innovative approach for MIC assessment that consists of the identification of total and active microbial communities; knowledge of the effect that environmental and operational conditions can have on microbial communities and MIC.
4

Chatterjee, Aritra. "Structural System Reliability with Application to Light Steel-Framed Buildings." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/74879.

Abstract:
A general framework to design structural systems for a system-reliability goal is proposed. Component-based structural design proceeds on a member to member basis, ensuring acceptable failure probabilities for every single structural member without explicitly assessing the overall system safety, whereas structural failure consequences are related to the whole system performance (the cost of a building or a bridge destroyed by an earthquake) rather than a single beam or column failure. Engineering intuition tells us that the system is safer than each individual component due to the likelihood of load redistribution and alternate load paths, however such conservatism cannot be guaranteed without an explicit system-level safety check. As a result, component-based structural designs can lead to both over-conservative components and a less-than-anticipated system reliability. System performance depends on component properties as well as the load-sharing network, which can possess a wide range of behaviors varying from a dense redundant system with scope for load redistribution after failure initiates, to a weakest-link type network that fails as soon as the first member exceeds its capacity. The load-sharing network is characterized by its overall system reliability and the system-reliability sensitivity, which quantifies the change in system safety due to component reliability modifications. A general algorithm is proposed to calculate modified component reliabilities using the sensitivity vector for the load-sharing network. The modifications represent an improvement on the structural properties of more critical components (more capacity, better ductility), and provide savings on less important members which do not play a significant role. The general methodology is applied to light steel-framed buildings under seismic loads. The building is modeled with non-linear spring elements representing its subsystems. The stochastic response of this model under seismic ground motions provides load-sharing, system reliability and sensitivity information, which are used to propose target diaphragm and shear wall reliability to meet a building reliability goal. Finally, diaphragm target reliability is used to propose modified component designs using stochastic simulations on geometric and materially non-linear finite-element models including every individual component.
5

Bonneric, Matthieu. "Etude de l'endommagement en fatigue de câbles d'acier sous sollicitations complexes." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC045.

Abstract:
Steel cables are used as reinforcements in heavy truck tires, in particular to support the forces resulting from the tire pressure and the vehicle's weight. A cable is a set of pearlitic steel wires assembled in helical form on different layers. There are therefore many assembly possibilities to define the cable architecture. The cables are subjected to cyclic loadings during service, resulting in fatigue damage. In a context of reduced fuel consumption and lighter vehicles, understanding the mechanisms involved is thus a major challenge for tire manufacturers, in order to optimize the architecture of cables with respect to fatigue resistance. A cyclic bending test representative of mechanical in-service loading has been developed. The tested specimens are composite layers made of cables aligned within an elastomer matrix. Interrupted tests at different stages of damage followed by ex-situ observations (X-ray tomography, SEM) were performed. A finite element model of the composite layer has been developed in order to understand wire-rubber interactions. The comparison of the observations with the simulations made it possible to understand the kinetics of cable damage during cyclic bending loading. The study of each of the mechanisms likely to contribute to the cable damage has made it possible to explain the better fatigue resistance of the architectures penetrated by the rubber. A stochastic cable fatigue life model based on wire surface defect propagation has been developed.
6

Rota, Bernardo João. "Calibration Adjustment for Nonresponse in Sample Surveys." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-51966.

Abstract:
In this thesis, we discuss calibration estimation in the presence of nonresponse, with a focus on the linear calibration estimator and the propensity calibration estimator, along with the use of different levels of auxiliary information, that is, sample and population levels. This thesis is based on four papers, two of which discuss estimation in two steps. The two-step-type estimator suggested here is an improved compromise between the linear calibration and propensity calibration estimators mentioned above. Assuming that the functional form of the response model is known, it is estimated in the first step using a calibration approach. In the second step, the linear calibration estimator is constructed by replacing the design weights with products of these and the inverse of the response probabilities estimated in the first step. The first step of estimation uses the sample level of auxiliary information, and we demonstrate that this results in more efficiently estimated response probabilities than using the population level, as earlier suggested. The variance expression for the two-step estimator is derived and an estimator of it is suggested. Two other papers address the use of auxiliary variables in estimation. One of them introduces the use of principal components theory in calibration for nonresponse adjustment and suggests a selection of components using the theory of canonical correlation. Principal components are used as a means of addressing the problem of estimation in the presence of large sets of candidate auxiliary variables. In addition to the use of auxiliary variables, the last paper also discusses the use of explicit models representing the true response behavior. Usually simple models such as logistic, probit, linear or log-linear are used for this purpose. However, given the possible complexity of the structure of the true response probability, it is questionable whether these simple models are effective. We use an example of a telephone-based survey data collection process and demonstrate that the logistic model is generally not appropriate.
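A compressed sketch of the two-step idea follows: estimate response probabilities from sample-level auxiliaries, then reweight respondents by design weight divided by estimated response probability. An ordinary maximum-likelihood logistic fit stands in here for the calibration-based estimation developed in the thesis, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
x = rng.normal(0.0, 1.0, n)                    # auxiliary, known for the full sample
y = 5.0 + 2.0 * x + rng.normal(0.0, 1.0, n)    # study variable
d = np.full(n, 50.0)                           # design weights
resp = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))  # response indicator

# Step 1: estimate response probabilities (ML logistic as a stand-in).
phat = LogisticRegression().fit(x.reshape(-1, 1), resp).predict_proba(
    x.reshape(-1, 1))[:, 1]

# Step 2: adjusted weights for respondents, then a weighted mean.
w = d[resp] / phat[resp]
est = np.sum(w * y[resp]) / np.sum(w)
print(f"naive respondent mean {y[resp].mean():.3f}, adjusted {est:.3f}, true ~5.0")
```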
7

Zarov, Filipp. "Modeling fault probability in single railroad turnouts in Eastern Region, Sweden, with the use of logistic regression models : A step from preventive to predictive preventive maintenance in railway maintenance planning." Thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273639.

Abstract:
Turnouts are an important part of railway infrastructure for two reasons: infrastructure and maintenance. For the infrastructure they provide the flexibility to allow the formulation and branching of railway networks, and for maintenance they consume a large part of the maintenance budget and have a prominent place in maintenance planning policy and activities. This is because, as a "mechanical object", a turnout often experiences malfunctions. The problem becomes even more complicated, since a turnout is composed of many different parts and each of them fails for very different reasons (e.g. switch blades vs the crossing part). This is reflected in the different needs for maintenance activities, as railways are forced to pour in excessive amounts of resources to carry out emergency repairs, or to carry out unnecessary scheduled maintenance works in turnouts which do not need to be inspected or repaired. Therefore, it is difficult to plan and organize maintenance activities in turnouts in an efficient manner. This raises the question of whether malfunctions in turnouts can be predicted and used as information for the maintenance planning process in order to optimize it and develop it into a more reliable preventive maintenance planning. The aim of this analysis is to attempt to model the probability of various malfunctions in turnouts as a function of their main geometric and operational characteristics by using logistic regression models, and then input these results into the maintenance planning process in order to optimize it. First, it was important to objectify the railway track system and the turnout components, both in terms of parts and interrelationships. Furthermore, the process and basic elements of railway maintenance planning were defined, as well as arguments that motivate a turn towards preventive maintenance planning methodologies. This was done through a comprehensive literature study. The basis of this research was case studies, which described the relationship between geometrical and operational characteristics of turnouts and their wear, as well as risk-based modelling methods in railway maintenance planning. To create the analysis model, data from turnouts in the Eastern Region provided by the Swedish Transport Administration were used, both from the point of view of describing the underlying causes of turnout malfunctions and to formulate an object-oriented database suitable for use in logistic regression models. The goal was a logit model that calculated the malfunction probability of a turnout, which could be used directly in a maintenance planning framework that ranked maintenance activities in turnouts. The results obtained showed that although the model suffers from low correlation, different relationships between input variables and different functional errors were established. Furthermore, the potential of these analytical models and modeling structures to develop preventive, predictive railway maintenance plans was shown, but further analysis of the data structure is required, especially regarding data quality. Finally, further possible research areas are presented.
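As an illustration of the modelling step, the sketch below fits a logistic regression to synthetic turnout records and ranks turnouts for maintenance by predicted fault probability; the covariates and coefficients are invented, not the Trafikverket variables used in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 3000
X = np.column_stack([
    rng.uniform(0.0, 40.0, n),        # annual tonnage [MGT]
    rng.integers(0, 2, n),            # 1 if turnout lies in curved track
    rng.uniform(5.0, 50.0, n),        # turnout age [years]
])
# Simulated "true" fault process used only to generate labels.
logit = -4.0 + 0.05 * X[:, 0] + 0.8 * X[:, 1] + 0.03 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
p_fault = model.predict_proba(X)[:, 1]
worst = np.argsort(p_fault)[::-1][:5]  # maintenance priority list
print("highest predicted fault probabilities:", np.round(p_fault[worst], 3))
```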
8

Alves, Clever Gama. "Analise de confiabilidade em fadiga : estudo de caso : braço de controle de suspensão automotiva." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264943.

Advisor: Itamar Ferreira. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Abstract:
This dissertation aims at describing a procedure for approval of a suspension control arm subjected to high-cycle fatigue and, simultaneously, at proposing a validation alternative method based on reliability concepts and statistical theories. In that manner, the research provides not only an assessment of the procedure followed by the manufacturer, but also analytical and graphical comparisons of probabilistic distributions (normal, lognormal, Weibull) in order to characterize the set of complete and suspended data from bench accelerated tests. A sample space comprised of four complete final-configuration observations and eight complete primary-configuration ones, in addition to twelve suspended figures, was the basis for determining the model parameters. Such an analysis led to the choice of the bi-parametric Weibull for both focused configurations' time to failure. The ultimate estimation of the parameters was performed through the maximum likelihood method, which beat a specific alternative method for Weibull when compared with the referential category distribution. Thus, the effective gain in reliability resulting from the additional product development effort was calculated. The Kruskal-Wallis hypothesis testing guided the conclusion that the two configurations actually have different durability performances. The gain in reliability brought by the changes that led towards the final configuration is impressive: taking an amount of one million parts into consideration, the expected number of failures at 30,000 cycles dropped from 96,384 to 5 parts per million.
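The censored-data Weibull fit at the heart of this analysis can be sketched as follows: failures contribute the density to the likelihood, suspensions the survival function, and the fitted parameters give the reliability at 30,000 cycles. The cycle counts are invented, not the bench-test data of the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, event):
    """Two-parameter Weibull negative log-likelihood with right censoring."""
    k, lam = np.exp(params)                       # shape, scale (>0 via exp)
    z = (t / lam) ** k
    log_pdf = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # failures
    log_surv = -z                                               # suspensions
    return -np.sum(np.where(event, log_pdf, log_surv))

rng = np.random.default_rng(2)
t_fail = rng.weibull(2.5, 8) * 60_000             # observed failures [cycles]
t_susp = np.full(12, 100_000.0)                   # suspended at end of test
t = np.concatenate([t_fail, t_susp])
event = np.concatenate([np.ones(8, bool), np.zeros(12, bool)])

res = minimize(neg_loglik, x0=np.log([1.5, 50_000.0]), args=(t, event))
k, lam = np.exp(res.x)
rel = np.exp(-(30_000.0 / lam) ** k)              # reliability at 30,000 cycles
print(f"shape={k:.2f}, scale={lam:,.0f} cycles, R(30k)={rel:.4f}")
```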
9

Fox, Clayton D. L. "Modeling Simplified Reaction Mechanisms using Continuous Thermodynamics for Hydrocarbon Fuels." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37554.

Abstract:
Commercial fuels are mixtures with large numbers of components. Continuous thermodynamics is a technique for modelling fuel mixtures using a probability density function rather than dealing with each discrete component. The mean and standard deviation of the distribution are then used to model the chemical reactions of the mixture. This thesis develops the necessary theory to apply the technique of continuous thermodynamics to the oxidation reactions of hydrocarbon fuels. The theory is applied to three simplified models of hydrocarbon oxidation: a global one-step reaction, a two-step reaction with CO as the intermediate product, and the four-step reaction of Müller et al. (1992), which contains a high- and a low-temperature branch. These are all greatly simplified models of the complex reaction kinetics of hydrocarbons, and in this thesis they are applied specifically to n-paraffin hydrocarbons in the range from n-heptane to n-hexadecane. The model is tested numerically using a simple constant pressure homogeneous ignition problem using Cantera and compared to simplified and detailed mechanisms for n-heptane. The continuous thermodynamics models are able not only to predict ignition delay times and the development of temperature and species concentrations with time, but also changes in the mixture composition as reaction proceeds as represented by the mean and standard deviation of the distribution function. Continuous thermodynamics is therefore shown to be a useful tool for reactions of multicomponent mixtures, and an alternative to the "surrogate fuel" approach often used at present.
10

Guerchais, Raphaël. "Influence d'accidents géométriques et du mode de chargement sur le comportement en fatigue à grand nombre de cycles d'un acier inoxydable austénitique 316L." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0020/document.

Abstract:
The aim of this study is to analyze the influence of both the microstructure and defects on the high cycle fatigue (HCF) behaviour of a 316L austenitic stainless steel thanks to finite element (FE) simulations of polycrystalline aggregates. The scatter encountered in the HCF behavior of metallic materials is often explained by the anisotropic elasto-plastic behavior of individual grains leading to a highly heterogeneous distribution of plastic slip. Since fatigue crack initiation is a local phenomenon, intimately related to the plastic activity at the crystal scale, it seems relevant to rely on this kind of modeling to evaluate the mechanical quantities. A preliminary numerical study, based on experimental data drawn from the literature, was conducted on an electrolytic copper using simulations of 2D polycrystalline aggregates. The effects of the loading path and of small artificial defects on the mesoscopic mechanical responses have been analyzed separately. Moreover, the predictive capabilities of some fatigue criteria, relying on the mesoscopic mechanical responses, have been evaluated. It was shown that the macroscopic fatigue limits predicted by a probabilistic fatigue criterion are in accordance with the experimental trends observed in multiaxial fatigue or in the presence of small defects. An experimental campaign is undertaken on an austenitic steel 316L. Low cycle fatigue tests are conducted in order to characterize the elasto-plastic behavior of the material. Load-controlled HCF tests, using both smooth specimens and specimens containing an artificial hemispherical surface defect, are carried out to estimate the fatigue limits under various loading conditions (tension, torsion, combined tension and torsion, biaxial tension) and several defect radii. To complete the characterization of the material, the microstructure is studied thanks to EBSD analyses and the crystallographic texture is measured by X-ray diffraction. These experimental data are used to reproduce, with FE simulations, the HCF tests on 2D and 3D microstructures representative of the austenitic steel. The heterogeneity of the mesoscopic mechanical quantities relevant in fatigue is discussed in relation to the modeling. The results from the FE models are then used along with the probabilistic mesomechanics approach to quantify the defect size effect for several loading paths. The relevance, with respect to the experimental observations, of the predicted fatigue strength distributions is assessed.

Books on the topic "STEEP PROBABILITY"

1

Ronchetti, Elvezio. Step 1: Statistique et probabilités : une introduction. 2nd ed. Lausanne: Presses polytechniques et universitaires romandes, 1991.

2

Fowler, C. W. Constructing species frequency distributions: A step toward systemic management. [Seattle, Wash.]: U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Marine Fisheries Service, Alaska Fisheries Science Center, 1999.

3

Kerry, Sally M., ed. Presenting medical statistics: From proposal to publication: A step-by-step guide. Oxford: Oxford University Press, 2007.

4

Schlegelmilch, Bodo B., ed. Taking the fear out of data analysis: A step-by-step approach. London: Dryden Press, 1997.

5

Diamantopoulos, Adamantios. Taking the fear out of data analysis: A step-by-step approach. [London]: Business Press, 2000.

6

Diamantopoulos, Adamantios. Taking the fear out of data analysis: A step-by-step approach. London: Thomson Learning, 2000.

7

SPSS survival manual: A step by step guide to data analysis using SPSS for Windows. 3rd ed. Maidenhead, England: McGraw Hill/Open University Press, 2007.

8

SPSS survival manual: A step by step guide to data analysis using SPSS for Windows (version 12). 2nd ed. Maidenhead, Berkshire, UK: Open University Press, 2005.

9

Pallant, Julie. SPSS survival manual: A step by step guide to data analysis using SPSS for Windows (versions 10 and 11). Maidenhead: Open University Press, 2001.

10

Ma, Jin. Forward-backward stochastic differential equations and their applications. Berlin: Springer, 1999.


Book chapters on the topic "STEEP PROBABILITY"

1

Dacunha-Castelle, Didier, and Marie Duflo. "Step by Step Decisions." In Probability and Statistics, 207–48. New York, NY: Springer New York, 1986. http://dx.doi.org/10.1007/978-1-4612-4870-5_6.

2

Bizjak, Aleš, and Lars Birkedal. "Step-Indexed Logical Relations for Probability." In Lecture Notes in Computer Science, 279–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46678-0_18.

3

Golić, Jovan Dj, and Renato Menicocci. "Edit Probability Correlation Attack on the Alternating Step Generator." In Sequences and their Applications, 213–27. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0551-0_15.

4

Knight, F. B. "Poisson representation of strict regular step filtrations." In Séminaire de Probabilités XX 1984/85, 1–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/bfb0075706.

5

Marden, John I., and Michael D. Perlman. "On the Inadmissibility of the Modified Step-Down Test Based on Fisher's Method for Combining Independent p-Values." In Contributions to Probability and Statistics, 472–85. New York, NY: Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4612-3678-8_34.

6

Marwitz, Florian Andreas, Tanya Braun, and Ralf Möller. "A First Step Towards Even More Sparse Encodings of Probability Distributions." In Inductive Logic Programming, 183–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97454-1_13.

7

Walrand, Jean. "Route Planning: A." In Probability in Electrical Engineering and Computer Science, 243–57. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49995-2_13.

Abstract:
This chapter is concerned with making successive decisions in the presence of uncertainty. The decisions affect the cost at each step but also the "state" of the system. We start with a simple example: choosing a route with uncertain travel times. We then examine a more general model: controlling a Markov chain. Section 13.1 presents a model of route selection when the travel times are random. Section 13.2 shows one formulation where one plans the trip long in advance. Section 13.3 explains how the problem changes if one is able to adjust the route based on real-time information. That section introduces the main ideas of stochastic dynamic programming. Section 13.4 discusses a generalization of the route planning problem: a Markov decision problem. Section 13.5 solves the problem when the horizon is infinite.
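A minimal value-iteration sketch for a Markov decision problem of the kind the chapter builds up to (Sections 13.4 and 13.5) is shown below; the three-state, two-action instance is invented for illustration.

```python
import numpy as np

P = {  # P[a][s, s'] : transition matrix under action a
    0: np.array([[0.8, 0.2, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8]]),
    1: np.array([[0.5, 0.5, 0.0],
                 [0.0, 0.5, 0.5],
                 [0.0, 0.0, 1.0]]),
}
c = {0: np.array([2.0, 2.0, 2.0]),     # slow but cheap action
     1: np.array([5.0, 4.0, 0.0])}     # fast but costly action
gamma = 0.95                           # discount factor

V = np.zeros(3)
for _ in range(1000):                  # iterate the Bellman operator to a fixed point
    Q = np.stack([c[a] + gamma * P[a] @ V for a in P])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("optimal expected costs:", np.round(V, 2))
print("optimal policy:", Q.argmin(axis=0))
```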
8

Checchi, Daniele, and Tindaro Cicero. "Is Entering Italian Academia Getting Harder?" In Teaching, Research and Academic Careers, 107–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07438-7_5.

Abstract:
While a PhD degree is often considered the first necessary step to an academic career, since 2010 only a small fraction (less than 10%) of doctoral graduates obtained a position in academia within six years of the award of their degree. While we do not have information on their labour market outcomes, we can examine the determinants of this transition in order to study whether entry to an academic job is becoming more difficult. We merge three national administrative data archives covering completed doctoral degrees, postdoc collaborations and new hirings to academia (mostly at assistant professor level). We find a decline in appointment probability after 2010, due to the hiring freeze imposed by fiscal austerity. We also find that a PhD degree and postdoc experience have a positive effect on the probability of obtaining a position in academia, while being a woman or being a foreign-born candidate has a negative effect. We find no evidence of career disadvantages for candidates from Southern universities.
9

Gu, Wenguo, Qiaogen Zhang, and Yuchang Qiu. "Leader Step Time and Low Probability Impulse Breakdown Voltage Measured in SF6 Gas Mixtures." In Gaseous Dielectrics VIII, 181–87. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-4899-7_25.

10

Yazdi, Mohammad, and Esmaeil Zarei. "Step Forward on How to Treat Linguistic Terms in Judgment in Failure Probability Estimation." In Linguistic Methods Under Fuzzy Information in System Safety and Reliability Analysis, 193–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93352-4_10.


Conference papers on the topic "STEEP PROBABILITY"

1

Zhang, Zhao-jiang. "Prediction of Subsidence in Steep Seam Mining Based on Probability Integral." In 2010 2nd International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2010. http://dx.doi.org/10.1109/ihmsc.2010.45.

2

Liu, Ming, Xiaoyan Liu, and Lin Sun. "Non-probability interval model based estimation of flying gangue motion in steep seam mining." In 6th International Workshop on Advanced Algorithms and Control Engineering (IWAACE 2022), edited by Daowen Qiu, Xuexia Ye, and Ning Sun. SPIE, 2022. http://dx.doi.org/10.1117/12.2652518.

3

Verhulst, A. S., D. Verreck, W. G. Vandenberghe, Q. Smets, M. Mohammed, J. Bizindavyi, M. M. Heyns, B. Soree, N. Collaert, and A. Mocuta. "Inherent transmission probability limit between valence-band and conduction-band states and calibration of tunnel-FET parasitics." In 2017 Fifth Berkeley Symposium on Energy Efficient Electronic Systems & Steep Transistors Workshop (E3S). IEEE, 2017. http://dx.doi.org/10.1109/e3s.2017.8246193.

4

Tromans, Peter S., and Luc Vanderschuren. "A Spectral Response Surface Method for Calculating Crest Elevation Statistics." In ASME 2002 21st International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2002. http://dx.doi.org/10.1115/omae2002-28535.

Abstract:
The statistics of wave crest elevation in a random, directionally spread sea are calculated using a novel approach. The non-linearity of steep waves is modelled to second order using Sharma and Dean kinematics and a spectral response surface method is used to deduce the crest elevation corresponding to a given probability of exceedance. The spectral response surface method works in the probability domain, making it several times faster than conventional time domain simulation of random waves. However, the results from the two methods show good agreement. As expected, non-linearity makes extreme crests higher than the corresponding linear ones.
5

Toffoli, A., S. Chai, E. M. Bitner-Gregersen, and F. Pistani. "Probability of Occurrence of Extreme Waves in Three Dimensional Mechanically Generated Random Wave Fields: A Comparison With Numerical Simulations." In ASME 2011 30th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2011. http://dx.doi.org/10.1115/omae2011-49198.

Abstract:
Experimental and numerical investigations reveal that nonlinear modulational instability can significantly affect the probability of occurrence of extreme waves, especially if waves are sufficiently steep and narrow banded both in the frequency and directional domain. However, it is not yet completely clear whether numerical simulations can provide an accurate quantitative estimate of experimental results. Here the potential Euler equations are used to assess the ability of numerical models to describe the evolution of statistical properties of mechanically generated directional, random wave fields and in particular the evolution of the kurtosis. Results show that simulations provide a good quantitative estimate of experimental observations within a broad range of wave directional width.
APA, Harvard, Vancouver, ISO, and other styles
6

Fouques, Sébastien, and Csaba Pákozdi. "A Numerical Investigation of Steep Irregular Wave Properties With a Mixed-Eulerian Lagrangian HOS Method." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-18216.

Full text
Abstract:
The design of structures at sea requires knowledge of how large and steep waves can be. Although extreme waves are very rare, their consequences in terms of structural loads, such as wave impact or ringing, are critical. However, modelling the physical properties of steep waves along with their probability of occurrence in given sea states has remained a challenge. On the one hand, standard linear and weakly nonlinear wave theories are computationally efficient, but since they assume that the steepness parameter is small, they are unable to capture extreme waves. On the other hand, recent simulation methods based on CFD or fully nonlinear potential solvers are able to capture the physics of steep waves up to the onset of breaking, but their large computational cost makes it difficult to investigate rare events. Between these two extremes, the High-Order Spectral (HOS) method, which solves surface equations, is both efficient and able to capture highly nonlinear effects. It may then represent a good compromise for long simulations of steep waves. Unfortunately, it is based on a perturbation expansion where the small parameter is the wave steepness, and consequently, simulations tend to become unstable when steep wave events occur. In this work, we investigate the properties of irregular waves simulated with a modified HOS method in which the sea surface is described with a Lagrangian representation, i.e., by computing the position and the velocity potential of individual surface particles. By doing so, nonlinear properties of the surface elevation are captured simply by the modulation of the horizontal and vertical particle motion. The same steep wave is then described more linearly under a Lagrangian representation, which reduces the instabilities of the HOS method. The paper focuses on bi-chromatic waves and irregular waves simulated from a JONSWAP spectrum. We compare simulations performed with the standard HOS and the modified Lagrangian methods for various HOS orders.
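The starting point for HOS-type simulations of the kind described here is a linear random realization of a JONSWAP sea; the sketch below shows that step only (not the authors' Lagrangian HOS solver), with illustrative Hs, Tp and discretization choices:

import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    # JONSWAP spectral density, rescaled so that 4*sqrt(m0) matches the target Hs
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    peak = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * peak
    m0 = np.trapz(s, f)
    return s * (hs / (4.0 * np.sqrt(m0))) ** 2

def linear_surface(t, f, s, rng):
    # Random-phase linear realization of the spectrum at times t
    df = f[1] - f[0]
    amps = np.sqrt(2.0 * s * df)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    return (amps * np.cos(2.0 * np.pi * f * t[:, None] + phases)).sum(axis=1)

rng = np.random.default_rng(1)
f = np.linspace(0.04, 0.5, 400)    # Hz
t = np.arange(0.0, 3600.0, 0.5)    # one hour at 2 Hz
eta = linear_surface(t, f, jonswap(f, hs=10.0, tp=14.0), rng)
print(4.0 * eta.std())             # should be close to Hs = 10 m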
APA, Harvard, Vancouver, ISO, and other styles
7

Guthrie, Richard, and Emma Reid. "Estimating Landslide Induced Probability of Failure to Pipelines Using a Structured Reductionist Approach." In 2018 12th International Pipeline Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/ipc2018-78157.

Full text
Abstract:
Much of North America, and indeed much of the global landscape, consists of locally or regionally steep slopes, river valleys, and weak or unstable geology. Landslides and ground movements continue to impact pipelines that traverse these regions. Pipeline integrity management programs (IMPs) increasingly expect quantitative estimates of ground movement or pipe failure as part of pipeline risk management systems. Quantitative analysis usually relies on one or more of statistics, physical models, and expert judgment. Statistics incorporate ground and pipe behavior (for hazard and vulnerability, respectively) over a broad area to infer local probabilities. They carry the weight of big data, but the local application is almost certainly incorrect (variability even at the regional scale exceeds two orders of magnitude). Detailed geotechnical (hazard) and soil-pipe interaction and stress (vulnerability) models provide rigorous results, but require substantial effort and/or expert judgment to parameterize the inputs and boundary conditions. We present herein a structured tool to calculate probability of failure (PoF) using expert judgment supported by known, instrumented, or observable conditions and statistics (where available). We provide a series of tables used as a basis for nodal calculations along a branch path of a decision tree, and discuss the challenges and results from actual application to over 100 sites in the Interior Plains. The method is intended to be a practical, informative approach based on, and limited by, data inputs. It is a flexible, fit-for-purpose assessment that takes advantage of the best available data; however, the method relies on the user to articulate a level of confidence in, or the basis of, the results.
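The nodal calculation along a branch of such a decision tree reduces to a product of conditional probabilities; a schematic sketch (the branch structure and all numerical values below are hypothetical, not taken from the paper's tables):

def branch_pof(p_hazard, p_impact, p_failure):
    # Product of conditional probabilities along one branch of the event tree:
    # P(failure) = P(landslide) * P(pipe impacted | landslide) * P(failure | impact)
    return p_hazard * p_impact * p_failure

# Hypothetical annual values for a single site
print(branch_pof(p_hazard=0.02, p_impact=0.5, p_failure=0.1))  # 1e-3 per year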
APA, Harvard, Vancouver, ISO, and other styles
8

Johannessen, Thomas B., and Øystein Lande. "Long Term Analysis of Steep and Breaking Wave Properties by Event Matching." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78283.

Full text
Abstract:
Offshore structures are typically required to withstand extreme and abnormal load effects with annual probabilities of occurrence of 10⁻² and 10⁻⁴, respectively. For linear or weakly nonlinear problems, the load effects with the prescribed annual probabilities of occurrence are typically estimated as a relatively rare occurrence in the short-term distribution of 100-year and 10 000-year seastates. For strongly nonlinear load effects, it is not a given that an extreme seastate can be used reliably to estimate the characteristic load effect. The governing load may occur as an extremely rare event in a much lower seastate. In attempting to model the load effect in an extreme seastate, the relevant short-term probability level is not known, nor is it known whether the physics of the wave loading is captured correctly in an extreme seastate. Examples of such strongly nonlinear load effects are slamming loads on large-volume offshore structures or wave-in-deck loads on jacket structures subject to seabed subsidence. The present paper is concerned with the long-term distribution of strongly nonlinear load effects, and a methodology is proposed which incorporates CFD analysis in a long-term Monte Carlo analysis of crest elevations and wave kinematics. Based on a long-term time-domain simulation of a linear surface elevation, a selection of events is run in CFD in order to obtain a database of linear and corresponding fully nonlinear wave fields, with the possibility of wave breaking included. In the subsequent long-term analysis, a large linear event is then replaced by the closest matching event in the database. A technique is developed to Froude-scale the database results and shift the origin in time and in the horizontal plane so that a database of typically only 100 events gives a close match to all the events in the simulation. The method is applied to the simple case of drag loading on a cylinder which is truncated above the still water level such that only the largest waves impact the structure. It is observed that whereas the Event Matching method agrees well with a second-order model for return periods lower than 100 years, the loading on the cylinder is significantly larger for longer return periods. The deviation is caused by the increasing dominance of wave breaking in the largest crests and illustrates the importance of incorporating wave breaking in the analysis of wave-in-deck loading problems.
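The event-matching step can be sketched as a nearest-neighbour lookup followed by Froude scaling, in which elevations scale with the length scale λ and times with √λ; everything below (the database layout, the Gaussian event shape, the 10% nonlinear amplification) is a hypothetical stand-in for the paper's CFD database:

import numpy as np

def froude_scale(t, eta, lam):
    # Froude scaling: elevations scale with lam, times with sqrt(lam)
    return t * np.sqrt(lam), eta * lam

def match_event(target_crest, database):
    # database entries: (time, linear elevation, nonlinear elevation)
    t, lin, nonlin = min(database, key=lambda ev: abs(ev[1].max() - target_crest))
    lam = target_crest / lin.max()        # scale so the linear crest matches
    return froude_scale(t, nonlin, lam)

rng = np.random.default_rng(2)
database = []
for _ in range(5):
    t = np.linspace(0.0, 20.0, 201)
    lin = rng.uniform(5.0, 15.0) * np.exp(-((t - 10.0) / 3.0) ** 2)
    database.append((t, lin, 1.1 * lin))  # hypothetical 10% nonlinear amplification
t_scaled, eta_scaled = match_event(12.0, database)
print(eta_scaled.max())                   # 13.2: scaled nonlinear crest for a 12 m linear target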
APA, Harvard, Vancouver, ISO, and other styles
9

Tromans, Peter, Luc Vanderschuren, and Kevin Ewans. "The Second Order Statistics of High Waves in Wind Sea and Swell." In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2007. http://dx.doi.org/10.1115/omae2007-29676.

Full text
Abstract:
The statistics of extreme wave crest elevation and wave height have been calculated for realistic, directionally spread sea and swell using a probabilistic method tested and described previously. The non-linearity of steep waves is modelled to second order using Sharma and Dean kinematics and a response surface (reliability type) method is used to deduce the crest elevation or wave height corresponding to a given probability of exceedance. The effects of various combinations of sea and swell are evaluated. As expected, in all cases, non-linearity makes extreme crests higher than the corresponding linear ones. The non-linear effects on wave height are relatively small.
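For independent sea and swell components, the variances of the linear surface elevations add, so the combined significant wave height follows by root-sum-square; a one-line check with hypothetical component values:

import numpy as np

hs_sea, hs_swell = 8.0, 3.0              # hypothetical component heights, m
print(np.sqrt(hs_sea**2 + hs_swell**2))  # combined Hs ~ 8.54 m, since the m0 values add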
APA, Harvard, Vancouver, ISO, and other styles
10

Lian, Gunnar, Sverre Haver, and Chris Swan. "Estimation of Design Slamming Loads From Laboratory Test of a Circular Cylinder." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-80945.

Full text
Abstract:
Slamming from steep breaking waves can produce high impact loads, both globally and locally. In this study we quantify the difference between slamming loads in long-crested and short-crested seas by performing model tests in 11 different sea states. The q-probability slamming load is estimated from long-term analysis by utilizing a modified version of the all-sea-state approach in which we integrate over a limited domain of sea states. The selected domain includes the sea states that contribute most to the characteristic design value. This method is convenient if the response needs to be estimated by model testing in a limited number of sea states.
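The limited-domain all-sea-state integration can be sketched as a weighted sum of short-term exceedance probabilities over the retained sea states; the scatter-diagram weights and the Rayleigh-type short-term tail below are illustrative assumptions, not the paper's model-test data:

import numpy as np

def long_term_exceedance(x, sea_states, short_term):
    # Weighted sum of short-term exceedance probabilities over a limited
    # domain of sea states (weights taken from a scatter diagram)
    return sum(w * short_term(x, hs, tp) for w, hs, tp in sea_states)

# Hypothetical Rayleigh-type short-term tail and scatter-diagram weights
short_term = lambda x, hs, tp: np.exp(-8.0 * (x / hs) ** 2)
sea_states = [(0.6, 10.0, 12.0), (0.3, 12.0, 13.0), (0.1, 14.0, 14.0)]
print(long_term_exceedance(15.0, sea_states, short_term))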
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "STEEP PROBABILITY"

1

Mauney, Moghissi, and Sridhar. L51907 Internal Corrosion Risk Assessment and Management of Steel Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), October 2001. http://dx.doi.org/10.55274/r0011241.

Full text
Abstract:
The objective of this work is to provide a risk assessment and management software tool that will: (i) Provide probability of pipeline segment leak versus time for each pipeline segment of concern based on past laboratory research, (ii) Provide a means of conducting a formal interview for Probability of Leak where data is not available, and (iii) Provide an analysis tool for timing inspection/repair/replacement of each pipeline segment with a maximized Net Present Value (NPV) for the group.
APA, Harvard, Vancouver, ISO, and other styles
2

Campbell, Leslie E., Luke R. Snyder, Julie M. Whitehead, Robert J. Connor, and Jason B. Lloyd. Probability of Detection Study for Visual Inspection of Steel Bridges: Volume 1—Executive Summary. Purdue University, 2019. http://dx.doi.org/10.5703/1288284317103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Campbell, Leslie E., Luke R. Snyder, Julie M. Whitehead, and Robert J. Connor. Probability of Detection Study for Visual Inspection of Steel Bridges: Volume 2—Full Project Report. Purdue University, 2019. http://dx.doi.org/10.5703/1288284317104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

de Luis, Mercedes, Emilio Rodríguez, and Diego Torres. Machine learning applied to active fixed-income portfolio management: a Lasso logit approach. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33560.

Full text
Abstract:
The use of quantitative methods constitutes a standard component of the institutional investors' portfolio management toolkit. In the last decade, several empirical studies have employed probabilistic or classification models to predict stock market excess returns, model bond ratings and default probabilities, and forecast yield curves. To the authors' knowledge, little research exists on their application to active fixed-income management. This paper contributes to filling this gap by comparing a machine learning algorithm, the Lasso logit regression, with a passive (buy-and-hold) investment strategy in the construction of a duration management model for high-grade bond portfolios, specifically focusing on US treasury bonds. Additionally, a two-step procedure is proposed, together with a simple ensemble averaging aimed at minimising the potential overfitting of traditional machine learning algorithms. A method to select thresholds that translate probabilities into signals based on conditional probability distributions is also introduced.
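A Lasso logit of the kind described is available off the shelf in scikit-learn as an L1-penalized logistic regression; a minimal sketch on synthetic data (the features, coefficients and the 0.6 signal threshold are all assumptions for illustration, not the paper's specification):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # hypothetical macro/market features
beta = np.zeros(20)
beta[:3] = [1.0, -0.8, 0.5]                    # sparse "true" signal
y = (X @ beta + rng.normal(size=500) > 0).astype(int)  # 1 = favourable duration signal

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # Lasso logit
model.fit(X, y)
proba = model.predict_proba(X)[:, 1]
signal = (proba > 0.6).astype(int)             # threshold is an arbitrary placeholder
print(int((model.coef_ != 0).sum()), "features retained by the L1 penalty")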
APA, Harvard, Vancouver, ISO, and other styles
5

Clausen, Jay, Michael Musty, Anna Wagner, Susan Frankenstein, and Jason Dorvee. Modeling of a multi-month thermal IR study. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41060.

Full text
Abstract:
Inconsistent and unacceptable probability of detection (PD) and false alarm rates (FAR) due to varying environmental conditions hamper buried object detection. A 4-month study evaluated the environmental parameters impacting standoff thermal infrared (IR) detection of buried objects. Field observations were integrated into a model depicting the temporal and spatial thermal changes through a 1-week period utilizing a 15-minute time-step interval. The model illustrates the surface thermal observations obtained with a thermal IR camera contemporaneously with a 3-D presentation of subsurface soil temperatures obtained with 156 buried thermocouples. Precipitation events and subsequent soil moisture responses synchronized to the temperature data are also included in the model simulation. The simulation shows the temperature response of buried objects due to changes in incoming solar radiation, air/surface soil temperature changes, latent heat exchange between the objects and surrounding soil, and impacts due to precipitation and changes in soil moisture. Differences are noted between the thermal response of plastic and metal objects, as well as depth of burial below the ground surface. Nearly identical environmental conditions on different days did not always elicit the same spatial thermal response.
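A generic illustration of the kind of subsurface temperature stepping described, 15-minute time steps driving 1-D soil heat conduction, can be written in a few lines; the diffusivity, grid, and sinusoidal surface forcing are assumed values, not the study's calibrated model:

import numpy as np

# 1-D soil heat conduction, explicit finite differences, 15-minute steps
alpha = 5e-7              # assumed soil thermal diffusivity, m^2/s
dz, dt = 0.05, 900.0      # 5 cm grid, 15-min step; alpha*dt/dz^2 = 0.18 < 0.5 (stable)
T = np.full(21, 15.0)     # 0-1 m profile, initially 15 deg C
for step in range(96):    # one day
    hour = step * dt / 3600.0
    T[0] = 15.0 + 10.0 * np.sin(2.0 * np.pi * hour / 24.0)  # diurnal surface forcing
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
print(T[::5])             # temperature wave is damped and lagged with depth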
APA, Harvard, Vancouver, ISO, and other styles
6

Bedoya-Maya, Felipe, Lynn Scholl, Orlando Sabogal-Cardona, and Daniel Oviedo. Who uses Transport Network Companies?: Characterization of Demand and its Relationship with Public Transit in Medellín. Inter-American Development Bank, September 2021. http://dx.doi.org/10.18235/0003621.

Full text
Abstract:
Transport Network Companies (TNCs) have become a popular alternative for mobility due to their ability to provide on-demand flexible mobility services. By offering smartphone-based, ride-hailing services capable of satisfying specific travel needs, these modes have transformed urban mobility worldwide. However, to date, few studies have examined the impacts in the Latin American context. This analysis is a critical first step in developing policies to promote efficient and sustainable transport systems in the Latin American region. This research examines the factors affecting the adoption of on-demand ride services in Medellín, Colombia. It also explores whether these services are substituting for or competing with public transit. First, it provides a descriptive analysis in which we relate the usage of platform-based services to neighborhood characteristics, socioeconomic information of individuals and families, and trip-level details. Next, the factors contributing to the choice of platform-based services are modeled using discrete choice models. The results show that wealthy and highly educated families with low vehicle availability are more likely to use TNCs compared to other groups in Medellín. Evidence also points to gender effects, with being female significantly increasing the probability of using a TNC service. Finally, we observe both transit-complementary and substitution patterns of use, depending on the context and by whom the service is requested.
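A binary logit is the simplest member of the discrete choice family used here; the sketch below computes the change in TNC-use probability implied by the gender dummy, with entirely hypothetical coefficients (the paper's estimates are not reproduced):

import numpy as np

def logit_prob(x, beta):
    # Binary logit probability of choosing a TNC given covariates x
    return 1.0 / (1.0 + np.exp(-x @ beta))

# Hypothetical coefficients: intercept, female, income (standardized), vehicles owned
beta = np.array([-1.2, 0.4, 0.6, -0.5])
x_female = np.array([1.0, 1.0, 0.5, 1.0])
x_male = np.array([1.0, 0.0, 0.5, 1.0])
print(logit_prob(x_female, beta) - logit_prob(x_male, beta))  # gender effect on P(TNC)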
APA, Harvard, Vancouver, ISO, and other styles
7

Weller, Joel I., Ignacy Misztal, and Micha Ron. Optimization of methodology for genomic selection of moderate and large dairy cattle populations. United States Department of Agriculture, March 2015. http://dx.doi.org/10.32747/2015.7594404.bard.

Full text
Abstract:
The main objectives of this research were to detect the specific polymorphisms responsible for observed quantitative trait loci and to develop optimal strategies for genomic evaluations and selection for moderate (Israel) and large (US) dairy cattle populations. A joint evaluation using all phenotypic, pedigree, and genomic data is the optimal strategy. The specific objectives were: 1) to apply strategies for determination of the causative polymorphisms based on the "a posteriori granddaughter design" (APGD), 2) to develop methods to derive unbiased estimates of gene effects derived from SNP chip analyses, 3) to derive optimal single-stage methods to estimate breeding values of animals based on marker, phenotypic and pedigree data, 4) to extend these methods to multi-trait genetic evaluations, and 5) to evaluate the results of long-term genomic selection, as compared to traditional selection. Nearly all of these objectives were met. The major achievements were: the APGD and the modified granddaughter designs were applied to the US Holstein population, and regions harboring segregating quantitative trait loci (QTL) were identified for all economic traits of interest. The APGD was able to find segregating QTL for all the economic traits analyzed, and confidence intervals for QTL location ranged from ~5 to 35 million base pairs. Genomic estimated breeding values (GEBV) for milk production traits in the Israeli Holstein population were computed by the single-step method and compared to results for the two-step method. The single-step method was extended to derive GEBV for multi-parity evaluation. Long-term analysis of genomic selection demonstrated that inclusion of pedigree data from previous generations may result in less accurate GEBV. Major conclusions are: predictions using single-step genomic best linear unbiased prediction (GBLUP) were the least biased, and that method appears to be the best tool for genomic evaluation of a small population, as it automatically accounts for parental index and allows for inclusion of female genomic information without additional steps. None of the methods applied to the Israeli Holstein population were able to derive GEBV for young bulls that were significantly better than parent averages. Thus we confirm previous studies that the main limiting factor for the accuracy of GEBV is the number of bulls with genotypes and progeny tests. Although 36 of the grandsires included in the APGD were genotyped for the BovineHDBeadChip, which includes 777,000 SNPs, we were not able to determine the causative polymorphism for any of the detected QTL. The number of valid unique markers on the BovineHDBeadChip is not sufficient to give a reasonable probability of finding the causative polymorphisms. Complete resequencing of the genome of approximately 50 bulls will be required, but this could not be accomplished within the framework of the current project due to funding constraints. Inclusion of pedigree data from older generations in the derivation of GEBV may result in less accurate evaluations.
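One standard ingredient of the genomic evaluations discussed, the VanRaden genomic relationship matrix built from SNP genotypes, is easy to sketch; note this is only one input to single-step GBLUP (which blends it with the pedigree relationship matrix), and the random genotypes below are placeholders:

import numpy as np

def vanraden_grm(M):
    # VanRaden (method 1) genomic relationship matrix from 0/1/2 genotypes:
    # G = Z Z' / (2 * sum_j p_j (1 - p_j)), with Z the allele-frequency-centered M
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(4)
M = rng.binomial(2, 0.5, size=(10, 1000)).astype(float)  # placeholder genotypes
G = vanraden_grm(M)
print(G.diagonal().mean())  # ~1 for unrelated individuals in Hardy-Weinberg equilibrium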
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Full text
Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions in the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, yield mapping remains a difficult task for tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two subsystems: a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees, and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisition. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, which achieved pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, a great improvement over previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so that high yield and profit can be achieved.
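The color-thermal fusion step can be illustrated generically as a naive-Bayes combination of two per-pixel fruit probabilities under a conditional-independence assumption; this is a generic stand-in for intuition only, not the authors' CTCP algorithm:

def fuse(p_color, p_thermal):
    # Naive-Bayes fusion of two per-pixel fruit probabilities, assuming the two
    # cues are conditionally independent given the class (and equal class priors)
    odds = (p_color / (1.0 - p_color)) * (p_thermal / (1.0 - p_thermal))
    return odds / (1.0 + odds)

print(fuse(0.7, 0.8))  # ~0.90: two moderately confident cues reinforce each other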
APA, Harvard, Vancouver, ISO, and other styles
9

Steudlein, Armin, Besrat Alemu, T. Matthew Evans, Steven Kramer, Jonathan Stewart, Kristin Ulmer, and Katerina Ziotopoulou. PEER Workshop on Liquefaction Susceptibility. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, May 2023. http://dx.doi.org/10.55461/bpsk6314.

Full text
Abstract:
Evaluation of seismic ground failure potential from liquefaction is generally undertaken in three steps. First, a susceptibility evaluation determines if the soil in a particular layer is in a condition where liquefaction triggering could potentially occur. This is followed by a triggering evaluation to estimate the likelihood of triggering given anticipated seismic demands, environmental conditions pertaining to the soil layer (e.g., its depth relative to the ground water table), and the soil state. For soils where triggering can be anticipated, the final step involves assessments of the potential for ground failure and its impact on infrastructure systems. This workshop was dedicated to the first of these steps, which often plays a critical role in delineating risk for soil deposits with high fines contents and clay-silt-sand mixtures of negligible to moderate plasticity. The workshop was hosted at Oregon State University on September 8-9, 2022 and was attended by 49 participants from the research, practice, and regulatory communities. Through pre-workshop polls, extended abstracts, workshop presentations, and workshop breakout discussions, it was demonstrated that leaders in the liquefaction community do not share a common understanding of the term "susceptibility" as applied to liquefaction problems. The primary distinction between alternate views concerns whether environmental conditions and soil state provide relevant information for a susceptibility evaluation, or if susceptibility is a material characteristic. For example, a clean, dry, dense sand in a region of low seismicity is very unlikely to experience triggering of liquefaction and would be considered not susceptible by adherents of a definition that considers environmental conditions and state. The alternative, and recommended, definition focusing on material susceptibility would consider the material as susceptible and would defer consideration of saturation, state, and loading effects to a separate triggering analysis. This material susceptibility definition has the advantage of maintaining a high degree of independence between the parameters considered in the susceptibility and triggering phases of the ground failure analysis. There exist differences between current methods for assessing material susceptibility – the databases include varying amounts of test data, the materials considered are distinct (from different regions) and have been tested using different procedures, and the models can be interpreted as providing different outcomes in some cases. The workshop reached a clear consensus that new procedures are needed and that they should be developed using a new research approach. The recommended approach involves assembling a database of information from sites for which in situ test data are available (borings with samples, CPTs), cyclic test data are available from high-quality specimens, and a range of index tests are available for important layers. It is not necessary that the sites have experienced earthquake shaking for which field performance is known, although such information is of interest where available. A considerable amount of data of this type is available from prior research studies and detailed geotechnical investigations for project sites by leading geotechnical consultants.
Once assembled and made available, this data would allow for the development of models to predict the probability of material susceptibility given various independent variables (e.g., in-situ tests indices, laboratory index parameters) and the epistemic uncertainty of the predictions. Such studies should be conducted in an open, transparent manner utilizing a shared database, which is a hallmark of the Next Generation Liquefaction (NGL) project.
APA, Harvard, Vancouver, ISO, and other styles
10

L51860 Pipeline Design for Mechanical Damage. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2001. http://dx.doi.org/10.55274/r0010145.

Full text
Abstract:
Mechanical damage by third-party excavation is the leading cause of pipeline failures in North America and Europe, yet pipeline design standards are mainly based on resistance to internal pressure and do not explicitly address mechanical damage. Without a design method that takes into account the likelihood and consequences of mechanical damage, the risk (referring to the total risk to life safety herein) implied by current design practices is not uniform for different pipelines with respect to mechanical damage. A reliability-based method provides a means to ensure that risks implied by the design are acceptable and uniform. To achieve this goal, the design criteria presented in this report were calibrated to meet a set of target reliabilities that were selected by a two-step approach. In the first step, the average target reliability level was set to match the overall historical level, on the basis that gas pipelines in general have a satisfactory safety record. The second step involved varying the target reliability levels in inverse proportion to the severity of failure consequences so as to maintain consistency of risk. The consequences, and hence the target reliability levels, were functions of pipeline diameter, operating pressure and location class. The tool used to develop the design check is a probabilistic model that relates the probability of failure due to mechanical damage to the design parameters (e.g., diameter, wall thickness, steel grade, pressure and location class) and the preventative measures put in place (e.g., burial depth, response time to notification, level of site-supervision for excavation, and frequency of right-of-way patrol). The model has two essential components: a fault tree that estimates interference frequency based on land use type and preventative measures, and a structural reliability model that estimates the probability of puncture or burst due to a gouged dent for a given interference event. The overall probabilistic framework is based on the Monte-Carlo simulation method. In predicting failure probabilities, the uncertainties in load and resistance variables and model errors were taken into consideration.
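The overall Monte Carlo framework, a fault-tree interference frequency feeding a conditional structural failure probability, can be sketched compactly; the hit rate and conditional failure probability below are placeholders for the report's calibrated fault-tree and gouged-dent models:

import numpy as np

rng = np.random.default_rng(5)
n_years = 200_000            # simulated pipeline-years
hit_rate = 0.05              # hypothetical interference events per year (fault tree)
p_fail_given_hit = 0.02      # placeholder for the gouged-dent puncture/burst model
hits = rng.poisson(hit_rate, size=n_years)
failures = rng.binomial(hits, p_fail_given_hit)
print((failures > 0).mean()) # estimated annual probability of failure (~1e-3)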
APA, Harvard, Vancouver, ISO, and other styles