Academic literature on the topic "Minimum P-value approach"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Minimum P-value approach".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Minimum P-value approach"

1

Ogura, Toru, and Chihiro Shiraishi. "Cutoff Value for Wilcoxon-Mann-Whitney Test by Minimum P-value: Application to COVID-19 Data." International Journal of Statistics and Probability 11, no. 3 (March 9, 2022): 1. http://dx.doi.org/10.5539/ijsp.v11n3p1.

Full text
Abstract:
In medical data, dependent and independent variables may appear uncorrelated when analyzed over their full range. However, when an independent variable is divided by a cutoff value, the dependent and independent variables may become correlated within each group. Furthermore, researchers often convert quantitative independent variables into binary data using a cutoff value and perform statistical analysis on the resulting data. It is therefore important to select the optimum cutoff value, since the result of the statistical analysis depends on it. Our study determines the optimal cutoff value when both the dependent and independent variables are quantitative. Piecewise linear regression analysis divides an independent variable into two groups at the cutoff value, and linear regression analysis is performed in each group. However, piecewise linear regression analysis may not obtain the optimal cutoff value when the data follow a non-normal distribution. Unfortunately, medical data often follow a non-normal distribution. We therefore performed the two-sided Wilcoxon-Mann-Whitney (WMW) test for all potential cutoff values and adopted the cutoff value that minimizes the P-value (the minimum P-value approach). Selecting a cutoff value by the minimum P-value approach is common with the log-rank and chi-squared tests, but not with the WMW test. First, using Monte Carlo simulations under various settings, we verified the performance of the cutoff value selected for the WMW test by the minimum P-value approach. Then, COVID-19 data were analyzed to demonstrate the practical applicability of the cutoff value.
APA, Harvard, Vancouver, ISO, and other styles
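The cutoff search described in this abstract can be illustrated compactly: run the two-sided Wilcoxon-Mann-Whitney test at every candidate cutoff of the independent variable and keep the cutoff with the smallest P-value. The sketch below is a minimal illustration assuming scipy is available; the variable names, the synthetic data, and the minimum group size of five are assumptions for illustration, not taken from the article.

```python
# Minimal sketch of a minimum P-value cutoff search with the two-sided
# Wilcoxon-Mann-Whitney test; names, data, and thresholds are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

def min_pvalue_cutoff(x, y, min_group_size=5):
    """Return the cutoff on x that minimizes the two-sided WMW P-value for y."""
    x, y = np.asarray(x), np.asarray(y)
    best_cutoff, best_p = None, 1.0
    for c in np.unique(x)[1:]:                  # candidate cutoffs: x < c versus x >= c
        low, high = y[x < c], y[x >= c]
        if min(len(low), len(high)) < min_group_size:
            continue                            # skip splits that leave a group too small
        p = mannwhitneyu(low, high, alternative="two-sided").pvalue
        if p < best_p:
            best_cutoff, best_p = c, p
    return best_cutoff, best_p

# Synthetic example (illustration only): the outcome shifts for ages >= 65.
rng = np.random.default_rng(0)
age = rng.integers(20, 90, 200)
outcome = np.where(age >= 65, rng.normal(2.0, 1.0, 200), rng.normal(0.0, 1.0, 200))
print(min_pvalue_cutoff(age, outcome))
```

As with any minimum P-value selection, the resulting P-value is optimistically biased and should not be reported as an ordinary significance test without correction.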
2

Lin, Yunzhi. "Robust inference for responder analysis: Innovative clinical trial design using a minimum p-value approach." Contemporary Clinical Trials Communications 3 (August 2016): 65–69. http://dx.doi.org/10.1016/j.conctc.2016.04.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ogura, T., and C. Shiraishi. "Search method for two cutoff values in clinical trial using Kruskal-Wallis test by minimum P-value approach." Journal of Applied Mathematics, Statistics and Informatics 18, no. 2 (December 1, 2022): 19–32. http://dx.doi.org/10.2478/jamsi-2022-0010.

Full text
Abstract:
Abstract In clinical trials, age is often converted to binary data by the cutoff value. However, when looking at a scatter plot for a group of patients whose age is larger than or equal to the cutoff value, age and outcome may not be related. If the group whose age is greater than or equal to the cutoff value is further divided into two groups, the older of the two groups may appear to be at lower risk. In this case, it may be necessary to further divide the group of patients whose age is greater than or equal to the cutoff value into two groups. This study provides a method for determining which of the two or three groups is the best split. The following two methods are used to divide the data. The existing method, the Wilcoxon-Mann-Whitney test by minimum P-value approach, divides data into two groups by one cutoff value. A new method, the Kruskal-Wallis test by minimum P-value approach, divides data into three groups by two cutoff values. Of the two tests, the one with the smaller P-value is used. Because this was a new decision procedure, it was tested using Monte Carlo simulations (MCSs) before application to the available COVID-19 data. The MCS results showed that this method performs well. In the COVID-19 data, it was optimal to divide into three groups by two cutoff values of 60 and 70 years old. By looking at COVID-19 data separated into three groups according to the two cutoff values, it was confirmed that each group had different features. We provided the R code that can be used to replicate the results of this manuscript. Another practical example can be performed by replacing x and y with appropriate ones.
APA, Harvard, Vancouver, ISO, and other styles
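The two-cutoff extension described in this abstract can be sketched in the same style: evaluate every ordered pair of candidate cutoffs, run the Kruskal-Wallis test on the resulting three groups, and keep the pair with the smallest P-value, which can then be compared with the best single-cutoff split. This is a minimal sketch assuming scipy; the names and minimum group size are illustrative assumptions, not the authors' published R code.

```python
# Minimal sketch of a two-cutoff search with the Kruskal-Wallis test;
# variable names and the minimum group size are illustrative assumptions.
import itertools
import numpy as np
from scipy.stats import kruskal

def min_pvalue_two_cutoffs(x, y, min_group_size=5):
    """Return the pair of cutoffs on x that minimizes the Kruskal-Wallis P-value for y."""
    x, y = np.asarray(x), np.asarray(y)
    best_pair, best_p = None, 1.0
    for c1, c2 in itertools.combinations(np.unique(x)[1:], 2):   # c1 < c2
        groups = [y[x < c1], y[(x >= c1) & (x < c2)], y[x >= c2]]
        if min(len(g) for g in groups) < min_group_size:
            continue                            # skip splits with an undersized group
        p = kruskal(*groups).pvalue
        if p < best_p:
            best_pair, best_p = (c1, c2), p
    return best_pair, best_p
```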
4

Page, Robert, and Eiki Satake. "Beyond P values and Hypothesis Testing: Using the Minimum Bayes Factor to Teach Statistical Inference in Undergraduate Introductory Statistics Courses." Journal of Education and Learning 6, no. 4 (August 2, 2017): 254. http://dx.doi.org/10.5539/jel.v6n4p254.

Full text
Abstract:
While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian methods should be included in classes that cover the P value and Hypothesis Testing. We discuss several fallacies associated with the P value and Hypothesis Testing, including why Fisher’s P value and Neyman-Pearson’s Hypothesis Tests are incompatible with each other and cannot be combined to answer the question “What is the probability of the truth of one’s belief based on the evidence?” We go on to explain how the Minimum Bayes Factor can be used as an alternative to frequentist methods, and why the Bayesian approach results in more accurate, credible, and relevant test results when measuring the strength of the evidence. We conclude that educators must realize the importance of teaching the correct interpretation of Fisher’s P value and its alternative, the Bayesian approach, to students in an introductory statistics course.
APA, Harvard, Vancouver, ISO, and other styles
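For orientation, the minimum Bayes factor discussed in this article is usually computed from the observed P-value through the corresponding z-statistic. One widely used form, stated here as background and not quoted from the article:

```latex
% Minimum Bayes factor from a two-sided P-value p (Gaussian approximation;
% a commonly used form, assumed here rather than quoted from the article):
\[
  z = \Phi^{-1}\!\left(1 - \tfrac{p}{2}\right), \qquad
  \mathrm{MBF} = \exp\!\left(-\tfrac{z^{2}}{2}\right).
\]
% Example: p = 0.05 gives z ~ 1.96 and MBF ~ 0.15, i.e. the data are at most
% about 6.8 times more likely under the alternative than under the null.
```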
5

Danko, Jakub, Vincent Šoltés, and Tomáš Bindzár. "New Approach to Portfolio Creation Using the Minimum Spanning Tree Theory and Its Robust Evaluation." Quality Innovation Prosperity 24, no. 2 (July 31, 2020): 22. http://dx.doi.org/10.12776/qip.v24i2.1450.

Full text
Abstract:
Purpose: The aim of this paper is to describe another possibility of portfolio creation using the minimum spanning tree method. The research contributes to the existing body of knowledge by using, and subsequently developing, a new approach based on graph theory that is suitable for an individual investor who wants to create an investment portfolio. Methodology/Approach: The analyzed data are divided into two (disjoint) sets – a training set and a testing set. Portfolio comparisons were carried out during the test period, which always followed immediately after the training period and had a length of one year. For the sake of objectivity of the comparison, all proposed portfolios always consist of ten shares of equal weight. Findings: Based on the results of the analysis, our proposed method offers (on average) the best appreciation of the invested resources and also the least risky investment in terms of relative variability, which could be considered very attractive from an individual investor's point of view. Research Limitation/Implication: In our paper, we did not consider any fees related to the purchase and holding of financial instruments in the portfolio. For periods with extreme market returns (sharp increases or decreases), the use of Pearson's correlation coefficient is not appropriate. Originality/Value of paper: The main practical benefit of the research is that it presents and offers an interesting and practical investment strategy for an individual investor who wants to take an active approach to investment.
APA, Harvard, Vancouver, ISO, and other styles
6

Wilks, D. S. "On 'Field Significance' and the False Discovery Rate." Journal of Applied Meteorology and Climatology 45, no. 9 (September 1, 2006): 1181–89. http://dx.doi.org/10.1175/jam2404.1.

Full text
Abstract:
The conventional approach to evaluating the joint statistical significance of multiple hypothesis tests (i.e., "field," or "global," significance) in meteorology and climatology is to count the number of individual (or "local") tests yielding nominally significant results and then to judge the unusualness of this integer value in the context of the distribution of such counts that would occur if all local null hypotheses were true. The sensitivity (i.e., statistical power) of this approach is potentially compromised both by the discrete nature of the test statistic and by the fact that the approach ignores the confidence with which locally significant tests reject their null hypotheses. An alternative global test statistic that has neither of these problems is the minimum p value among all of the local tests. Evaluation of field significance using the minimum local p value as the global test statistic, which is also known as the Walker test, has strong connections to the joint evaluation of multiple tests in a way that controls the "false discovery rate" (FDR, or the expected fraction of local null hypothesis rejections that are incorrect). In particular, using the minimum local p value to evaluate field significance at a level α_global is nearly equivalent to the slightly more powerful global test based on the FDR criterion. An additional advantage shared by Walker's test and the FDR approach is that both are robust to spatial dependence within the field of tests. The FDR method not only provides a more broadly applicable and generally more powerful field significance test than the conventional counting procedure but also allows better identification of locations with significant differences, because fewer than α_global × 100% (on average) of apparently significant local tests will have resulted from local null hypotheses that are true.
APA, Harvard, Vancouver, ISO, and other styles
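For orientation, the Walker test referred to in this abstract rejects the global (field) null hypothesis when the smallest of the K local P-values falls below a threshold determined by the global test level. A standard statement of the criterion, given here as background and not quoted from the paper:

```latex
% Walker criterion: with ordered local P-values p_(1) <= ... <= p_(K),
% reject global (field) significance at level alpha_global when
\[
  p_{(1)} \;\le\; \alpha_{\mathrm{Walker}}
          \;=\; 1 - \bigl(1 - \alpha_{\mathrm{global}}\bigr)^{1/K}
          \;\approx\; \frac{\alpha_{\mathrm{global}}}{K},
\]
% which coincides with the first (smallest-P-value) threshold of the
% Benjamini-Hochberg false discovery rate procedure, whose i-th threshold
% is (i/K) * alpha_global.
```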
7

Torres-Mantilla, H. A., L. Cuesta-Herrera, J. E. Andrades-Grassi, and G. Bianchi. "Simulation of inference test performance for minimum inhibitory concentration censored data." Journal of Physics: Conference Series 2153, no. 1 (January 1, 2022): 012013. http://dx.doi.org/10.1088/1742-6596/2153/1/012013.

Full text
Abstract:
The estimation of the minimum inhibitory concentration is usually performed by a method of serial dilutions by a factor of 2, introducing an overestimation of antimicrobial efficacy, quantified by a simulation model which shows that the variability of the bias is higher for the standard deviation and depends on the metric distance to the values of the concentrations used. We use a methodological approach based on modeling and simulation of the measurement error of physical variables with censored information, proposing a new inference method based on the calculation of the exact probability for the set of possible samples from n measurements, which allows quantifying the p-value in one- or two-independent-sample tests for the comparison of censored data means. Tests based on exact probability methods offer a reasonable solution for small sample sizes, with statistical power varying according to the hypothesis evaluated, providing insight into the limitations of censored data analysis and a tool for decision making in the diagnosis of antimicrobial efficacy.
APA, Harvard, Vancouver, ISO, and other styles
8

Özdalyan, Fırat, Hikmet Gümüş, Celal Gençoğlu, Mert Tunar, Caner Çetinkaya, and Berkant Muammer Kayatekin. "Comparison of the biomechanical parameters during drop jump on compliant and noncompliant surfaces: A new methodological approach." Turkish Journal of Sports Medicine 57, no. 1 (December 11, 2021): 15–20. http://dx.doi.org/10.47447/tjsm.0553.

Full text
Abstract:
Objective: Bilateral plyometric training of the lower extremities has been shown to provide improvement in vertical force production. However, designing a proper plyometric training program and choosing the appropriate surface is critical, otherwise the risk of injury and lower extremity joint pathologies increases. The aim of this study was to compare biomechanical parameters between mini-trampoline and noncompliant surface during drop jumping. Materials and Methods: Thirty-four male adults participated in the study. Active markers were placed on the left knee, ankle and hip joints of the participants. Also, a force sensing resistor was placed under the participants’ left shoes. During drop jumping, the knee joint angles were recorded by the camera while a data set of reaction forces and loading rates were collected using a force sensing resistor. Data were compared with paired samples T-test. The level of significance was set at p ≤ 0.05. Results: The mean values of maximum reaction forces and loading rates were greater on the noncompliant surface (p < 0.001). Mean knee joint angles for frame at which the knee angle is minimum and the frames one before and one after the frame at which the minimum value is obtained were similar between surfaces, however, were found to be smaller on noncompliant surface for the remaining eight frames (p < 0.05). Conclusion: This study indicates that the range of bending values in the knee joint is greater on noncompliant surface compared to mini-trampoline during drop jump. Since the mini-trampoline resulted in lower reaction forces and loading rates, it can be used as an exercise equipment to minimize the injury risk of plyometric training.
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Chenchen, and Shihong Yue. "Minimum Sample Size Estimate for Classifying Invasive Lung Adenocarcinoma." Applied Sciences 12, no. 17 (August 24, 2022): 8469. http://dx.doi.org/10.3390/app12178469.

Full text
Abstract:
Statistical Learning Theory (SLT) plays an important role in prediction estimation and machine learning when only limited samples are available. At present, determining how many samples are necessary under given circumstances for prediction accuracy is still an unknown. In this paper, the medical diagnosis on lung cancer is taken as an example to solve the problem. Invasive adenocarcinoma (IA) is a main type of lung cancer, often presented as ground glass nodules (GGNs) in patient’s CT images. Accurately discriminating IA from non-IA based on GGNs has important implications for taking the right approach to treatment and cure. Support Vector Machine (SVM) is an SLT application and is used to classify GGNs, wherein the interrelation between the generalization and the lower bound of necessary sampling numbers can be effectively recovered. In this research, to validate the interrelation, 436 GGNs were collected and labeled using surgical pathology. Then, a feature vector was constructed for each GGN sample through the fully connected layer of AlexNet. A 10-dimensional feature subset was then selected with the p-value calculated using Analysis of Variance (ANOVA). Finally, four sets with different sample sizes were used to construct an SVM classifier. Experiments show that a theoretical estimate of minimum sample size is consistent with actual values, and the lower bound on sample size can be solved under various generalization requirements.
APA, Harvard, Vancouver, ISO, and other styles
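The classification pipeline outlined in this abstract (deep features, ANOVA-based selection of a 10-dimensional subset, then an SVM) can be sketched as follows. The synthetic data, scikit-learn calls, and hyperparameters are assumptions for illustration, not the authors' code.

```python
# Minimal sketch: select 10 features by ANOVA F-test (lowest P-values),
# then fit an SVM; data and hyperparameters are illustrative stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(436, 4096))      # stand-in for AlexNet fully connected features
y = rng.integers(0, 2, 436)           # stand-in for IA / non-IA labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(SelectKBest(f_classif, k=10),   # 10 features with the best ANOVA scores
                    StandardScaler(),
                    SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```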
10

Wahyuningsih, Trionika Dian, Sri Sulistijowati Handajani, and Diari Indriati. "Penerapan Generalized Cross Validation dalam Model Regresi Smoothing Spline pada Produksi Ubi Jalar di Jawa Tengah." Indonesian Journal of Applied Statistics 1, no. 2 (March 13, 2019): 117. http://dx.doi.org/10.13057/ijas.v1i2.26250.

Full text
Abstract:
Sweet potato is a useful plant as a source of carbohydrates and proteins, and it is used as animal feed and as an industrial ingredient. Based on data from the Badan Pusat Statistik (BPS), the year-to-year fluctuations in sweet potato production in Central Java are caused by many factors. When sweet potato production and the factors that affect it, such as harvest area, the allocation of subsidized urea fertilizer, and the allocation of subsidized organic fertilizer, are described as a pattern of relationships, they do not show a specific pattern and do not follow a particular distribution. Therefore, the sweet potato production model can be expressed as a nonparametric regression model. The nonparametric regression approach used in this study is smoothing spline regression. The method used in smoothing spline regression is generalized cross validation (GCV). The value of the smoothing parameter (λ) is chosen at the minimum GCV value. The results of the study show that the optimum λ values for the factors harvest area, urea fertilizer, and organic fertilizer are 5.57905e-14, 2.51426e-06, and 3.227217e-13, which yield minimum GCV values of 2.29272e-21, 1.38391e-16, and 3.46813e-24, respectively.
Keywords: Sweet potato; nonparametric; smoothing spline; generalized cross validation.
APA, Harvard, Vancouver, ISO, and other styles
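For reference, the generalized cross validation criterion used in this article to choose the smoothing parameter has the standard textbook form below; the notation is assumed here, not quoted from the article.

```latex
% GCV for a smoothing spline with smoother (hat) matrix S_lambda:
\[
  \mathrm{GCV}(\lambda) =
    \frac{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{f}_{\lambda}(x_i)\bigr)^{2}}
         {\bigl(1 - \operatorname{tr}(S_{\lambda})/n\bigr)^{2}},
  \qquad
  \hat{\lambda} = \arg\min_{\lambda}\, \mathrm{GCV}(\lambda).
\]
```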

Theses on the topic "Minimum P-value approach"

1

Rota, Matteo. "Cut-point finding methods for continuous biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/40114.

Full text
Abstract:
My PhD dissertation deals with statistical methods for cut-point finding for continuous biomarkers. Categorization is often needed for clinical decision making when dealing with diagnostic (or prognostic) biomarkers and a dichotomous or censored failure time outcome. This allows the definition of two or more prognostic risk groups, or also patient’s stratifications for inclusion in randomized clinical trials (RCTs). We investigate the following cut-point finding methods: minimum P-value, Youden index, concordance probability and point closest to-(0,1) corner in the ROC plane. We compare them by assuming both Normal and Gamma biomarker distributions, showing whether they lead to the identification of the same true cut-point and further investigating their performance by simulation. Within the framework of censored survival data, we will consider here new estimation approaches of the optimal cut-point, which use a conditional weighting method to estimate the true positive and false positive fractions. Motivating examples on real datasets are discussed within the dissertation for both the dichotomous and censored failure time outcome. In all simulation scenarios, the point closest-to-(0,1) corner in the ROC plane and concordance probability approaches outperformed the other methods. Both these methods showed good performance in the estimation of the optimal cut-point of a biomarker. However, to improve results communicability, the Youden index or the concordance probability associated to the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is not recommended because its objective function is computed under the null hypothesis of absence of association between the true disease status and X. This is in contrast with the presence of some discrimination potential of the biomarker X that leads to the dichotomization issue. The investigated cut-point finding methods are based on measures, i.e. sensitivity and specificity, defined conditionally on the outcome. My PhD dissertation opens the question on whether these methods could be applied starting from predictive values, that typically represent the most useful information for clinical decisions on treatments. However, while sensitivity and specificity are invariant to disease prevalence, predictive values vary across populations with different disease prevalence. This is an important drawback of the use of predictive values for cut-point finding. More in general, great care should be taken when establishing a biomarker cut-point for clinical use. Methods for categorizing new biomarkers are often essential in clinical decision-making even if categorization of a continuous biomarker is gained at a considerable loss of power and information. In the future, new methods involving the study of the functional form between the biomarker and the outcome through regression techniques such as fractional polynomials or spline functions should be considered to alternatively define cut-points for clinical use. Moreover, in spite of the aforementioned drawback related to the use of predictive values, we also think that additional new methods for cut-point finding should be developed starting from predictive values.
APA, Harvard, Vancouver, ISO, and other styles
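The ROC-based cut-point criteria compared in this dissertation can be summarized in one common notation, with Se(c) and Sp(c) the sensitivity and specificity at candidate cut-point c; these are standard textbook forms, not quoted from the thesis.

```latex
% Common cut-point selection criteria (standard forms, assumed here):
\begin{align*}
  \text{Youden index:}            &\quad \hat{c} = \arg\max_{c}\ \bigl(Se(c) + Sp(c) - 1\bigr)\\
  \text{Concordance probability:} &\quad \hat{c} = \arg\max_{c}\ Se(c)\,Sp(c)\\
  \text{Closest to }(0,1)\text{:} &\quad \hat{c} = \arg\min_{c}\ \sqrt{\bigl(1-Se(c)\bigr)^{2}+\bigl(1-Sp(c)\bigr)^{2}}\\
  \text{Minimum P-value:}         &\quad \hat{c} = \arg\min_{c}\ p(c),
\end{align*}
% where p(c) is the P-value of the association test after dichotomizing at c.
```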

Book chapters on the topic "Minimum P-value approach"

1

Pavlou, Antonis, Michalis Doumpos, and Constantin Zopounidis. "The Robustness of Portfolio Optimization Models." In Advances in Finance, Accounting, and Economics, 210–29. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-6114-9.ch008.

Full text
Abstract:
The optimization of investment portfolios is a topic of major importance in financial decision making, and many relevant models can be found in the literature. These models extend the traditional mean-variance framework using a variety of other risk-return measures. Existing comparative studies have adopted a rather restrictive approach, focusing solely on the minimum risk portfolio without considering the whole set of efficient portfolios, which are also relevant for investors. This chapter focuses on the performance of the whole efficient set. To this end, the authors examine the out-of-sample robustness of efficient portfolios derived by popular optimization models, namely the traditional mean-variance model, mean-absolute deviation, conditional value at risk, and a multi-objective model. Tests are conducted using data for S&P 500 stocks over the period 2005-2016. The results are analyzed through novel performance indicators representing the deviations between historical (estimated) efficient frontiers, actual out-of-sample efficient frontiers, and realized out-of-sample portfolio results.
APA, Harvard, Vancouver, ISO, and other styles
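For context, the mean-variance model that anchors the efficient frontiers compared in this chapter is the classical Markowitz program; a standard statement, assumed here rather than quoted from the chapter:

```latex
% Classical mean-variance (Markowitz) problem traced over target returns:
\[
  \min_{w}\ w^{\top}\Sigma\,w
  \quad \text{s.t.} \quad
  \mu^{\top}w = r_{\mathrm{target}}, \qquad
  \mathbf{1}^{\top}w = 1, \qquad
  w \ge 0,
\]
% where Sigma is the covariance matrix and mu the vector of expected returns;
% sweeping r_target over its feasible range traces out the efficient frontier.
```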

Conference papers on the topic "Minimum P-value approach"

1

Cahyono, Y. "New Approach in Developing Minimum Facility Platform (MFP) Through Standardization Design in Medco E&P Offshore." In Indonesian Petroleum Association 44th Annual Convention and Exhibition. Indonesian Petroleum Association, 2021. http://dx.doi.org/10.29118/ipa21-f-27.

Full text
Abstract:
The purpose of this paper is to explain a new approach taken by Medco E&P Indonesia to achieve cost optimization and schedule efficiency in the development of marginal fields, which makes it necessary to streamline the design process for platforms and apply value engineering considerations. Developing marginal fields with minimum facilities platforms that rely on existing infrastructure has enabled oil and gas operators to build a minimum platform without the large capital expenditure associated with new production facilities. This paper shows that a minimum facilities platform (MFP) can be selected and developed at relatively low cost and designed to suit the specific topside surface facility and the fabrication and/or installation requirements, according to the weight and quantity of equipment on the platform. The biggest challenges in the design of a minimum facilities platform are that the fields have a water depth of 300 feet (90 m) on average, the environmental conditions, and the lack of previous experience in designing minimum facilities platforms to cover topside requirements such as drilling and surface facilities, as well as substructure requirements such as monopile type, jacket types, and conductor legs. A classification of minimum facility platform design concept types and a design selection guideline can therefore be developed quickly within a range of structural solutions and topside designs that are inherently safe as far as practicable, fit for purpose, and reliable. This new approach successfully brings several minimum facilities platforms to implementation in marginal fields, extending the remaining life of the existing facilities economically and in a value engineering manner.
APA, Harvard, Vancouver, ISO, and other styles
2

Frieden, B. Roy. "Fisher information, disorder, and optical signal estimation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.mn6.

Full text
Abstract:
Optical signals may be regarded as probability laws (on photon events). The problem of optical signal recovery may therefore be viewed as a problem in estimating a probability law p(x). Maximum entropy has, in the past, been used for this purpose. Here, we discuss instead the use of minimum Fisher information I = ∫ [p′(x)]² / p(x) dx. Consider a physical system consisting of a random aggregate of particles or photons. Consider the experiment of measuring one particle's coordinate y, and from this, best estimating the mean coordinate θ of the ensemble, where y = θ + x. As time passes, the particles randomly become more spread out, so that the error in the estimate should increase. But the error goes inversely as I. Therefore, I should statistically decrease with time. With no constraint on the system, as t → ∞, I → 0, defining a maximally random law p(x) = constant. However, in the presence of a physical constraint, I should approach a finite value, obeying I = minimum. When the constraint is linear in the mean kinetic energy of the system, the solution p(x) obeys the stationary differential equation for the system. In this way, the Schrödinger (energy) wave equation, Helmholtz wave equation, diffusion equation, and Maxwell-Boltzmann law may be derived from one classical principle of disorder.
APA, Harvard, Vancouver, ISO, and other styles
3

Bienvenu, Meghyn, Quentin Manière, and Michaël Thomazo. "Cardinality Queries over DL-Lite Ontologies." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/248.

Full text
Abstract:
Ontology-mediated query answering (OMQA) employs structured knowledge and automated reasoning in order to facilitate access to incomplete and possibly heterogeneous data. While most research on OMQA adopts (unions of) conjunctive queries as the query language, there has been recent interest in handling queries that involve counting. In this paper, we advance this line of research by investigating cardinality queries (which correspond to Boolean atomic counting queries) coupled with DL-Lite ontologies. Despite its apparent simplicity, we show that such an OMQA setting gives rise to rich and complex behaviour. While we prove that cardinality query answering is tractable (TC0) in data complexity when the ontology is formulated in DL-Lite-core, the problem becomes coNP-hard as soon as role inclusions are allowed. For DL-Lite-pos-H (which allows only positive axioms), we establish a P-coNP dichotomy and pinpoint the TC0 cases; for DL-Lite-core-H (allowing also negative axioms), we identify new sources of coNP complexity and also exhibit L-complete cases. Interestingly, and in contrast to related tractability results, we observe that the canonical model may not give the optimal count value in the tractable cases, which led us to develop an entirely new approach based upon exploring a space of strategies to determine the minimum possible number of query matches.
APA, Harvard, Vancouver, ISO, and other styles
4

Couton, D., M. Hoang-Hong, J. Robert, J. L. Tuhault, and S. Doan-Kim. "Experimental Study of Turbulent Local Enhanced Heat Transfer in Wavy-Wall Channel." In ASME/JSME 2007 Thermal Engineering Heat Transfer Summer Conference collocated with the ASME 2007 InterPACK Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/ht2007-32825.

Full text
Abstract:
The aim of this study is to analyze the phenomena of heat transfer enhancement between two periodic sinusoidal walls for a single gas flow. The experimental set-up is characterized by a few geometrical parameters: amplitude of wall waviness (A), channel height (H), fin wavy-channel width (ω), fin period (l), and total wavy length (L). Their combination reduces to the wall aspect ratio γ, the cross-section aspect ratio α, and the channel spacing ratio ε. The Reynolds number, defined on the hydraulic diameter and the bulk velocity, is greater than 4000. A constant heat flux is maintained on the second lateral wall. For Re = 5700, we observe an entrance region from the first to the fourth period; beyond it, the velocity profiles are self-similar. A shear layer is generated just downstream of the crest and develops in its wake up to the concavity area. The thermal experimental approach consists of local measurements of the convective heat transfer coefficient along the walls, within the viscous sublayer. The heat transfer profile shows an increase of 15% from the crest, and the maximum is located at the first quarter of the period, close to the separation point. Beyond this point, the heat transfer decreases by 50% and the minimum is located close to the reattachment point. The heat transfer then increases up to the next crest. The same phenomenon is observed in the following periods of the channel. To explain these results, we calculate the turbulence terms obtained from the classical equations of fluid mechanics. The turbulence production (P) presents a maximum in the core of the shear layer, where the Reynolds stresses and the heat transfer are maxima. A good correlation is obtained between turbulence production and heat transfer. The flow pattern (mean, fluctuating, and turbulence terms) is measured with the PIV technique, based on 1000 pairs of images, in order to analyze the vortices that develop in the shear layer.
APA, Harvard, Vancouver, ISO, and other styles
5

Weitze, W. F. "Use of Average Temperature in Fen Calculations." In ASME 2020 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/pvp2020-21009.

Full text
Abstract:
The effect of the light water reactor environment on fatigue damage is referred to as environmentally assisted fatigue (EAF). This effect is accounted for by applying an environmental fatigue correction factor, Fen, to calculated fatigue usage. In providing guidelines for calculation of Fen, Revision 0 of NUREG/CR-6909 [1] permits temperature averaging for the case of a constant strain rate and linear temperature response, and permits it in other cases as well, but only if the average temperature used produces results that are consistent with the modified rate approach [1, p. A.5]. Revision 1 of NUREG/CR-6909 [2] modifies this slightly, requiring that the threshold temperature be used in averaging instead of the minimum if the minimum is below the threshold [2, p. A-6]. In both cases, the benchmark for accuracy is the modified rate approach [2, Section 4.4]. In this paper, we use real-world examples to compare Fen values based on the modified rate approach with those using average strain rate and temperature. We also examine how to select the rise time used to calculate the average strain rate in those cases where it is not obvious. We find that temperature averaging is conservative if the rise time and other parameters are correctly selected.
APA, Harvard, Vancouver, ISO, and other styles
6

Nicol, David, Philip C. Malte, Jenkin Lai, Nick N. Marinov, David T. Pratt, and Robert A. Corr. "NOx Sensitivities for Gas Turbine Engines Operated on Lean-Premixed Combustion and Conventional Diffusion Flames." In ASME 1992 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/92-gt-115.

Full text
Abstract:
NOx exhaust emissions for gas turbine engines with lean-premixed combustors are examined as a function of combustor pressure (P), mean residence time (τ), fuel-air equivalence ratio (φ), and inlet mixture temperature (Ti). The fuel is methane. The study is accomplished through chemical reactor modeling of the combustor, using CH4 oxidation and NOx kinetic mechanisms currently available. The NOx is formed by the Zeldovich, prompt, and nitrous oxide mechanisms. The combustor is assumed to have a uniform φ, and is modeled using two reactors in series. The first reactor is a well-stirred reactor (WSR) operating at incipient extinction. This simulates the initiation and stabilization of the combustion process. The second reactor is a plug-flow reactor (PFR), which simulates the continuation of the combustion process, and permits it to approach completion. For comparison, two variations of this baseline model are also considered. In the first variation, the combustor is modeled by extending the WSR until it fills the whole combustor, thereby eliminating the PFR. In the second variation, the WSR is eliminated, and the combustor is treated as a PFR with recycle. These two variations do not change the NOx values significantly from the results obtained using the baseline model. The pressure sensitivity of the NOx is examined. This is found to be minimum, and essentially nil, when the conditions are P = 1 to 10atm, Ti = 600K, and φ = 0.6. However, when one or more of these parameters increases above the values listed, the NOx dependence on the pressure approaches P raised to a power of 0.4-to-0.6. The source of the NOx is also examined. For the WSR operating at incipient extinction, the NOx is contributed mainly by the prompt and nitrous oxide mechanisms, with the prompt contribution increasing as φ increases. However, for the combustor as a whole, the nitrous oxide mechanism predominates over the prompt mechanism, and for φ of 0.5-to-0.6, competes strongly with the Zeldovich mechanism. For φ greater than 0.6-to-0.7, the Zeldovich mechanism is the predominant source of the NOx for the combustor as a whole. Verification of the model is based on the comparison of its output to results published recently for a methane-fired, porous-plate burner operated with variable P, φ, and Ti. The model shows agreement to these laboratory results within a factor two, with almost exact agreement occurring for the leanest and coolest cases considered. Additionally, comparison of the model to jet-stirred reactor NOx data is shown. Good agreement between the model results and the data is obtained for most of the jet-stirred reactor operating range. However, the NOx predicted by the model exhibits a stronger sensitivity on the combustion temperature than indicated by the jet-stirred reactor data. Although the emphasis of the paper is on lean-premixed combustors, NOx modeling for conventional diffusion-flame combustors is presented in order to provide a complete discussion of NOx for gas turbine engines.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Zhuoran, and Guan Qin. "Pore-Scale Study of Effects of Hydrate Morphologies on Dissociation Evolutions Using Lattice-Boltzmann Method." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31067-ms.

Full text
Abstract:
The natural gas hydrate, plentifully distributed in ocean floor sediments and permafrost regions, is considered a promising unconventional energy resource. The investigation of hydrate dissociation mechanisms in porous media is essential to optimize current production methods. To provide microscopic insight into the hydrate dissociation process, we developed a Lattice Boltzmann (LB) model to investigate this multi-physicochemical process, including mass transfer, conjugate heat transfer, and gas transport. The methane hydrate dissociation is regarded as a reactive transport process coupled with heat transfer. The methane transport in porous media is modelled by the generalized LB method with the Bhatnagar-Gross-Krook (BGK) collision model. The mass transfer from hydrate to the fluid phase is described by the hydrate kinetic and thermodynamic models. Finally, a conjugate heat transfer LB model for heterogeneous media is added for solving the energy equation. In the numerical experiments, we primarily investigated the effects of different hydrate distribution morphologies, such as pore-filling, grain-coating, and dispersed, on the hydrate dissociation process. From the simulations, we found that, in general, the dissociation rate and the average methane density rapidly approached their maximum values and then decreased with fluctuations during the dissociation process. This trend arises because the endothermic reaction heat decreases the temperature, which decelerates the dissociation. The average temperature decreased to its minimum value instantaneously as the hydrate started to dissociate. After the minimum, the average temperature increased slowly, accompanied by thermal stimulation and hydrate consumption, displaying a valley-shaped temperature curve. We also found that the whole dissociation process and the permeability-saturation relations are significantly affected by the hydrate morphologies. Under the same hydrate saturation, the dispersed case dissociates the fastest, whereas the grain-coating case is the slowest. Furthermore, we proposed a general permeability-saturation relation applicable to all three cases, filling the gap in the current relative permeability models. The LB model proposed in this study is capable of simulating the complex physicochemical hydrate dissociation process. Considering the impacts of the thermodynamic conditions (P, T), we investigated their influences on the coupled interaction between dissociation and seepage under three different morphologies and proposed a general permeability-saturation relationship. The results can be applied as input to adjust parameters in the continuum model and provide guidance for exploring clean energy with environmental considerations.
APA, Harvard, Vancouver, ISO, and other styles
8

Ünalmis, Ö. Haldun. "Downhole Three-Phase Flow Measurement Using Sound Speed Measured by Local or Distributed Acoustic Sensing." In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210072-ms.

Full text
Abstract:
In-well multiphase flow measurement continues to be a challenging task in the oil and gas industry. One promising technology to achieve this goal is the distributed acoustic sensing (DAS) system deployed downhole along a fiber. A DAS system is usually capable of measuring speed of sound (SoS) and, depending on the type of application and how the system is installed/configured, it may also measure flow velocity. In its current state, the DAS technology is still not fully explored in multiphase flow measurement for reasons including, but not limited to, the lack of flow algorithms and methodologies that can use measurements in a combinative and coherent approach. The current work introduces a game-changing methodology for applying the DAS and other sound-measuring optical or electronic technologies to measure 3-phase flow. The 3-phase flow measurement methodology is based on measurements of SoS at different locations along the well where the pressure is greater than the bubble-point pressure (P > Pb) at the first location and P < Pb at the second location. A bulk velocity measurement is also necessary at one of the locations, preferably at the second location. The minimum required measurements to resolve 3-phase flow rates are SoS at both locations (SoS1 and SoS2), pressure/temperature (P/T) values at both locations (P1, P2, T1, T2), and the bulk velocity measurement at the second location (V2). Using these measurements, phase flow rate calculations in a 3-phase flow are possible. A Lego-like approach may be used with various sensor technologies to obtain these required measurements, which are then used in a consecutive manner in 2-phase and 3-phase solution domains obtained using the Wood and Korteweg-Lamb equations. The methodology is fully explained and the analytical solutions for 3-phase flow measurement are explicitly provided in a step-by-step approach. This approach provides significant advantages over the traditional methods. For example, SoS measurements along the well at multiple locations, using the same sensor technology or combining different sensor technologies, make this methodology highly flexible and applicable to custom-fit solutions. The method is independent of the sensor type as long as the sensors measure SoS, though the ideal systems that can adopt it easily and efficiently are DAS and optical flowmeters (OFMs). Additionally, a developing case history involving downhole OFMs installed in a North Sea field-wide application is discussed. The methodology may be implemented for a special case in which SoS is measured at the same location but at different times. This new methodology for measuring downhole 3-phase flow furthers the understanding of downhole multiphase flow measurement. It can be implemented in existing wells with optical infrastructure by adding an appropriate topside optoelectronics system when needed at later phases of production.
APA, Harvard, Vancouver, ISO, and other styles
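For context, the Wood equation referenced in this abstract links the measured sound speed of a mixture to the volume fractions of its phases; a standard statement (the notation is an assumption, not quoted from the paper):

```latex
% Wood's equation for the sound speed c_m of a homogeneous mixture of phases
% with volume fractions alpha_i, densities rho_i, and sound speeds c_i:
\[
  \frac{1}{\rho_{m} c_{m}^{2}} = \sum_{i} \frac{\alpha_{i}}{\rho_{i} c_{i}^{2}},
  \qquad
  \rho_{m} = \sum_{i} \alpha_{i}\rho_{i},
  \qquad
  \sum_{i} \alpha_{i} = 1,
\]
% so sound speeds measured above and below the bubble point constrain the
% phase fractions entering the three-phase rate calculation.
```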
9

Chen, Gong. "Characterization of Variations in Cylinder Peak Pressure and its Position of High-Power Turbo-Charged Compression-Ignition Engines." In ASME 2016 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/icef2016-9425.

Full text
Abstract:
Cylinder peak pressure (p_max) over the operating cycle of a high-power turbocharged compression-ignition engine indicates its in-cylinder combustion behavior and also the level of mechanical load acting on its power assembly components. It is important to understand how p_max and the cylinder pressure (p) vary due to possible changes in engine design and operating input condition parameters. The input parameters considered in this paper include piston crank-angle position (θ), compression ratio (CR), amount of cycle burning heat (Q), injection/combustion duration (Δθ), and fuel injection/combustion-start timing (θ_s). The effects of the input parameters on p_max and on θ_pmax, the crank-angle position of p_max, in engines of this type are analyzed, predicted, and characterized, and the approaches to achieving this are presented. The results indicate that the crank-angle position of the combustion duration (Δθ) has a significant effect on θ_pmax for a given engine power density. As the position of Δθ varies, θ_pmax varies accordingly and can be determined. It is also indicated that, as θ_s is sufficiently retarded from a position before top dead center (TDC) to a point close to TDC, either before or after, in a large-bore high-power turbocharged engine, the trend of the p_max variation is reversed. This establishes the minimum value of p_max over the range of engine combustion-start timing variation. The results and indications are useful in adjusting the design and operating input condition parameters to achieve optimized balances between power-output capacity, fuel efficiency, exhaust emissions, and mechanical/thermal loading of engines of this type.
APA, Harvard, Vancouver, ISO, and other styles
10

Reddy, B. V. K., Matthew Barry, John Li, and Minking K. Chyu. "Comprehensive Numerical Modeling of Thermoelectric Devices Applied to Automotive Exhaust Gas Waste-Heat Recovery." In ASME 2013 Heat Transfer Summer Conference collocated with the ASME 2013 7th International Conference on Energy Sustainability and the ASME 2013 11th International Conference on Fuel Cell Science, Engineering and Technology. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/ht2013-17454.

Full text
Abstract:
This study investigates, using numerical methods, the performance of thermoelectric devices (TEDs) integrated with heat exchangers and applied to automotive exhaust gas waste-heat recovery. Air is used as the exhaust gas and water as the cooling fluid. The effects of the temperature-dependent properties of the materials (TE elements, ceramic plates, connectors, insulation materials, and fluids) and of the interface electrical and thermal contact resistances on the TED's performance are included in the analysis. Additionally, the fluid heat exchangers and the insulation materials are modeled using a porous media approach. The response of the TED's hydro-thermoelectric characteristics to the hot and cold fluid inlet temperatures (Thi, Tci) and flow rates, the number of modules N, the permeability of the heat exchangers, and the TE materials type is studied. An increase in Thi or a decrease in Tci results in an enhancement of the TED's performance. The addition of modules shows a significant effect on the heat input Qh and power output P0 predictions; however, a minimal impact on efficiency η is displayed with N. For instance, at Thi = 873.15 K and Tci = 353.15 K with the clathrate n-Ba8Ga16Ge30 and p-PbTe material combination, compared to the single-module case, a TED with four modules showed a 3.77- and 3.7-fold increase in P0 and Qh, respectively. In the studied range of 1–4 modules, the cold fluid flow rate and the permeability of the heat exchangers exhibited a negligible effect on the TED's P0 and η, whereas the hot fluid flow rate showed an appreciable change in η values. Further, when Thi is less than 500 K, the TED with bismuth tellurides showed a higher performance compared to the clathrate and lead-telluride material combinations.
APA, Harvard, Vancouver, ISO, and other styles