A selection of scholarly literature on the topic "Conditional p-Value"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of recent articles, books, dissertations, conference papers, and other scholarly sources on the topic "Conditional p-Value".

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Conditional p-Value":

1

Booth, James G., Marinela Capanu, and Ludwig Heigenhauser. "Exact Conditional P Value Calculation for the Quasi-Symmetry Model." Journal of Computational and Graphical Statistics 14, no. 3 (September 1, 2005): 716–25. http://dx.doi.org/10.1198/106186005x59496.

2

VanRaden, Mark, William C. Blackwelder, and Maria Deloria. "Relationship of P-value to conditional and predictive power in interim analysis." Controlled Clinical Trials 12, no. 5 (October 1991): 642. http://dx.doi.org/10.1016/0197-2456(91)90136-a.

3

Madden, L. V., D. A. Shah, and P. D. Esker. "Does the P Value Have a Future in Plant Pathology?" Phytopathology® 105, no. 11 (November 2015): 1400–1407. http://dx.doi.org/10.1094/phyto-07-15-0165-le.

Abstract:
The P value (significance level) is possibly the most widely used, and also misused, quantity in data analysis. P has been heavily criticized on philosophical and theoretical grounds, especially from a Bayesian perspective. In contrast, a properly interpreted P has been strongly defended as a measure of evidence against the null hypothesis, H0. We discuss the meaning of P and null-hypothesis statistical testing, and present some key arguments concerning their use. P is the probability of observing data as extreme as, or more extreme than, the data actually observed, conditional on H0 being true. However, P is often mistakenly equated with the posterior probability that H0 is true conditional on the data, which can lead to exaggerated claims about the effect of a treatment, experimental factor or interaction. Fortunately, a lower bound for the posterior probability of H0 can be approximated using P and the prior probability that H0 is true. When one is completely uncertain about the truth of H0 before an experiment (i.e., when the prior probability of H0 is 0.5), the posterior probability of H0 is much higher than P, which means that one needs P values lower than typically accepted for statistical significance (e.g., P = 0.05) for strong evidence against H0. When properly interpreted, we support the continued use of P as one component of a data analysis that emphasizes data visualization and estimation of effect sizes (treatment effects).
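The lower bound mentioned in this abstract can be made concrete with a short calculation. The sketch below uses the widely cited -e*p*ln(p) bound on the Bayes factor (Sellke, Bayarri, and Berger, 2001) together with a prior probability of 0.5 for H0; this is one common calibration and not necessarily the exact approximation used by the authors.

```python
import math

def posterior_h0_lower_bound(p, prior_h0=0.5):
    """Lower bound on P(H0 | data) from a p-value, using the -e*p*ln(p)
    bound on the Bayes factor (valid for p < 1/e). This is one common
    calibration; the paper may use a different approximation."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound applies for 0 < p < 1/e")
    bf_h0 = -math.e * p * math.log(p)       # lower bound on BF(H0 : H1)
    prior_odds = prior_h0 / (1 - prior_h0)  # equals 1 when the prior is 0.5
    post_odds = prior_odds * bf_h0
    return post_odds / (1 + post_odds)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:<6} ->  P(H0 | data) >= {posterior_h0_lower_bound(p):.3f}")
# p = 0.05 gives roughly 0.29: the posterior probability of H0 is much higher
# than P, which is the point made in the abstract.
```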
4

Jitmaneeroj, Boonlert. "The impact of dividend policy on price-earnings ratio." Review of Accounting and Finance 16, no. 1 (February 13, 2017): 125–40. http://dx.doi.org/10.1108/raf-06-2015-0092.

Abstract:
Purpose: This paper aims to examine the conditional and nonlinear relationship between the price-earnings (P/E) ratio and the payout ratio. A common finding of previous studies using the linear regression model is that the P/E ratio is positively related to the dividend payout ratio. However, none of them investigates the condition under which the positive relationship holds.
Design/methodology/approach: This paper uses the fixed effects model to investigate the conditional and nonlinear relationship between P/E ratio and payout ratio. With the inclusion of fundamental factors and investor sentiment, this model allows the nonlinear relationship to be conditioned on the return on equity and the required rate of return.
Findings: Based on the annual data of industries in the USA over the period 1998-2014, this paper produces new evidence indicating that when the return on equity is greater (less) than the required rate of return, the P/E ratio and dividend payout ratio exhibit a negative (positive) relationship and positive (negative) convexity.
Practical implications: Due to the curvature of the relationship between P/E ratio and payout ratio, corporate managers and stock investors should pay more attention to reductions in the payout ratio than to increases, and to companies with low payout ratios than to companies with high payout ratios.
Originality/value: No previous study has tackled the issue of a conditional and nonlinear relationship between P/E ratio and payout ratio. This paper attempts to fill the gap by allowing for a nonlinear relationship conditional on the relative values of the return on equity and the required rate of return.
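The sign pattern reported in the Findings can be illustrated with the constant-growth dividend discount model, in which P/E = payout / (r - g) and g = ROE * (1 - payout), so the sign of the payout effect flips around ROE = r. This is only a textbook illustration of that logic, not the paper's fixed-effects panel model, and the numbers below are arbitrary.

```python
import numpy as np

def pe_ratio(payout, roe, r):
    """Forward P/E from the constant-growth dividend discount model:
    P/E = payout / (r - g) with g = ROE * (1 - payout). Requires r > g."""
    g = roe * (1.0 - payout)
    if r <= g:
        raise ValueError("requires r > g for a finite price")
    return payout / (r - g)

print("payout   P/E when ROE>r (0.15 vs 0.10)   P/E when ROE<r (0.05 vs 0.10)")
for b in np.linspace(0.4, 0.8, 5):
    print(f"{b:5.2f}   {pe_ratio(b, 0.15, 0.10):12.2f}                 {pe_ratio(b, 0.05, 0.10):12.2f}")
# When ROE > r the P/E falls as the payout rises; when ROE < r it rises,
# matching the sign pattern described in the abstract.
```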
5

DI NOLA, ANTONIO, and ROMANO SCOZZAFAVA. "PARTIAL ALGEBRAIC CONDITIONAL SPACES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, no. 06 (December 2004): 781–89. http://dx.doi.org/10.1142/s021848850400320x.

Abstract:
Conditioning plays a central role, both from a theoretical and practical point of view, in domains such as logic and probability, or rule-based expert systems. In classical approaches to probability, there is the notion of "conditional probability" P(E|H), but usually there is no meaning given to E|H itself. In 1935 de Finetti [5] was the first to mention "conditional events" outside the function P. We shall refer to a concept of conditional event extensively discussed in [4], where the idea of de Finetti of looking at E|H, with H≠∅ (the impossible event), as a three-valued logical entity (true when both E and H are true, false when H is true and E is false, "undetermined" when H is false) is generalized (or better, in a sense, is given up) by letting the third "value" t(E, H) suitably depend on the given ordered pair (E, H) and not being just an undetermined common value for all pairs. Here an axiomatic definition is given of Partial Algebraic Conditional Spaces (PACS), that is, a set of conditional events endowed with two partial operations (denoted by ⊕ and ⊙): we then show that the structure discussed through a betting scheme in [4] (i.e., a class of particular random variables with suitable partial sum and product) is a "natural" model of a PACS. Moreover, it turns out that the map t(E, H) can be looked on – with this choice of the two operations ⊕ and ⊙ – as a conditional probability (in its most general sense related to the concept of coherence) satisfying the classic de Finetti–Popper axioms.
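The three-valued reading of E|H described above is easy to state in code. The sketch below encodes only de Finetti's basic truth table (true / false / undetermined); the pair-dependent value t(E, H) that the paper actually studies is not implemented here.

```python
from typing import Optional

def conditional_event(e: bool, h: bool) -> Optional[bool]:
    """de Finetti's three-valued reading of E|H: True when both E and H hold,
    False when H holds but E does not, and None ('undetermined') when H fails.
    The paper generalizes the undetermined value to a pair-dependent t(E, H)."""
    if not h:
        return None
    return e

print(conditional_event(True, True), conditional_event(False, True), conditional_event(True, False))
# True False None
```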
6

Andrews, Donald W. K., Wooyoung Kim, and Xiaoxia Shi. "Commands for Testing Conditional Moment Inequalities and Equalities." Stata Journal: Promoting communications on statistics and Stata 17, no. 1 (March 2017): 56–72. http://dx.doi.org/10.1177/1536867x1701700104.

Abstract:
In this article, we present two commands (cmi_test and cmi_interval) to implement the testing and inference methods for conditional moment inequality or equality models proposed in Andrews and Shi (2013, Econometrica 81: 609–666). The cmi_test command tests the validity of a finite number of conditional moment equalities or inequalities. This test returns the value of the test statistic, the critical values at significance levels 1%, 5%, and 10%, and the p-value. The cmi_interval command returns the confidence interval for a one-dimensional parameter defined by intersection bounds. We obtain this confidence interval by inverting cmi_test. All procedures implemented are uniformly asymptotically valid under appropriate conditions (specified in Andrews and Shi [2013]).
7

Burucuoglu, Murat, and Evrim Erdogan. "An Empirical Examination of the Relation between Consumption Values, Mobil Trust and Mobile Banking Adoption." International Business Research 9, no. 12 (November 23, 2016): 131. http://dx.doi.org/10.5539/ibr.v9n12p131.

Abstract:
The purpose of this study is to examine the relations among consumers' consumption values relevant to mobile banking services, mobile banking adoption, and mobile trust. For this purpose, we propose a structural model that demonstrates the relations between consumption values, mobile banking adoption, and consumers' mobile trust. The data were collected through a survey of individuals who use mobile banking services in Turkey; 175 participants were reached in total. The data were analyzed by partial least squares path analysis (PLS-SEM), which is known as second-generation structural equation modeling. The research concludes that conditional value, emotional value and epistemic value - from among the consumption values - have a positive and statistically significant effect on mobile banking adoption, and that social value has a negative and statistically significant effect. A positive and statistically significant relation is observed between trust in mobile banking and conditional value, emotional value and functional value, and there is a positive and statistically significant relation between trust in mobile banking and mobile banking adoption.
8

Gourieroux, Christian, and Joann Jasiak. "Local Likelihood Density Estimation and Value-at-Risk." Journal of Probability and Statistics 2010 (2010): 1–26. http://dx.doi.org/10.1155/2010/754851.

Abstract:
This paper presents a new nonparametric method for computing the conditional Value-at-Risk, based on a local approximation of the conditional density function in a neighborhood of a predetermined extreme value for univariate and multivariate series of portfolio returns. For illustration, the method is applied to intraday VaR estimation on portfolios of two stocks traded on the Toronto Stock Exchange. The performance of the new VaR computation method is compared to the historical simulation, variance-covariance, and J. P. Morgan methods.
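Historical simulation, one of the benchmark methods named in this abstract, reduces to taking an empirical quantile of past returns. The toy sketch below shows that baseline only; the paper's local-likelihood estimator of the conditional density is not reproduced, and the simulated returns are arbitrary.

```python
import numpy as np

def historical_var(returns, alpha=0.01):
    """Historical-simulation Value-at-Risk: the empirical alpha-quantile of the
    return distribution, reported as a positive loss. This is one of the
    benchmark methods the paper compares its local-likelihood estimator against."""
    return -np.quantile(np.asarray(returns, dtype=float), alpha)

rng = np.random.default_rng(42)
rets = rng.standard_t(df=4, size=5000) * 0.01   # heavy-tailed toy returns
print(round(historical_var(rets, alpha=0.01), 4))
```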
9

Zhan, Likan, and Peng Zhou. "The Online Processing of Hypothetical Events." Experimental Psychology 70, no. 2 (March 2023): 108–17. http://dx.doi.org/10.1027/1618-3169/a000579.

Abstract:
Abstract. A conditional statement If P then Q is formed by combining the two propositions P and Q together with the conditional connective If ··· then ···. When embedded under the conditional connective, the two propositions P and Q describe hypothetical events that are not actualized. It remains unclear when such hypothetical thinking is activated in the real-time comprehension of conditional statements. To tackle this problem, we conducted an eye-tracking experiment using the visual world paradigm. Participants’ eye movements on the concurrent image were recorded when they were listening to the auditorily presented conditional statements. Depending on when and what critical information is added into the auditory input, there are four possible temporal slots to observe in the online processing of the conditional statement: the sentential connective If, the antecedent P, the consequent Q, and the processing of the sentence following the conditional. We mainly focused on the first three slots. First, the occurrence of the conditional connective should trigger participants to search in the visual world for the event that could not assign a truth-value to the embedded proposition. Second, if the embedded proposition P can be determined as true by an event, the hypothetical property implied by the connective would prevent the participants from excluding the consideration of other events. The consideration of other events would yield more fixations on the events where the proposition is false.
10

Lahiani, Amine, and Khaled Guesmi. "Commodity Price Correlation And Time Varying Hedge Ratios." Journal of Applied Business Research (JABR) 30, no. 4 (June 30, 2014): 1053. http://dx.doi.org/10.19030/jabr.v30i4.8653.

Abstract:
This paper examines the price volatility and hedging behavior of commodity futures indices and stock market indices. We investigate the weekly hedging strategies generated by return-based and range-based asymmetric dynamic conditional correlation (DCC) processes. The hedging performances of short and long hedgers are estimated with a semi-variance, lower partial moment and conditional value-at-risk. The empirical results show that the range-based DCC model outperforms the return-based DCC model in most cases.
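The quantity underlying such comparisons is the time-varying minimum-variance hedge ratio, h_t = Cov_t(spot, futures) / Var_t(futures), which is what DCC-based hedging studies typically compute. The sketch below approximates it with plain rolling sample moments on simulated data as a stand-in; the paper obtains the conditional moments from return-based and range-based DCC models, which are not implemented here.

```python
import numpy as np

def rolling_hedge_ratio(spot_ret, fut_ret, window=52):
    """Minimum-variance hedge ratio h_t = Cov_t(spot, futures) / Var_t(futures),
    approximated with rolling sample moments; DCC-GARCH conditional moments
    would replace these in the paper's setting."""
    spot_ret = np.asarray(spot_ret, dtype=float)
    fut_ret = np.asarray(fut_ret, dtype=float)
    ratios = np.full(spot_ret.shape, np.nan)
    for t in range(window, len(spot_ret)):
        s = spot_ret[t - window:t]
        f = fut_ret[t - window:t]
        ratios[t] = np.cov(s, f)[0, 1] / np.var(f, ddof=1)
    return ratios

rng = np.random.default_rng(0)
fut = rng.normal(0, 0.02, 500)
spot = 0.8 * fut + rng.normal(0, 0.01, 500)   # toy data with a "true" ratio of 0.8
h = rolling_hedge_ratio(spot, fut)
print(round(float(np.nanmean(h)), 3))         # close to 0.8 on this toy sample
```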

Dissertations on the topic "Conditional p-Value":

1

Pluntz, Matthieu. "Sélection de variables en grande dimension par le Lasso et tests statistiques - application à la pharmacovigilance." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR002.

Abstract:
Variable selection in high-dimensional regressions is a classic problem in health data analysis. It aims to identify a limited number of factors associated with a given health event among a large number of candidate variables such as genetic factors or environmental or drug exposures. The Lasso regression (Tibshirani, 1996) provides a series of sparse models where variables appear one after another depending on the regularization parameter's value. It requires a procedure for choosing this parameter and thus the associated model. In this thesis, we propose procedures for selecting one of the models of the Lasso path, which belong to or are inspired by the statistical testing paradigm. Thus, we aim to control the risk of selecting at least one false positive (Family-Wise Error Rate, FWER), unlike most existing post-processing methods of the Lasso, which accept false positives more easily. Our first proposal is a generalization of the Akaike Information Criterion (AIC) which we call the Extended AIC (EAIC). We penalize the log-likelihood of the model under consideration by its number of parameters weighted by a function of the total number of candidate variables and the targeted level of FWER, but not the number of observations. We obtain this function by observing the relationship between comparing the information criteria of nested sub-models of a high-dimensional regression and performing multiple likelihood ratio tests, about which we prove an asymptotic property. Our second proposal is a test of the significance of a variable appearing on the Lasso path. Its null hypothesis depends on a set A of already selected variables and states that it contains all the active variables. As the test statistic, we aim to use the regularization parameter value from which a first variable outside A is selected by the Lasso. This choice faces the fact that the null hypothesis is not specific enough to define the distribution of this statistic and thus its p-value. We solve this by replacing the statistic with its conditional p-value, which we define conditional on the non-penalized estimated coefficients of the model restricted to A. We estimate the conditional p-value with an algorithm that we call simulation-calibration, where we simulate outcome vectors and then calibrate them on the observed outcome's estimated coefficients. We adapt the calibration heuristically to the case of generalized linear models (binary and Poisson), in which it turns into an iterative and stochastic procedure. We prove that using our test controls the risk of selecting a false positive in linear models, both when the null hypothesis is verified and, under a correlation condition, when the set A does not contain all active variables. We evaluate the performance of both procedures through extensive simulation studies, which cover both the potential selection of a variable under the null hypothesis (or its equivalent for the EAIC) and the overall model selection procedure. We observe that our proposals compare well to their closest existing counterparts, the BIC and its extended versions for the EAIC, and Lockhart et al.'s (2014) covariance test for the simulation-calibration test. We also illustrate both procedures in the detection of exposures associated with drug-induced liver injuries (DILI) in the French national pharmacovigilance database (BNPV) by measuring their performance using the DILIrank reference set of known associations.
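The test statistic described in this abstract, the regularization parameter at which a first variable outside the already-selected set A enters the Lasso path, can be read off a standard Lasso path. The sketch below shows only one simple reading of that statistic on simulated data, using scikit-learn's lasso_path; the conditional p-value obtained by simulation-calibration, which is the thesis's actual contribution, is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import lasso_path

def entry_lambda_outside(X, y, A):
    """Largest regularization value at which any variable outside the set A
    enters the Lasso path (a simple stand-in for the statistic described in
    the thesis; the simulation-calibration of its conditional null
    distribution is not reproduced in this toy sketch)."""
    alphas, coefs, _ = lasso_path(X, y)          # alphas are in decreasing order
    outside = [j for j in range(X.shape[1]) if j not in set(A)]
    for k, alpha in enumerate(alphas):
        if np.any(coefs[outside, k] != 0):
            return alpha
    return 0.0

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 2.0 + rng.normal(size=200)         # only variable 0 is truly active
print(entry_lambda_outside(X, y, A=[0]))         # small value: little signal outside A
```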
2

Fonseca, Pedro Miguel Teles da. "Digit analysis using Benford's Law : a bayesian approach." Master's thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/13105.

Abstract:
Master's degree in Applied Econometrics and Forecasting
According to Benford's law, many of the collections of numbers which are generated without human intervention exhibit a logarithmically decaying pattern in leading digit frequencies. Through digit analysis, this empirical regularity can help identify erroneous or fraudulent data. Due to the power that classical significance tests of fixed size attain in large samples, they produce small p-values and, if the sample is big enough, are able to identify any deviation from Benford's law, no matter how tiny, as statistically significant. This may result in the rejection of Benford's law in samples where the deviations from it are without practical importance, and consequently samples which are legitimate are likely to be classified as erroneous or fraudulent. This dissertation proposes a Bayesian model selection approach to digit analysis. An empirical application with macroeconomic statistics from Eurozone countries demonstrates the applicability of the suggested methodology and explores the conflict between the p-value and Bayesian measures of evidence (Bayes factors and posterior probabilities) in the support they provide to the presence of Benford's law in a given sample. It is concluded that classical significance tests often reject the presence of Benford's law in samples which are deemed to be in conformance with it by Bayesian measures, and that even lower bounds on such measures over wide classes of prior distributions often provide more evidence in favour of Benford's law than the p-value and classical significance tests seem to suggest.
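Benford's law gives the first-digit probabilities P(d) = log10(1 + 1/d), and the large-sample behaviour of a classical goodness-of-fit test described above is easy to demonstrate. The sketch below applies a chi-square test to a digit distribution that deviates from Benford's law by a practically negligible amount; the digit proportions and sample sizes are made up for illustration and do not come from the dissertation.

```python
import numpy as np
from scipy.stats import chisquare

benford = np.log10(1 + 1 / np.arange(1, 10))     # P(first digit = d), d = 1..9

def first_digit_pvalue(observed_counts):
    """Chi-square goodness-of-fit p-value against Benford's law."""
    observed = np.asarray(observed_counts, dtype=float)
    expected = benford * observed.sum()
    return chisquare(observed, f_exp=expected).pvalue

# A digit distribution that deviates from Benford by a practically negligible amount:
close = benford + np.array([0.004, -0.001, -0.001, -0.001, 0.0, 0.0, 0.0, -0.001, 0.0])
for n in (1_000, 100_000, 10_000_000):
    print(n, round(first_digit_pvalue(np.round(close * n)), 4))
# The p-value shrinks toward 0 as n grows, "rejecting" Benford's law even though
# the deviation is trivial: the large-sample issue the dissertation addresses.
```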
3

GIOÈ, MAURO. "Use and misuse of P-values: a conditional approach to post-model-selection inference." Doctoral thesis, Università degli studi di Pavia, 2021. http://hdl.handle.net/11571/1422614.

Abstract:
Adaptive generation of hypotheses is among the main culprits of the lack of replicability in science. Under conditions of uncertainty, the statements, or the process that generates them, can only be trusted if the reported error rates are reflected in the replication attempts. The discrepancy between the two is due to many factors, but interactive data analysis plays a major role in the inflation of type I error. In this regard, inference after model selection is of particular interest because its misuse can be analyzed through a Monte Carlo simulation. As the findings of this thesis show, inflation of type I error can be quite severe even in low dimensional scenarios, with up to 40% of false positives in the selected set of variables. Depending on the model selection strategy and the structure of the true data-generating mechanism, this percentage varies greatly. The results of the simulation show different performances between the Least Absolute Shrinkage and Selection Operator (LASSO) and the Forward Selection (FS). In particular, the LASSO yields a type I error lower than the FS when the structure of the true data-generating mechanism is additive and a higher one when the structure is multiplicative. The results also provide additional empirical evidence that given an extensive class of problems, most methods will provide on average comparable solutions. As shown in this thesis, the conditional probability approach to selective inference represents a viable solution to control type I error while avoiding any data loss due to data splitting. In the current research environment, incentives and funding policies need to be reshaped in order to bring about effective changes on the overall reliability of the published papers, but the tools to provide rigorous results, while meeting the needs of the researchers, are available for anyone conscientious enough.
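The type I error inflation quantified in this thesis comes from treating a data-driven selection as if it had been fixed in advance. The toy Monte Carlo below reproduces that mechanism in its simplest form, selecting the predictor most correlated with a pure-noise response and reporting its naive p-value; it is not the thesis's simulation design, and the sample sizes and dimensions are arbitrary.

```python
import numpy as np
from scipy import stats

def naive_selected_rejection_rate(n=100, p=20, reps=2000, seed=0):
    """Under a global null (no real signal), pick the predictor most correlated
    with y and report its usual p-value as if it had been chosen in advance.
    A toy version of the selection-then-inference problem studied in the thesis."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                  # y is independent of every column of X
        r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(p)])
        j_best = np.argmax(np.abs(r))
        pval = stats.pearsonr(X[:, j_best], y)[1]
        rejections += pval < 0.05
    return rejections / reps

print(naive_selected_rejection_rate())   # far above the nominal 0.05 level
```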
4

Werning, Jan P. [Verfasser], Stefan [Gutachter] Spinler, and Carl Marcus [Gutachter] Wallenburg. "The transition from linear towards circular economy business models : theoretical and empirical study of boundary conditions and other effects on the value chain / Jan P. Werning ; Gutachter: Stefan Spinler, Carl Marcus Wallenburg." Vallendar : WHU - Otto Beisheim School of Management, 2021. http://d-nb.info/1231792108/34.

5

Daly, Fiona Frances Margaret. "The effect of diet on the nutrition and production of merino ewes in the arid shrublands of Western Australia." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/570.

Abstract:
For the Arid Shrublands of Western Australia (WA) knowledge is limited on what sheep eat and how nutritious their diets are. A study was undertaken on two stations near Yalgoo (28º18’S 116º38’E) in WA, from November 2005 to December 2007. Station 1 (28º39’S 116º18’E) used a flexible rotational grazing management system (RGS), moving 3000-4000 Merino sheep every 3–6 weeks through a choice of 20 paddocks. Station 2 (28º18’S 116º42’E) used a flexible continuous grazing management system where small mobs (500 sheep) stayed in paddocks all year, until shearing. Two paddocks on Station 2 were chosen to represent paddocks with high (CGS-G) and low (CGS-P) feed value. A total of 300 Merino hogget (18 months old) ewes were randomly selected from the stations. One hundred and fifty sheep from each station were selected and separated into three mobs of 50 sheep by stratifying live weights. The selected sheep were allocated to either of the two paddocks on Station 2 or the single rotating mob on Station 1. Therefore there were a total of 100 sheep, 50 from each station, on each of the two paddocks on Station 2 and the one rotating mob on Station 1. Throughout the study period sheep live weights, body condition scores (BCS) and wool production were measured and related to plant photosynthetic activity (derived from Normalised Difference Vegetation Index - NDVI), and dietary energy, protein and digestibility (determined from faecal NIRS calibrations). A DNA reference data bank of some common native plant species was established and then used as a library to identify plant species in sheep faeces and thus provide information on variations in diet composition over the study period. Plant nutritional content was also measured and compared to climatic changes and sheep nutrition. Over the study period Merino ewe live weights, wool production, faecal samples and native plant leaf material were collected and analysed from each of the three management treatments (RGS, CGS-G, CGS-P). Wool production measurements included wool length, strength and fibre diameter, including position of breaks, minimum and maximum diameter along the staple of midside samples. Oven dried plant and faecal samples were ground and subsequently analysed for proximate composition. Plant samples were further analysed for mineral contents and 24 h in vitro gas production (GP) using the rumen buffer gas fermentation technique. Organic matter digestibility (OMD) and metabolisable energy (ME) content of the plants were determined using 24 h net gas production. Faecal near infrared reflectance spectroscopy (fNIRS) calibrations, developed by Curtin University of Technology and ChemCentre WA, were used to predict the nutritional attributes of sheep diets. Sheep production was found to be affected by rainfall, seasons, management and differing blood lines. In 2006, live weights, BCS and wool fibre diameter increased in response to high summer rainfall. Lower rainfall in 2007 resulted in variable, but generally less animal production with lower live weights, BCS and wool fibre diameter. Management decisions to avoid mating in 2006 on CGS, and agistment of sheep on RGS at the end of 2006, resulted in better sheep production results. Sheep originally sourced from Station 2 generally had higher live weights than sheep sourced from Station 1, suggesting a difference in bloodlines. Faecal DNA provided useful information regarding diet selection and diversity of sheep grazing on the Arid Shrublands of WA.
Of the species that were DNA profiled, the sheep ate Acacia saligna, Aristida contorta, Atriplex spp., Enchylaena tomentosa, Frankenia sp., Ptilotus obovatus, Rhagodia eremaea and Scaevola spinescens in 2006, whilst in 2007 the sheep consumed A. saligna, A. contorta, Atriplex spp., Eremophila forrestii, Enneapogon caerulescens, Frankenia spp., Maireana spp., Ptilotus obovatus, Rhagodia eremaea, Solanum lasiophyllum and Stipa elegantissima. However, there were 28 amplified bands in 2006 and 51 in 2007 that did not conclusively match any of the reference plant species. This indicates that the sheep were consuming diets that contained more species than were analysed in this study. Faecal DNA results indicated a decrease in the diversity of the diets selected by the sheep during summer, which coincided with a decrease in animal production. Native plants were found to be low in OMD and ME, but high in crude protein (CP), and variable in mineral content. Sheep were able to select diets adequate in OMD, ME and CP for maintenance requirements, and low in tannins and phenolics, although continuous drought conditions resulted in reduced production, indicating that the sheep were not getting adequate nutrition to meet their growth requirements. The use of fNIRS provided more useful information about the quality of the diet of the sheep than nutritionally profiling individual plants. NDVI was found to be related to dietary OMD and wool fibre diameter changes along the staple. Overall, the effects of management seemed to be secondary to the effects of climate on sheep production and nutrition. The statistical accuracy of results was low; however, the use of advanced technologies to explore relationships between climate, plant nutritional profiles and animal production and nutrition has provided an expansion of knowledge of sheep nutrition in the region. This extra knowledge may help land owners in the region to make more sustainable management decisions concerning livestock management and grazing pressures on native pastures.

Books on the topic "Conditional p-Value":

1

Sitnova, Irina, Vladimir Yadov, and Svetlana Kirdina-Chendler. Institutional changes in modern Russia: activist-activity approach. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1871442.

Abstract:
Dissatisfaction with societal structuralist approaches and the culturological determinism characteristic of them resulted in another crisis of modern sociology. Traditional sociology ignored the person capable of making individual decisions and informed choices, while in traditional economic theory this person hung in an airless space, in the absence of supportive social structures. Sociologists began to show interest in what is happening in neo-institutional economic theory and, moreover, to borrow its conceptual apparatus intensively. Attempts to resolve the crisis are demonstrated today by theorists of the activist-activity direction M. Archer, A. Giddens and P. Sztompka, and in the field of economics by the neo-institutionalists J. Commons, R. Coase, D. North, T. Veblen, et al. The monograph represents the activist paradigm shared by the authors, the basic principle of which goes back to K. Marx's formula that people, born under the same conditions, change those conditions by their practical activities, changing themselves in the process. The task of the research is to find an explanation of institutional changes within a particular value-conceptual model and subsequently to apply it to the analysis of Russian reality. For students, postgraduates and teachers of sociological universities and faculties.
2

Skiba, Grzegorz. Fizjologiczne, żywieniowe i genetyczne uwarunkowania właściwości kości rosnących świń. The Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences, 2020. http://dx.doi.org/10.22358/mono_gs_2020.

Abstract:
Bones are multifunctional passive organs of movement that support soft tissue and directly attached muscles. They also protect internal organs and are a reserve of calcium, phosphorus and magnesium. Each bone is covered with periosteum, and the adjacent bone surfaces are covered by articular cartilage. Histologically, the bone is an organ composed of many different tissues. The main component is bone tissue (cortical and spongy) composed of a set of bone cells and intercellular substance (mineral and organic); it also contains fat, hematopoietic (bone marrow) and cartilaginous tissue. Bones are a tissue that even in adult life retains the ability to change shape and structure depending on changes in their mechanical and hormonal environment, as well as self-renewal and repair capabilities. This process is called bone turnover. The basic processes of bone turnover are: • bone modeling (incessant changes in bone shape during individual growth) following resorption and tissue formation at various locations (e.g. bone marrow formation) to increase mass and skeletal morphology. This process occurs in the bones of growing individuals and stops after reaching puberty • bone remodeling (processes involved in maintaining bone tissue by resorbing and replacing old bone tissue with new tissue in the same place, e.g. repairing micro fractures). It is a process involving the removal and internal remodeling of existing bone and is responsible for maintaining tissue mass and architecture of mature bones. Bone turnover is regulated by two types of transformation: • osteoclastogenesis, i.e. formation of cells responsible for bone resorption • osteoblastogenesis, i.e. formation of cells responsible for bone formation (bone matrix synthesis and mineralization) Bone maturity can be defined as the completion of basic structural development and mineralization leading to maximum mass and optimal mechanical strength. The highest rate of increase in pig bone mass is observed in the first twelve weeks after birth. This period of growth is considered crucial for optimizing the growth of the skeleton of pigs, because the degree of bone mineralization in later life stages (adulthood) depends largely on the amount of bone minerals accumulated in the early stages of their growth. The development of techniques allows the condition of the skeletal system (or individual bones) to be determined in living animals by methods used in human medicine, or after their slaughter. For in vivo determination of bone properties, dual-energy X-ray absorptiometry or computed tomography scanning techniques are used. Both methods allow the quantification of mineral content and bone mineral density. The most important property from a practical point of view is the bone’s bending strength, which is directly determined by the maximum bending force.
The most important factors affecting bone strength are: • age (growth period), • gender and the associated hormonal balance, • genotype and modification of genes responsible for bone growth • chemical composition of the body (protein and fat content, and the proportion between these components), • physical activity and related bone load, • nutritional factors: – protein intake influencing synthesis of the organic matrix of bone, – content of minerals in the feed (Ca, P, Zn, Mg, Mn, Na, Cl, K, Cu and the Ca/P ratio) influencing synthesis of the inorganic matrix of bone, – mineral/protein ratio in the diet (Ca/protein, P/protein, Zn/protein) – feed energy concentration, – energy source (content of saturated fatty acids - SFA, content of polyunsaturated fatty acids - PUFA, in particular ALA, EPA, DPA, DHA), – feed additives, in particular: enzymes (e.g. phytase releasing minerals bound in phytin complexes), probiotics and prebiotics (e.g. inulin improving the function of the digestive tract by increasing absorption of nutrients), – vitamin content that regulates metabolism and biochemical changes occurring in bone tissue (e.g. vitamin D3, B6, C and K). This study was based on the results of research experiments from available literature, and studies on growing pigs carried out at the Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences. The tests were performed in total on 300 pigs of Duroc, Pietrain, Puławska breeds, line 990 and hybrids (Great White × Duroc, Great White × Landrace), PIC pigs, slaughtered at different body weight during the growth period from 15 to 130 kg. Bones for biomechanical tests were collected after slaughter from each pig. Their length, mass and volume were determined. Based on these measurements, the specific weight (density, g/cm3) was calculated. Then each bone was cut in the middle of the shaft and the outer and inner diameters were measured both horizontally and vertically. Based on these measurements, the following indicators were calculated: • cortical thickness, • cortical surface, • cortical index. Bone strength was tested by a three-point bending test. The obtained data enabled the determination of: • bending force (the magnitude of the maximum force at which disintegration and disruption of bone structure occurs), • strength (the amount of maximum force needed to break/crack the bone), • stiffness (quotient of the force acting on the bone and the amount of displacement occurring under the influence of this force). Investigation of changes in physical and biomechanical features of bones during growth was performed on pigs of the synthetic 990 line growing from 15 to 130 kg body weight. The animals were slaughtered successively at a body weight of 15, 30, 40, 50, 70, 90, 110 and 130 kg. After slaughter, the following bones were separated from the right half-carcass: humerus, 3rd and 4th metacarpal bone, femur, tibia and fibula as well as 3rd and 4th metatarsal bone. The features of bones were determined using methods described in the methodology. Describing bone growth with the Gompertz equation, it was found that the earliest slowdown of the bone growth curve was observed for metacarpal and metatarsal bones. This means that these bones matured the most quickly. The established data also indicate that the rib is the slowest maturing bone. The femur, humerus, tibia and fibula were between the values of these features for the metatarsal, metacarpal and rib bones.
The rate of increase in bone mass and length differed significantly between the examined bones, but in all cases it was lower (coefficient b < 1) than the growth rate of the whole body of the animal. The fastest growth rate was estimated for the rib mass (coefficient b = 0.93). Among the long bones, the humerus (coefficient b = 0.81) was characterized by the fastest rate of weight gain, while the femur had the smallest (coefficient b = 0.71). The lowest rate of bone mass increase was observed in the foot bones, with the metacarpal bones having a slightly higher value of coefficient b than the metatarsal bones (0.67 vs 0.62). The third bone had a lower growth rate than the fourth bone, regardless of whether they were metatarsal or metacarpal. The value of the bending force increased as the animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller for the metatarsal and metacarpal bone, and the lowest for the fibula and rib. The rate of change in the value of this indicator increased at a similar rate as the body weight changes of the animals in the case of the fibula and the fourth metacarpal bone (b value = 0.98), and more slowly in the case of the metatarsal bone, the third metacarpal bone, and the tibia bone (values of the b ratio 0.81–0.85), and the slowest for the femur, humerus and rib (value of b = 0.60–0.66). Bone stiffness increased as animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller for the metatarsal and metacarpal bone, and the lowest for the fibula and rib. The rate of change in the value of this indicator changed at a faster rate than the increase in weight of pigs in the case of metacarpal and metatarsal bones (coefficient b = 1.01–1.22), slightly slower in the case of fibula (coefficient b = 0.92), definitely slower in the case of the tibia (b = 0.73), ribs (b = 0.66), femur (b = 0.59) and humerus (b = 0.50). Bone strength increased as animals grew. Regardless of the growth point tested, bone strength was as follows: femur > tibia > humerus > 4th metacarpal > 3rd metacarpal > 3rd metatarsal > 4th metatarsal > rib > fibula. The rate of increase in strength of all examined bones was greater than the rate of weight gain of pigs (value of the coefficient b = 2.04–3.26). As the animals grew, the bone density increased. However, the growth rate of this indicator for the majority of bones was slower than the rate of weight gain (the value of the coefficient b ranged from 0.37 – humerus to 0.84 – fibula). The exception was the rib, whose density increased at a similar pace to the increasing body weight of the animals (value of the coefficient b = 0.97). The study on the influence of the breed and the feeding intensity on bone characteristics (physical and biomechanical) was performed on pigs of the breeds Duroc, Pietrain, and synthetic 990 during a growth period of 15 to 70 kg body weight. Animals were fed ad libitum or in a dosed system. After slaughter at a body weight of 70 kg, three bones were taken from the right half-carcass: the femur, third metatarsal and third metacarpal, and subjected to the determinations described in the methodology. The weight of bones of animals fed ad libitum was significantly lower than in pigs fed restrictively. All bones of the Duroc breed were significantly heavier and longer than Pietrain and 990 pig bones.
The average values of bending force for the examined bones took the following order: III metatarsal bone (63.5 kg) < III metacarpal bone (77.9 kg) < femur (271.5 kg). The feeding system and breed of pigs had no significant effect on the value of this indicator. The average values of bone strength took the following order: III metatarsal bone (92.6 kg) < III metacarpal (107.2 kg) < femur (353.1 kg). Feeding intensity and breed of animals had no significant effect on the value of this feature of the bones tested. The average bone density took the following order: femur (1.23 g/cm3) < III metatarsal bone (1.26 g/cm3) < III metacarpal bone (1.34 g/cm3). The density of bones of animals fed ad libitum was higher (P<0.01) than in animals fed with a dosing system. The density of examined bones within the breeds took the following order: Pietrain race > line 990 > Duroc race. The differences between the “extreme” breeds were: 7.2% (III metatarsal bone), 8.3% (III metacarpal bone), 8.4% (femur). The average bone stiffness took the following order: III metatarsal bone (35.1 kg/mm) < III metacarpus (41.5 kg/mm) < femur (60.5 kg/mm). This indicator did not differ between the groups of pigs fed at different intensity, except for the metacarpal bone, which was stiffer in pigs fed ad libitum (P<0.05). The femur of animals fed ad libitum showed a tendency (P<0.09) to be stiffer, with a force of 4.5 kg required for its displacement by 1 mm. Breed differences in stiffness were found for the femur (P<0.05) and III metacarpal bone (P<0.05). For the femur, the highest value of this indicator was found in Pietrain pigs (64.5 kg/mm), lower in pigs of the 990 line (61.6 kg/mm) and the lowest in Duroc pigs (55.3 kg/mm). In turn, the 3rd metacarpal bone of Duroc and Pietrain pigs had similar stiffness (39.0 and 40.0 kg/mm respectively) and was smaller than that of line 990 pigs (45.4 kg/mm). The thickness of the cortical bone layer took the following order: III metatarsal bone (2.25 mm) < III metacarpal bone (2.41 mm) < femur (5.12 mm). The feeding system did not affect this indicator. Breed differences (P<0.05) for this trait were found only for the femur bone: Duroc (5.42 mm) > line 990 (5.13 mm) > Pietrain (4.81 mm). The cross-sectional area of the examined bones was arranged in the following order: III metatarsal bone (84 mm2) < III metacarpal bone (90 mm2) < femur (286 mm2). The feeding system had no effect on the value of this bone trait, with the exception of the femur, which in animals fed the dosing system was 4.7% higher (P<0.05) than in pigs fed ad libitum. Breed differences (P<0.01) in the cross-sectional area were found only in the femur and III metatarsal bone. The value of this indicator was the highest in Duroc pigs, lower in 990 animals and the lowest in Pietrain pigs. The cortical index of individual bones was in the following order: III metatarsal bone (31.86) < III metacarpal bone (33.86) < femur (44.75). However, its value did not significantly depend on the intensity of feeding or the breed of pigs.
3

Shengelia, Revaz. Modern Economics. Universal, Georgia, 2021. http://dx.doi.org/10.36962/rsme012021.

Abstract:
Economy and mankind are inextricably interlinked. Just as the economy or the production of material wealth is unimaginable without a man, so human existence and development are impossible without the wealth created in the economy. Shortly, both the goal and the means of achieving and realization of the economy are still the human resources. People have long ago noticed that it was the economy that created livelihoods, and the delays in their production led to the catastrophic events such as hunger, poverty, civil wars, social upheavals, revolutions, moral degeneration, and more. Therefore, the special interest of people in understanding the regulatory framework of the functioning of the economy has existed and exists in all historical epochs [A. Sisvadze. Economic theory. Part One. 2006y. p. 22]. The system of economic disciplines studies economy or economic activities of a society. All of them are based on science, which is currently called economic theory in the post-socialist space (the science of economics, the principles of economics or modern economics), and in most countries of the world - predominantly in the Greek-Latin manner - economics. The title of the present book is also Modern Economics. Economics (economic theory) is the science that studies the efficient use of limited resources to produce and distribute goods and services in order to satisfy as much as possible the unlimited needs and demands of the society. More simply, economics is the science of choice and how society manages its limited resources. Moreover, it should be emphasized that economics (economic theory) studies only the distribution, exchange and consumption of the economic wealth (food, beverages, clothing, housing, machine tools, computers, services, etc.), the production of which is possible and limited. And the wealth that exists indefinitely: no economic relations are formed in the production and distribution of solar energy, air, and the like. This current book is the second complete updated edition of the challenges of the modern global economy in the context of the coronary crisis, taking into account some of the priority directions of the country's development. Its purpose is to help students and interested readers gain a thorough knowledge of economics and show them how this knowledge can be applied pragmatically (professionally) in professional activities or in everyday life. To achieve this goal, this textbook, which consists of two parts and tests, discusses in simple and clear language issues such as: the essence of economics as a science, reasons for origin, purpose, tasks, usefulness and functions; Basic principles, problems and peculiarities of economics in different economic systems; Needs and demand, the essence of economic resources, types and limitations; Interaction, mobility, interchangeability and efficient use of economic resources. 
The essence and types of wealth; The essence, types and models of the economic system; The interaction of households and firms in the market of resources and products; Market mechanism and its elements - demand, supply and price; Demand and supply elasticity; Production costs and the ways to reduce them; Forms of the market - perfect and incomplete competition markets and their peculiarities; Markets for Production Factors and factor incomes; The essence of macroeconomics, causes and importance of origin; The essence and calculation of key macroeconomic indicators (gross national product, gross domestic product, net national product, national income, etc.); Macroeconomic stability and instability, unemployment, inflation and anti-inflationary policies; State regulation of the economy and economic policy; Monetary and fiscal policy; Income and standard of living; Economic Growth; The Corona Pandemic as a Defect and Effect of Globalization; National Economic Problems and New Opportunities for Development in the conditions of the Coronary Crisis; The Socio-economic problems of moral obsolescence in digital technologies; Education and creativity are the main solution way to overcome the economic crisis caused by the coronavirus; Positive and negative effects of tourism in Georgia; Formation of the middle class as a contributing factor to the development of tourism in Georgia; Corporate culture in Georgian travel companies, etc. The axiomatic truth is that economics is the union of people in constant interaction. Given that the behavior of the economy reflects the behavior of the people who make up the economy, after clarifying the essence of the economy, we move on to the analysis of the four principles of individual decision-making. Furtermore, the book describes how people make independent decisions. The key to making an individual decision is that people have to choose from alternative options, that the value of any action is measured by the value of what must be given or what must be given up to get something, that the rational, smart people make decisions based on the comparison of the marginal costs and marginal returns (benefits), and that people behave accordingly to stimuli. Afterwards, the need for human interaction is then analyzed and substantiated. If a person is isolated, he will have to take care of his own food, clothes, shoes, his own house and so on. In the case of such a closed economy and universalization of labor, firstly, its productivity will be low and, secondly, it will be able to consume only what it produces. It is clear that human productivity will be higher and more profitable as a result of labor specialization and the opportunity to trade with others. Indeed, trade allows each person to specialize, to engage in the activities that are most successful, be it agriculture, sewing or construction, and to buy more diverse goods and services from others at a relatively lower price. The key to such human interactions is that trade is mutually beneficial; That markets are usually the good means of coordination between people and that the government can improve the results of market functioning if the market reveals weakness or the results of market functioning are not fair. Moroever, it also shows how the economy works as a whole. 
In particular, it is argued that productivity is a key determinant of living standards, that an increase in the money supply is a major source of inflation, and that one of the main impediments to avoiding inflation is the existence of an alternative between inflation and unemployment in the short term, that the inflation decrease causes the temporary decline in unemployement and vice versa. The Understanding creatively of all above mentioned issues, we think, will help the reader to develop market economy-appropriate thinking and rational economic-commercial-financial behaviors, to be more competitive in the domestic and international labor markets, and thus to ensure both their own prosperity and the functioning of the country's economy. How he/she copes with the tasks, it is up to the individual reader to decide. At the same time, we will receive all the smart useful advices with a sense of gratitude and will take it into account in the further work. We also would like to thank the editor and reviewers of the books. Finally, there are many things changing, so it is very important to realize that the XXI century has come: 1. The century of the new economy; 2. Age of Knowledge; 3. Age of Information and economic activities are changing in term of innovations. 1. Why is the 21st century the century of the new economy? Because for this period the economic resources, especially non-productive, non-recoverable ones (oil, natural gas, coal, etc.) are becoming increasingly limited. According to the World Energy Council, there are currently 43 years of gas and oil reserves left in the world (see “New Commersant 2007 # 2, p. 16). Under such conditions, sustainable growth of real gross domestic product (GDP) and maximum satisfaction of uncertain needs should be achieved not through the use of more land, labor and capital (extensification), but through more efficient use of available resources (intensification) or innovative economy. And economics, as it was said, is the science of finding the ways about the more effective usage of the limited resources. At the same time, with the sustainable growth and development of the economy, the present needs must be met in a way that does not deprive future generations of the opportunity to meet their needs; 2. Why is the 21st century the age of knowledge? Because in a modern economy, it is not land (natural resources), labor and capital that is crucial, but knowledge. Modern production, its factors and products are not time-consuming and capital-intensive, but science-intensive, knowledge-intensive. The good example of this is a Japanese enterprise (firm) where the production process is going on but people are almost invisible, also, the result of such production (Japanese product) is a miniature or a sample of how to get the maximum result at the lowest cost; 3. Why is the 21st century the age of information? Because the efficient functioning of the modern economy, the effective organization of the material and personal factors of production largely depend on the right governance decision. The right governance decision requires prompt and accurate information. Gone are the days when the main means of transport was a sailing ship, the main form of data processing was pencil and paper, and the main means of transmitting information was sending letters through a postman on horseback. 
Modern transport infrastructure (highways, railways, ships, regular domestic and international flights, oil and gas pipelines, etc.) has significantly accelerated the movement of goods, services and labor resources, while modern means of communication (mobile phones, the internet and others) spread information rapidly around the globe, which seems to have "shrunk" the world and made it a single large country. The authors of the book: Ushangi Samadashvili, Doctor of Economic Sciences, Associate Professor of Ivane Javakhishvili Tbilisi State University - Introduction, Chapters 1, 2, 3, 4, 5, 6, 9, 10, 11, 12, 15, 16, 17.1, 18, Tests; Revaz Shengelia, Doctor of Economics, Professor of Georgian Technical University - Chapters 7, 8, 13, 14, 17.2, 17.4; Zhuzhuna Tsiklauri, Doctor of Economics, Professor of Georgian Technical University - Chapters 13.6, 13.7, 17.2, 17.3, 18. We also thank the editor and reviewers of the book.

Частини книг з теми "Conditional p-Value":

1

Stefanovic, Predrag. "Upgrading the Seismic Safety of the Chritzi Bridge, Switzerland." In Case Studies of Rehabilitation, Repair, Retrofitting, and Strengthening of Structures, 9–20. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2010. http://dx.doi.org/10.2749/sed012.009.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In the following text, a method for improving the seismic safety of bridges is proposed. It takes into account the following requirements: structural safety, serviceability, durability, and resistance to earthquakes, under conditions of cost and value optimization.
2

Snaibi, Wadii, and Abdelhamid Mezrhab. "Livestock Breeders’ Adaptation to Climate Variability and Change in Morocco’s Arid Rangelands." In African Handbook of Climate Change Adaptation, 1853–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_18.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Since the mid-1970s, the high plateaus of eastern Morocco have experienced proven trends of climate change (CC), such as a significant decrease in rainfall amounts and an increase in the frequency of droughts. Consequently, CC threatens the sustainability of this pastoral ecosystem and negatively affects the breeding of small ruminants, the main local-level livelihood, which becomes more vulnerable due to its high dependence on climatic conditions. This chapter aims to analyze breeders' adaptation practices by taking into account their social stratification based on the size of the sheep flock in possession. Data were analyzed using descriptive statistics, Kruskal-Wallis and Mann-Whitney tests to examine differences in the adoption frequency of CC adaptation measures according to breeders' classes, and the chi-square test of independence to identify the factors explaining the observed differences. The analysis of local adaptation practices reveals that they are endogenous but above all curative, follow a short-term logic, and have low to medium relevance to the specific objective of adaptation to CC. In addition, there are significant differences in the frequency of adoption of CC adaptation strategies (chi-square value = 8.1112, p = 0.017, df = 2) among categories of breeders, in particular between small and large breeders (U statistic = 58.000, p = 0.008). The significant factors explaining these differences are socioeconomic (age, household size, equipment, training, and membership of a basic professional organization). It is therefore recommended to target small breeders as a priority and to set up support measures (equipment, training, funding, organization of breeders).
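For readers who want to reproduce this kind of analysis, the sketch below applies the tests named in the abstract (a chi-square test of independence and a Mann-Whitney U test) to hypothetical adoption data; the numbers are placeholders, not the authors' survey data.

```python
# Hedged sketch: chi-square independence and Mann-Whitney U tests on
# hypothetical adoption-frequency data (not the chapter's survey data).
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical contingency table: rows = breeder class (small, medium, large),
# columns = adopted / did not adopt a given CC adaptation measure.
table = np.array([[12, 28],
                  [20, 20],
                  [27, 13]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# Hypothetical numbers of measures adopted per respondent, small vs. large breeders.
small = [1, 2, 2, 3, 1, 2, 1]
large = [3, 4, 5, 4, 3, 5, 4]
u, p_u = mannwhitneyu(small, large, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")
```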
3

Bayarri, M. J., and James O. Berger. "Quantifying Surprise in the Data and Model Verification." In Bayesian Statistics 6, 53–82. Oxford University PressOxford, 1999. http://dx.doi.org/10.1093/oso/9780198504856.003.0003.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract P-values are often perceived as measurements of the degree of surprise in the data, relative to a hypothesized model. They are also commonly used in model (or hypothesis) verification, i.e., to provide a basis for rejection of a model or hypothesis. We first make a distinction between these two goals: quantifying surprise can be important in deciding whether or not to search for alternative models, but is questionable as the basis for rejection of a model. For measuring surprise, we propose a simple calibration of the p-value which roughly converts a tail area into a Bayes factor or ‘odds’ measure. Many Bayesians have suggested certain modifications of p-values for use in measuring surprise, including the predictive p-value and the posterior predictive p-value. We propose two alternatives, the conditional predictive p-value and the partial posterior predictive p-value, which we argue to be more acceptable from Bayesian (or conditional) reasoning.
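A concrete way to see how a tail area can be turned into an 'odds' measure is the widely cited lower bound B(p) ≥ -e·p·ln(p), associated with the same authors (Sellke, Bayarri and Berger); the sketch below assumes this is the kind of calibration meant, which may differ in detail from the chapter's proposal.

```python
import math

def p_to_bayes_factor_bound(p: float) -> float:
    """Lower bound on the Bayes factor in favour of H0 implied by a p-value,
    using the -e*p*ln(p) calibration (valid for 0 < p < 1/e)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    b = p_to_bayes_factor_bound(p)
    post = 1 / (1 + 1 / b)  # corresponding lower bound on P(H0 | data) with prior odds 1:1
    print(f"p = {p:<6} min Bayes factor for H0 = {b:.3f}  min P(H0|data) = {post:.3f}")
```

For p = 0.05 this gives a minimum posterior probability of H0 of roughly 0.29, which is the usual illustration of why a "significant" p-value is weaker evidence than it looks.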
4

Woods, Michael, David Wiggins, and Dorothy Edgington. "Theories of Simple Conditionals." In Conditionals, 11–22. Oxford University PressOxford, 1997. http://dx.doi.org/10.1093/oso/9780198751267.003.0002.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract If the truth-value (or possibly the absence of truth-value) of Simple Conditionals is fixed solely by the truth-values of their antecedents and consequents, there seem to be only two possibilities. First, if we take the view that ‘If ... then ...’ sentences always have a truth-value, there is, in fact, no alternative to treating them as material conditionals, if they are truth-functional. This is a consequence of the fact that ‘If P and Q, then Q’ is always true, whatever the truth-values of P and Q are, and that not all ‘If ... then ...’ statements are true. ‘P and Q’ cannot be true if Q is false, but the other combinations of truth-value assignments to P and ‘P and Q’ are not ruled out.
5

Baker, Daniel H. "Bayesian statistics." In Research Methods Using R. Oxford University Press, 2022. http://dx.doi.org/10.1093/hesc/9780192896599.003.0017.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This chapter focuses on Bayesian and frequentist statistics. It starts with the philosophy of frequentist statistics, which is built around the use of the p-value. Bayes' theorem is an equation that governs the combination of conditional probabilities, and Bayesian methods therefore involve different assumptions and different procedures in an effort to achieve a sensible choice in data analysis. Moreover, Bayesian techniques try to estimate the probabilities of both the experimental and the null hypotheses being true. The chapter then looks at Bayes' theorem and its approach to statistical inference, and explains how to calculate Bayes factors in R using the BayesFactor package.
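The chapter's examples use the BayesFactor package in R; as a language-neutral illustration of what a Bayes factor is, here is a hedged Python sketch for the simplest analytic case, a binomial experiment with a point null θ = 0.5 against a uniform prior under H1 (my choice of example, not the book's).

```python
from math import comb

def binomial_bayes_factor(k: int, n: int) -> float:
    """Bayes factor BF10 for a binomial experiment: H1 (theta ~ Uniform(0,1))
    versus H0 (theta = 0.5). The marginal likelihood under H1 with a uniform
    prior integrates to 1/(n+1); under H0 it is C(n,k) * 0.5**n."""
    m1 = 1.0 / (n + 1)
    m0 = comb(n, k) * 0.5 ** n
    return m1 / m0

# Example: 62 successes in 100 trials
print(round(binomial_bayes_factor(62, 100), 3))
```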
6

Wiggins, David, and Dorothy Edgington. "Compound Conditionals and Truth-Values." In Conditionals, 58–68. Oxford University PressOxford, 1997. http://dx.doi.org/10.1093/oso/9780198751267.003.0006.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract If the conclusions of the last section are correct, that it is misguided to import possible worlds into the analysis of Simple Conditionals, what account should be given of them? What possibilities remain? As we saw in Section 4, we may hold that Simple Conditionals lack truth-values, and that the correct account of them is that they are used to make conditional assertions: one who utters ‘If P then Q’ makes no unconditional assertion using this form of words, but asserts Q on condition that P. If we use Frege’s assertion sign ‘⊢’, an assertion of P will be represented by ‘⊢P’, and an appropriate notation for the conditional assertion theory would be to represent ‘If P then Q’ by ‘⊢P Q’, with the ‘P’ written as a subscript to the assertion sign. As we saw in Section 4, the combined effect of Lewis’s result and the acceptance of Adams’s Hypothesis may suggest this as an appropriate treatment of Simple Conditionals. The conditions for asserting Q in this way would be that the probability of Q, given the truth of P, is high.
7

Huo, Xingyue, and Joseph Finkelstein. "Pneumococcal Vaccination Lowers the Risk of Alzheimer’s Disease: A Study Utilizing Data from the IBM® MarketScan® Database." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti231107.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Previous studies demonstrated an association between influenza vaccination and the likelihood of developing Alzheimer’s disease. This study was aimed at assessing whether pneumococcal vaccinations are associated with a lower risk of Alzheimer’s disease based on analysis of data from the IBM® MarketScan® Database. Vaccinated and unvaccinated matched cohorts were generated using propensity-score matching with the greedy nearest-neighbor matching algorithm. The conditional logistic regression method was used to estimate the relationship between pneumococcal vaccination and the onset of Alzheimer’s disease. There were 142,874 subjects who received the pneumococcal vaccine and 14,392 subjects who did not. The conditional logistic regression indicated that the people who received the pneumococcal vaccine had a significantly lower risk of developing Alzheimer’s disease as compared to the people who did not receive any pneumococcal vaccine (OR=0.37; 95%CI: 0.33-0.42; P-value < .0001). Our findings demonstrated that the pneumococcal vaccine was associated with a 63% reduction in the risk of Alzheimer’s disease among US adults aged 65 and older.
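A small worked check of the reported numbers, using only the odds ratio and confidence interval quoted in the abstract plus the assumption of a Wald-type interval (not stated in the abstract):

```python
import math

# Reported result: OR = 0.37, 95% CI 0.33-0.42 for Alzheimer's disease
# among pneumococcal-vaccinated vs. unvaccinated matched subjects.
or_hat, ci_low, ci_high = 0.37, 0.33, 0.42

# The "63% reduction" quoted in the abstract is simply 1 - OR.
print(f"relative reduction in odds: {(1 - or_hat):.0%}")

# Back-calculate the standard error on the log-odds scale from the CI,
# assuming a Wald-type interval.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = math.log(or_hat) / se
print(f"approx. SE(log OR) = {se:.3f}, z = {z:.1f}")  # consistent with P < .0001
```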
8

Woods, Michael, David Wiggins, and Dorothy Edgington. "Ramsey’s Test and Adams’s Hypothesis." In Conditionals, 23–30. Oxford University PressOxford, 1997. http://dx.doi.org/10.1093/oso/9780198751267.003.0003.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Ramsey describes someone deciding whether or not to accept Q, after accepting P, as making minimal revisions in his other beliefs. On what basis is such a hypothetical revision made? In order to answer this question, it will be useful, and more realistic, to take account of the fact that the acceptance or rejection of a belief is not an all-or-nothing matter, and this applies to the acceptance of ‘If P then Q’. In so far as someone has any opinion at all on whether or not P is the case, he will assign to P a certain probability. Since an individual’s beliefs are systematically related, they involve assignments of probabilities not only to P and Q separately, but also to their joint occurrence. Idealizing a good deal, we may think of an individual’s probability-function as assigning values between zero and one to each of a range of exclusive possible states of the world (alternative possible scenarios) which together exhaust the possibilities, so that the probability-assignments add up to 1. We can then distinguish those scenarios in which the antecedent is true and those in which it is not.
9

Pavlou, Antonis, Michalis Doumpos, and Constantin Zopounidis. "The Robustness of Portfolio Optimization Models." In Advances in Finance, Accounting, and Economics, 210–29. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-6114-9.ch008.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The optimization of investment portfolios is a topic of major importance in financial decision making, and many relevant models can be found in the literature. These models extend the traditional mean-variance framework using a variety of other risk-return measures. Existing comparative studies have adopted a rather restrictive approach, focusing solely on the minimum risk portfolio without considering the whole set of efficient portfolios, which are also relevant for investors. This chapter focuses on the performance of the whole efficient set. To this end, the authors examine the out-of-sample robustness of efficient portfolios derived by popular optimization models, namely the traditional mean-variance model, mean-absolute deviation, conditional value at risk, and a multi-objective model. Tests are conducted using data for S&P 500 stocks over the period 2005-2016. The results are analyzed through novel performance indicators representing the deviations between historical (estimated) efficient frontiers, actual out-of-sample efficient frontiers, and realized out-of-sample portfolio results.
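Of the risk measures compared in this chapter, conditional value at risk is the one most often re-implemented from scratch; the sketch below shows a plain historical (empirical) CVaR estimator on synthetic returns, as an illustration of the measure rather than of the authors' optimization models.

```python
import numpy as np

def historical_cvar(returns, alpha: float = 0.95) -> float:
    """Conditional value at risk (expected shortfall) of a return series,
    estimated from the empirical distribution: the mean loss in the worst
    (1 - alpha) tail."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)          # historical value at risk
    return losses[losses >= var].mean()       # average of tail losses

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0004, 0.01, size=2500)  # synthetic, not S&P 500 data
print(f"95% CVaR of the synthetic series = {historical_cvar(daily_returns):.4f}")
```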
10

Dubois, Didier, and Henri Prade. "Conditional Objects, Possibility Theory and Default Rules." In Conditionals: from Philosophy to Computer Science, 301–36. Oxford University PressOxford, 1995. http://dx.doi.org/10.1093/oso/9780198538615.003.0010.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Abstract Conditioning is very often considered in connection with probability; both look strongly entwined in the usual notion of conditional probability. This connection has created a gap between probability theory and logic: while the former seems to ignore material implication when representing conditional knowledge, the latter has no genuine tool to account for conditioning. This chapter is an investigation of the relationship between conditional objects of the form q|p, obtained as a qualitative counterpart to conditional probabilities P(q|p), and non-monotonic reasoning. Viewed as an inference rule, the conditional object possesses properties of a well-behaved non-monotonic consequence relation. The basic tool is the 3-valued semantics of conditional objects, which differs from the preferential semantics of Lehmann and colleagues and does not require probabilistic semantics. Semantic entailment of a conditional object q|p from a knowledge base made of conditional objects is equivalent to the inference of the conditional assertion p |~ q in Lehmann’s system P.
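As a concrete illustration of the 3-valued semantics mentioned above (the conditional object q|p is true when p∧q, false when p∧¬q, and inapplicable when p is false), here is a tiny sketch; the encoding of the third value as None is my own convention, not the chapter's notation.

```python
from itertools import product

def conditional_object(p: bool, q: bool):
    """Three-valued semantics of the conditional object q|p:
    True if p and q, False if p and not q, None ('void') if not p."""
    if not p:
        return None          # the conditional does not apply
    return q

for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5} -> q|p = {conditional_object(p, q)}")
```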

Тези доповідей конференцій з теми "Conditional p-Value":

1

Ka¨hko¨nen, Jukka, and Pentti Varpasuo. "Seismic Fragility Study for High-Pressure Emergency Cooling Water Tanks of Loviisa Nuclear Power Plant." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-29181.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The paper studies the fragility of the high-pressure emergency cooling water tanks in the Loviisa Nuclear Power Plant, located at elevation +25.40 in the reactor building. The seismic fragility is defined as the conditional probability of failure given a value of the response parameter, such as peak ground acceleration. Using the lognormal-distribution assumption, the fragility (i.e., the probability of failure, f′) at any non-exceedance probability level Q can be derived as Equation (1), $f' = \Phi\left[\frac{\ln(a/\bar{A}) + \beta_U\,\Phi^{-1}(Q)}{\beta_R}\right]$, where Q = P(f < f′ | a) is the probability that the conditional probability f is less than f′ for a peak ground acceleration a. $\bar{A}$ is the median ground acceleration capacity, $\beta_R$ is the logarithmic standard deviation representing the randomness about $\bar{A}$, and $\beta_U$ is the logarithmic standard deviation representing the uncertainty. The quantity $\Phi(\cdot)$ is the standard Gaussian cumulative distribution function. In order to assess the fragility of the tanks, the strain time histories for the tank supports and piping nozzles were calculated using a joint structural-equipment model. The ground motion response spectrum shape used in the structural response analysis was taken from the YVL 2.6 guide [1]. This shape represents the envelope spectrum for Southern Finland corresponding to a median annual frequency of 10^-5. The sampling of the model properties was carried out with the aid of the Latin hypercube sampling method. In order to find the failure modes, the strain time histories were calculated for the piping nozzles and for the support structures of the tanks. Since strain is the best measure of energy absorption, energy-limited events need to be based on strain acceptance criteria. The adopted failure limit is a cumulated plastic strain of 8% in the tank sheet metal or in the supporting structures. This failure limit has been taken from reference [2]. The end result of the study is the presentation of the median fragility curve for the tanks as well as the 95% and 5% fractile curves.
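The lognormal fragility model in Equation (1) is straightforward to evaluate; the sketch below uses illustrative capacity parameters, not the values derived for the Loviisa tanks.

```python
import numpy as np
from scipy.stats import norm

def fragility(a, A_med, beta_R, beta_U, Q=0.5):
    """Conditional probability of failure at peak ground acceleration a for the
    lognormal fragility model f' = Phi[(ln(a/A_med) + beta_U*Phi^-1(Q)) / beta_R].
    Q is the non-exceedance (confidence) level; Q = 0.5 gives the median curve."""
    return norm.cdf((np.log(a / A_med) + beta_U * norm.ppf(Q)) / beta_R)

a = np.linspace(0.05, 1.5, 8)            # PGA in g (illustrative values only)
A_med, beta_R, beta_U = 0.6, 0.25, 0.35  # hypothetical capacity parameters
for q in (0.05, 0.5, 0.95):
    print(q, np.round(fragility(a, A_med, beta_R, beta_U, Q=q), 3))
```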
2

Liepa, Sindija, Dace Butenaite, Jovita Pilecka-Ulcugaceva, and Inga Grinfelde. "Use of isotopes for identification of N2O sources from soils." In Research for Rural Development 2023 : annual 29th international scientific conference proceedings. Latvia University of Life Sciences and Technologies, 2023. http://dx.doi.org/10.22616/rrd.29.2023.034.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Natural processes and human activity play a crucial role in altering the nitrogen cycle and increasing nitrous oxide (N2O) emissions. Nitrous oxide isotopes 15N and 18O are important parameters that can help to explain the sources of N2O gas, as well as its circulation under different soil physical properties. The main goal of the study is to analyze the possibilities of using the nitrogen and oxygen isotopes 15N and 18O, measured in soil samples, for the identification of N2O sources. A total of 16 plots were sampled. Each soil sample was assigned a code. Wetting of the samples was carried out to create wet aerobic conditions and wet anaerobic conditions. N2O measurements were performed in laboratory conditions using the Picarro G5131-i device. The 15Nα and 15Nβ values obtained in the measurement data were used to calculate the δ15NSP and δ15Nbulk values. The obtained δ15NSP and δ15Nbulk values were analysed using two methods: descriptive statistics and the Kruskal-Wallis test. The test showed statistically significant differences between δ15NSP values (p-value <0.0001), while for δ15Nbulk there was no significant difference (p-value 0.885).
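A minimal sketch of the Kruskal-Wallis comparison described above, run on made-up δ15NSP values rather than the Picarro measurements:

```python
from scipy.stats import kruskal

# Hypothetical delta15N-SP values (per mil) for three treatment groups;
# placeholders standing in for the laboratory measurements described above.
dry = [14.2, 15.1, 13.8, 14.9, 15.4]
wet_aerobic = [21.7, 22.3, 20.9, 23.0, 21.2]
wet_anaerobic = [2.1, 3.4, 1.8, 2.9, 3.1]

h, p = kruskal(dry, wet_aerobic, wet_anaerobic)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```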
3

Ismail, Reem, and Riyadh I. Al-Raoush. "Statistical Analysis of the Effect of Water Table Fluctuation and Soil Layering on the Distribution of BTEX on Soil and Groundwater Under Anaerobic Condition." In The 2nd International Conference on Civil Infrastructure and Construction. Qatar University Press, 2023. http://dx.doi.org/10.29117/cic.2023.0185.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Crude oil, gasoline, and diesel fuel spills pollute groundwater in many coastal areas. BTEX is a group of hydrocarbons of concern due to its high water solubility, which allows it to spread widely in the subsurface environment. The mobile phase of LNAPLs percolates through porous soil and accumulates above the water table. Subsurface geology, pollutant morphology, and hydrogeologic site features make natural attenuation difficult to understand. Texture and vertical spatial variability affect soil hydraulic properties and the distribution of water and contaminants in soil profiles. Changes in rainfall strength and frequency and increased water demand may increase groundwater level oscillations in the next century. Five sets of columns, including one soil column and one equilibrium column, were operated for 150 days. One of the columns was operated under a steady-state condition (S), and four columns under transient water table conditions. The stable column (S) and the fluctuating column 1 (F1) contain homogenized soil, while the fluctuating columns 2, 3, and 4 contain heterogeneous soil. ORP values at the middle of the columns varied cyclically with WTF. EC values were greatly affected by fluctuation and temperature, and the statistical test p-value of 3.119e-10 < 0.05 implies that there are statistical differences between the EC values of these columns. On the other hand, pH values for the five columns fluctuated in the same range (P-value 0.3694 > 0.05). Soil layering affects the attenuation of BTEX, as the peak concentrations of benzene occurred at the second imbibition cycle for the homogeneous soil, while for the heterogeneous soil they occurred between the second and fourth imbibition cycles.
4

Sánchez-Rodríguez, Ana, Erik Rúa, Joaquín Martínez-Sánchez, Mercedes Solla, Belén Riveiro, Pedro Arias, and Henrique Lorenzo. "Latest trends for condition assessment using non-destructive techniques." In IABSE Symposium, Prague 2022: Challenges for Existing and Oncoming Structures. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2022. http://dx.doi.org/10.2749/prague.2022.1292.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Bridges are one of the most vulnerable assets within the transportation network. Ageing processes in combination with changing loading conditions make these assets especially vulnerable to structural damage and material degradation. To ensure the optimal operation, appropriate maintenance practices are required, and new techniques and methods facilitating a more accurate diagnostic and safety assessment are being demanded. The IM-SAFE project aims to fill the gaps in the existing European standards regarding monitoring, maintenance, and safety of transport infrastructure. This paper gathers information about surveying technologies with a focus on optical and radar remote sensing technologies. The final purpose of this article is to support the use of these technologies in the management of bridges and tunnels, and to demonstrate the value of their information for the safety assessment of in-service structures.
5

Slabaugh, Carson D., Lucky V. Tran, J. S. Kapat, and Bobby A. Warren. "Heat Transfer and Friction Augmentation in High Aspect Ratio, Ribbed Channels With Dissimilar Inlet Conditions." In 2010 14th International Heat Transfer Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ihtc14-23219.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This work is an investigation of the heat transfer and pressure-loss characteristics in a rectangular channel with ribs oriented perpendicular to the flow. The novelty of this study lies in the extreme parameters of the channel geometry and transport-enhancing features. Specifically, the aspect ratio (AR) of the rectangular channel is considerably high, varying from fifteen to thirty for the cases reported. Also varied is the rib-pitch to rib-height ratio (p/e), studied at two values: 18.8 and 37.3. The rib-pitch to rib-width ratio (p/w) is held at a value of two for all configurations. The channel Reynolds number is varied between approximately 3,000 and 27,000 for four different tests of each channel configuration. Each channel configuration is studied with two different inlet conditions. The baseline condition consists of a long entrance section leading to the entrance of the channel to provide a hydrodynamically developed flow at the inlet. The second inlet condition studied consists of a cross-flow supply in a direction perpendicular to the channel axis, oriented in the direction of the channel width (the longer channel dimension). In the second case, the flow rate of the cross-flow supply is varied to understand the effects of a varying momentum flux ratio on the heat transfer and pressure-loss characteristics of the channel. Numerical simulations revealed a strong dependence of the local flow physics on the momentum flux ratio. The turning of the flow entering the channel from the cross-flow channel is strongly affected by the pressure gradient across the channel. Strong pressure fields can propagate farther into the cross-flow channel to 'pull' the flow, partially redirecting it before it enters the channel and reducing the impingement effect of the flow on the back wall of the channel. Experimental results show that the maximum Nusselt number augmentation is found in the 30:1 AR channel with the aggressive augmenter (p/e = 37.3) and a high momentum flux ratio: Nu/Nu0 = 3.15. This design also yielded a friction augmentation of f/f0 = 2.6.
6

Croce, Pietro, Paolo Formichi, and Filippo Landi. "Structural safety and design under climate change." In IABSE Congress, New York, New York 2019: The Evolving Metropolis. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2019. http://dx.doi.org/10.2749/newyork.2019.1129.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The impact of climate change on climatic actions could significantly affect, in the mid-term future, the design of new structures as well as the reliability of existing ones designed in accordance with the provisions of present and past codes. Indeed, current climatic loads are defined under the assumption of stationary climate conditions, but the climate is not stationary, and the current accelerated rate of change requires its effects to be considered. An increase in greenhouse gas emissions generally induces a global increase of the average temperature, but at the local scale the consequences of this phenomenon can be much more complex and even apparently not coherent with the global trend of the main climatic parameters, such as temperature, rainfall, snowfall and wind velocity. In the paper, a general methodology is presented, aiming to evaluate the impact of climate change on structural design as the result of variations of the characteristic values of the most relevant climatic actions over time. The proposed procedure is based on the analysis of an ensemble of climate projections provided under a medium and a high greenhouse gas emission scenario. Factors of change for the extreme value distribution parameters and return values are thus estimated in subsequent time windows, providing guidance for adaptation of the current definition of structural loads. The methodology is illustrated together with the outcomes obtained for snow, wind and thermal actions in Italy. Finally, starting from the estimated changes in the extreme value parameters, the influence on long-term structural reliability can be investigated by comparing the resulting time-dependent reliability with the reference reliability levels adopted in modern structural codes.
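One common way to obtain the factors of change of return values mentioned above is to fit an extreme value distribution to annual maxima from two time windows and compare the resulting return levels; the sketch below is a generic GEV-based version of that idea on synthetic data, not the paper's ensemble-based procedure.

```python
import numpy as np
from scipy.stats import genextreme

def return_value(annual_maxima, T=50):
    """Fit a GEV distribution to a series of annual maxima and return the
    T-year return value (the level exceeded on average once every T years)."""
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

rng = np.random.default_rng(1)
past = rng.gumbel(loc=100, scale=20, size=40)     # synthetic "historical" annual maxima
future = rng.gumbel(loc=110, scale=24, size=40)   # synthetic projected annual maxima

factor_of_change = return_value(future) / return_value(past)
print(f"factor of change of the 50-year return value = {factor_of_change:.2f}")
```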
7

Gilo, Mordechai. "Design of a nonpolarizing beam splitter inside a glass cube." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.tudd3.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The reflectance and transmittance of dielectric thin films at oblique angles of incidence have strong polarization effects. For some uses these effects are undesirable. In this paper a nonpolarizing beamsplitter design concept is shown, based on the fact that in a quarterwave stack at λo, two effective indices that obey the Brewster condition for the p-state affect only the spectral performance of the s-state at λo. When added to a quarterwave stack with a certain p-state transmittance, these layers can change the s-state transmittance to almost any desired value without affecting the p-state. This concept can be applied to a wide range of angles and transmittance values, using effective quarterwave layers of at least three different materials.
8

Budinski Petković, Ljuba, and Ivana Lončarević. "PERCOLATION ON A TRIANGULAR LATTICE UNDER ANISOTROPIC CONDITIONS." In The 9th Conference on Mathematics in Engineering: Theory and Applications. Faculty of Technical Sciences, University of Novi Sad, 2024. http://dx.doi.org/10.24867/meta.2024.03.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The properties of percolation of objects of various shapes on a two-dimensional triangular lattice are studied by means of Monte Carlo simulations. The depositing objects of various shapes and sizes are made by directed self-avoiding walks on the lattice. Anisotropy is introduced by assigning unequal probabilities to the orientations of the depositing objects along the different directions of the lattice. This probability is equal to p or (1−p)/2, depending on whether the randomly chosen orientation is horizontal or not, respectively. It is found that the percolation threshold θp increases with the degree of anisotropy, having its maximum values for fully oriented objects.
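The anisotropy rule is easy to state in code; a minimal sketch of how object orientations could be sampled, with probability p for the horizontal direction and (1−p)/2 for each of the two oblique directions (illustration only, not the authors' simulation code):

```python
import numpy as np

def sample_orientations(p, n, rng=None):
    """Sample n object orientations on a triangular lattice under the anisotropy
    rule described above: direction 0 (horizontal) with probability p, each of
    the two oblique directions with probability (1 - p) / 2."""
    rng = rng or np.random.default_rng()
    return rng.choice(3, size=n, p=[p, (1 - p) / 2, (1 - p) / 2])

counts = np.bincount(sample_orientations(0.8, 100_000), minlength=3)
print(counts / counts.sum())   # approximately [0.8, 0.1, 0.1] for p = 0.8
```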
9

Graver, W. R., W. G. Mayer, and T. Ngoc. "Surface acoustic waves: optical depolarization in noncoplanar conditions." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1988. http://dx.doi.org/10.1364/oam.1988.thm3.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Optical depolarization resulting from the interaction with a surface acoustic wave (SAW) has been investigated for conditions where the acoustic beam is not in the optical plane of incidence. Contributions to the scattered optical field include the corrugation of the surface and the elastooptic effect at the surface in the case where the SAW propagates in fused silica and the optical wave propagates in air. The depolarization is investigated as a function of the optical incident angle θ, for −90° < θ < 90°, and as a function of the acoustic out-of-plane angle φ, for 0° < φ < 90°. Four initial-to-final polarization states are studied: S to S, P to P, S to P, and P to S. Scattering cross sections for 0 < K/k < 0.1 (where K and k are the acoustic and optical wavenumbers, respectively) are generated and discussed. Functional relationships between the surface corrugation and the elastooptic effect are also evaluated with regard to their contributions to the scattered field. A Jones matrix describing the four polarization states is defined. From this, a Stokes matrix is developed and evaluated with respect to the scattering cross-sectional values.
10

Gorai, Amit Kumar, and Abhishek Kaushik. "Condition Assessment and measures to Repair and Retrofit Baghajatin ROB, Kolkata." In IABSE Congress, New Delhi 2023: Engineering for Sustainable Development. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2023. http://dx.doi.org/10.2749/newdelhi.2023.1521.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This paper provides a comprehensive evaluation of Baghajatin ROB, located on the trunk route of the EM bypass of Kolkata Metropolitan, India. The bridge has been subjected to increased traffic volume, aggressive weathering conditions, and lack of proper maintenance over the years, resulting in its gradual deterioration. This paper aims to highlight the existing state of the ROB and discusses the methodology adopted to repair and retrofit. It includes the current progress of the repair and retrofitting at the site and the challenges involved in construction. This study provides value to engineers, policymakers, and infrastructure stakeholders seeking to develop effective strategies for maintaining and upgrading crucial infrastructure.

Звіти організацій з теми "Conditional p-Value":

1

Luo, Minjing, Yilin Li, Yingqiao Wang, Jinghan Huang, Zhihan Liu, Yicheng Gao, Qianyun Chai, Yuting Feng, Jianping Liu, and Yutong Fei. The Fragility of Statistically Significant Findings from Depression Randomized Controlled Trials. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2023. http://dx.doi.org/10.37766/inplasy2023.4.0086.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Review question / Objective: The Fragility of Statistically Significant Findings from Depression Randomized Controlled Trials. Condition being studied: Depression is a mental disorder characterized by a range of symptoms, including loss of memory and sleep, decreased energy, feelings of guilt or low mood, disturbed appetite, poor concentration, and an increased risk of suicide. According to a systematic analysis of the Global Burden of Disease Study 2019, depression is recognized as the leading cause of disease burden among mental disorders, accounting for the largest proportion of disability-adjusted life years (DALYs) at 37.3%. The fragility index (FI), the minimum number of changes from events to non-events that results in loss of statistical significance, has been suggested as a means to aid the interpretation of trial results, given the potential inadequacy of the threshold P-value as a robust tool for reporting binary outcomes in clinical trials. In this systematic review, we aim to calculate the FI of randomized controlled trials (RCTs) in depression.
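For reference, the fragility index described above can be computed by repeatedly switching non-events to events in the arm with fewer events and re-testing; a minimal sketch using Fisher's exact test (the usual choice in FI work, though this protocol may specify a different test):

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Minimum number of patients in the arm with fewer events who must be
    switched from non-event to event before Fisher's exact test loses
    significance (p >= alpha). Returns 0 if the result is already non-significant."""
    if events_a > events_b:                      # work on the arm with fewer events
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    flips = 0
    while events_a <= n_a:
        table = [[events_a, n_a - events_a], [events_b, n_b - events_b]]
        _, p = fisher_exact(table)
        if p >= alpha:
            return flips
        events_a += 1
        flips += 1
    return flips

# Hypothetical depression RCT: 10/100 vs. 25/100 patients with the outcome.
print(fragility_index(10, 100, 25, 100))
```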
2

Савосько, Василь Миколайович, Юлія Віліївна Бєлик, Юрій Васильович Лихолат, Герман Хайльмейер, and Іван Панасович Григорюк. Macronutrients and Heavy Metals Contents in the Leaves of Trees from the Devastated Lands at Kryvyi Rih District (Central Ukraine). КДПУ, 2020. http://dx.doi.org/10.31812/123456789/4151.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The relevance of these studies was due to the need to clarify the biogeochemical characteristics of woody plant species that grow naturally on devastated lands. The objective of this paper is to carry out a comparative analysis of the macronutrient and heavy metal contents in the leaves of trees spontaneously sprouting on the devastated lands of the Kryvyi Rih District. This research was performed at the Petrovsky waste rock dump, in the central part of the Kryvyi Rih iron-ore and metallurgical district (Dnipropetrovsk region, Ukraine). The contents of macronutrients (K, Ca, Mg, P and S) and heavy metals (Fe, Mn, Zn, Cu, Pb and Cd) in the leaves of three tree species (Ash-leaved Maple Acer negundo L., Silver Birch Betula pendula Roth. and Black Locust Robinia pseudoacacia L.) collected on the devastated lands were assessed. It was established that the trees growing on the Petrovsky dump live under an evident shortage of nutrients (especially K and P) and an excess of metals (especially Fe, Mn and Zn). Taking into account the revealed optimal concentrations of macronutrients and the lowest heavy metal contents in the leaves, we assume that Ash-leaved Maple and Black Locust (compared to Silver Birch) are more resistant to the geochemical conditions of devastated lands.
3

Schwartz, Bertha, Vaclav Vetvicka, Ofer Danai, and Yitzhak Hadar. Increasing the value of mushrooms as functional foods: induction of alpha and beta glucan content via novel cultivation methods. United States Department of Agriculture, January 2015. http://dx.doi.org/10.32747/2015.7600033.bard.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
During the granting period, we performed the following projects. Firstly, we differentially measured glucan content in several Pleurotus mushroom strains. Mushroom polysaccharides are edible polymers that have numerous reported biological functions; the most common effects are attributed to β-glucans. In recent years, it became apparent that the less abundant α-glucans also possess potent effects in various health conditions. In our first study, we explored several Pleurotus species for their total, β- and α-glucan content. Pleurotus eryngii was found to have the highest total glucan concentrations and the highest proportion of α-glucans. We also found that the stalks (stipe) of the fruit body contained higher glucan content than the caps (pileus). Since mushrooms respond markedly to changes in environmental and growth conditions, we developed cultivation methods aiming to increase the levels of α- and β-glucans. Using olive mill solid waste (OMSW) from three-phase olive mills in the cultivation substrate, we were able to enrich mainly the levels of α-glucans. Maximal total glucan concentrations were enhanced up to twofold when the growth substrate contained 80% OMSW compared to no OMSW. Taken together, this study demonstrates that Pleurotus eryngii can serve as a potentially rich source of glucans for nutritional and medicinal applications and that glucan content in mushroom fruiting bodies can be further enriched by applying OMSW to the cultivation substrate. We then compared the immune-modulating activity of glucans extracted from P. ostreatus and P. eryngii on phagocytosis of peripheral blood neutrophils and superoxide release from HL-60 cells. The results suggest that the anti-inflammatory properties of these glucans are partially mediated through modulation of neutrophil effector functions (P. eryngii was more effective). Additionally, both glucans dose-dependently competed for the anti-Dectin-1 and anti-CR3 antibody binding. We then tested the putative anti-inflammatory effects of the extracted glucans in inflammatory bowel disease (IBD) using the dextran sulfate sodium (DSS)-induced model in mice. The clinical symptoms of IBD were efficiently relieved by treatment with two different doses of the glucan from both fungi. Glucan fractions, from either P. ostreatus or P. eryngii, markedly prevented TNF-α-mediated inflammation in the DSS-induced inflamed intestine. These results suggest that glucan preparations from different fungi vary in their anti-inflammatory ability. In our next study, we tested the effect of glucans on lipopolysaccharide (LPS)-induced production of TNF-α. We demonstrated that glucan extracts are more effective than mill mushroom preparations. Additionally, the effectiveness of stalk-derived glucans was slightly more pronounced than that of cap-derived glucans. Cap and stalk glucans from mill or isolated glucan competed dose-dependently with anti-Dectin-1 and anti-CR3 antibodies, indicating that they contain β-glucans recognized by these receptors. Using the dextran sulfate sodium (DSS)-induced inflammatory bowel disease mouse model, the intestinal inflammatory response to the mill preparations was measured and compared to extracted glucan fractions from caps and stalks. We found that mill and glucan extracts were very effective in downregulating IFN-γ and MIP-2 levels and that stalk-derived preparations were more effective than those from caps. 
The tested glucans were equally effective in regulating the number of CD14/CD16 monocytes and upregulating the levels of fecal-released IgA to almost normal levels. In conclusion, the most effective glucans in ameliorating some IBD-associated inflammatory symptoms induced by DSS treatment in mice were glucan extracts prepared from the stalk of P. eryngii. These spatial distinctions may be helpful in selecting more effective, specific anti-inflammatory mushroom-derived glucans. We additionally tested the effect of glucans on lipopolysaccharide-induced production of TNF-α, which demonstrated that stalk-derived glucans were more effective than cap-derived glucans. Isolated glucans competed with anti-Dectin-1 and anti-CR3 antibodies, indicating that they contain β-glucans recognized by these receptors. In conclusion, the most effective glucans in ameliorating IBD-associated symptoms induced by DSS treatment in mice were glucan extracts prepared from the stalk of P. eryngii grown at higher concentrations of OMSW. We conclude that these stress-induced growing conditions may be helpful in selecting more effective glucans derived from edible mushrooms. Based on the findings that we could enhance glucan content in Pleurotus eryngii following cultivation of the mushrooms on a substrate containing different concentrations of olive mill solid waste (OMSW), and that these changes are directly related to the content of OMSW in the growing substrate, we tested the extracted glucans in several models. Using the dextran sulfate sodium (DSS)-induced inflammatory bowel disease (IBD) mouse model, we measured the colonic inflammatory response to the different glucan preparations. We found that the histology damaging score (HDS) resulting from DSS treatment, which reached a value of 11.8 ± 2.3, was efficiently downregulated by treatment with the fungal extracted glucans: glucans extracted from stalks cultivated at 20% OMSW downregulated it to an HDS value of 6.4 ± 0.5, and those at 80% OMSW showed the strongest effects (5.5 ± 0.6). Similar downregulatory effects were obtained for the expression of various intestinal cytokines. All tested glucans were equally effective in regulating the number of CD14/CD16 monocytes, from 18.2 ± 2.7% for DSS to 6.4 ± 2.0 for DSS + glucans extracted from stalks cultivated at 50% OMSW. We finally tested glucans extracted from Pleurotus eryngii grown on a substrate containing increasing concentrations of olive mill solid waste (OMSW), which contain greater glucan concentrations as a function of OMSW content. Treatment of rat intestinal epithelial cells (IEC-6) transiently transfected with NF-κB fused to luciferase demonstrated that glucans extracted from P. eryngii stalks grown on 80% OMSW downregulated TNF-α activation. Glucans from mushrooms grown on 80% OMSW exerted the most significant reduction of nitric oxide production in lipopolysaccharide (LPS)-treated J774A.1 murine macrophages. The isolated glucans were tested in vivo using dextran sodium sulfate (DSS)-induced colitis in C57Bl/6 mice and were found to reduce the histology damaging score resulting from DSS treatment. Expression of various intestinal cytokines was efficiently downregulated by treatment with the fungal extracted glucans. We conclude that the stress-induced growing conditions exerted by OMSW induce the production of more effective anti-inflammatory glucans in P. eryngii stalks.
4

Brosh, Arieh, David Robertshaw, Yoav Aharoni, Zvi Holzer, Mario Gutman, and Amichai Arieli. Estimation of Energy Expenditure of Free Living and Growing Domesticated Ruminants by Heart Rate Measurement. United States Department of Agriculture, April 2002. http://dx.doi.org/10.32747/2002.7580685.bard.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Research objectives were: 1) To study the effect of diet energy density, level of exercise, thermal conditions and reproductive state on cardiovascular function as it relates to oxygen (O2) mobilization. 2) To validate the use of heart rate (HR) to predict energy expenditure (EE) of ruminants, by measuring and calculating the energy balance components at different productive and reproductive states. 3) To validate the use of HR to identify changes in the metabolizable energy (ME) and ME intake (MEI) of grazing ruminants. Background: The development of an effective method for the measurement of EE is essential for understanding the management of both grazing and confined feedlot animals. The use of HR as a method of estimating EE in free-ranging large ruminants has been limited by the availability of suitable field monitoring equipment and by the absence of empirical understanding of the relationship between cardiac function and metabolic rate. Recent developments in microelectronics provide a good opportunity to use small HR devices to monitor free-range animals. The estimation of O2 uptake (VO2) of animals from their HR has to be based upon a consistent relationship between HR and VO2. The question as to whether, or to what extent, feeding level, environmental conditions and reproductive state affect such a relationship is still unanswered. Studies on the basic physiology of O2 mobilization (in the USA) and field and feedlot-based investigations (in Israel) covered a variety of conditions in order to investigate the possibilities of using HR to estimate EE. In the USA, the physiological studies conducted using animals with implanted flow probes show that: 1) although stroke volume decreases during intense exercise, VO2 per heart beat per kgBW^0.75 (O2 pulse, O2P) actually increases, and measurement of EE by HR and a constant O2P may underestimate VO2 unless the slope of the regression relating heart rate and VO2 is also determined; 2) alterations in VO2 associated with the level of feeding and the effects of feeding itself have no effect on O2P; 3) both pregnancy and lactation may increase blood volume, especially lactation, but they have no effect on O2P; 4) ambient temperature in the range of 15 to 25°C in the resting animal has no effect on O2P; and 5) severe heat stress, induced by exercise, elevates body temperature to a sufficient extent that 14% of cardiac output may be required to dissipate the heat generated by exercise rather than for O2 transport. However, this is an unusual situation and its effect on EE estimation in a freely grazing animal, especially when heart rate is monitored over several days, is minor. In Israel, three experiments were carried out in the hot summer to define changes in O2P attributable to changes in the time of day or in the heat load. The animals used were lambs and young calves in the growing phase and highly yielding dairy cows. In the growing animals the time of day, or the heat load, affected HR and VO2, but had no effect on O2P. On the other hand, the O2P measured in lactating cows was affected by the heat load; this is similar to the finding in the USA study of sheep. Energy balance trials were conducted to compare MEI recovery by the retained energy (RE) and by EE as measured by HR and O2P. The trial hypothesis was that if HR reliably estimated EE, the ratio of MEI to (EE+RE) would not be significantly different from 1.0. Beef cows along a year of their reproductive cycle and growing lambs were used. 
The MEI recoveries of both trials were not significantly different from 1.0: 1.062±0.026 and 0.957±0.024, respectively. The cows' reproductive state did not affect the O2P, which is similar to the finding in the USA study. Pasture ME content and animal variables such as HR, VO2, O2P and EE of cows at grazing and in confinement were measured throughout three years under twenty-nine combinations of herbage quality and cows' reproductive state. In twelve grazing states, individual faecal output (FO) was measured and MEI was calculated. Regression analyses of the EE and RE dependent on MEI were highly significant (P<0.001). The predicted values of EE at zero intake (78 kcal/kgBW^0.75) were similar to those estimated by NRC (1984). The EE at maintenance of the grazing cows (EE=MEI, 125 kcal/kgBW^0.75) is in the range of 96.1 to 125.5 presented by NRC (1996, pp. 6-7) for beef cows. Average daily HR and EE were significantly increased by lactation, P<0.001 and P<0.02 respectively. Grazing ME significantly increased HR and EE, P<0.001 and P<0.001 respectively. In contradiction to the finding in confined ewes and cows, the O2P of the grazing cows was significantly affected by the combined treatments (P<0.001); this effect was significantly related to the diet ME (P<0.001) and consequently to the MEI (P<0.03). Grazing significantly increased O2P compared to confinement. So, when EE of grazing animals during a certain season of the year is estimated using the HR method, the O2P must be re-measured whenever grazing ME changes. A high correlation (R^2>0.96) of group average EE and of HR dependency on MEI was also found in confined cows, which were fed six different diets, and in growing lambs on three diets. In conclusion, the studies conducted in the USA and in Israel investigated in depth the physiological mechanisms of cardiovascular function and O2 mobilization, and went on to investigate a wide variety of ruminant species, ages, reproductive states, dietary ME, time of intake and time of day, and compared these variables under grazing and confinement conditions. From these combined studies we can conclude that EE can be determined from HR measurements taken over several days, multiplied by O2P measured over a short period of time (10-15 min). The study showed that RE could be determined during the growing phase without slaughtering. In the near future, the development of microelectronic devices will enable wide use of the HR method to determine EE and energy balance. It will open new scope for physiological and agricultural research while minimizing the strain on animals. The method also has a high potential as a tool for herd management.
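A purely illustrative numerical sketch of the heart-rate method summarized above (energy expenditure estimated as mean heart rate times O2 pulse, converted to energy); all input values, including the energetic equivalent of oxygen, are assumptions rather than numbers from the trials.

```python
# Hedged sketch of the HR x O2P calculation; illustrative values only.
hr_bpm = 72.0          # mean heart rate over several days (beats per minute)
o2_pulse_ml = 0.25     # assumed O2 pulse: ml O2 per beat per kg BW^0.75
bw_kg = 450.0          # body weight of the cow
kj_per_l_o2 = 20.5     # assumed energetic equivalent of oxygen (kJ per litre O2)

metabolic_size = bw_kg ** 0.75
vo2_l_per_day = hr_bpm * 60 * 24 * o2_pulse_ml * metabolic_size / 1000.0
ee_mj_per_day = vo2_l_per_day * kj_per_l_o2 / 1000.0
ee_kcal_per_kg075 = ee_mj_per_day * 239.0 / metabolic_size   # 1 MJ is about 239 kcal

print(f"VO2 = {vo2_l_per_day:.0f} L/day, EE = {ee_mj_per_day:.1f} MJ/day "
      f"({ee_kcal_per_kg075:.0f} kcal/kgBW^0.75/day)")
```

With these assumed inputs the result lands near the maintenance value of about 125 kcal/kgBW^0.75 quoted in the abstract, which is only meant to show that the arithmetic is of the right order.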
5

Roschelle, Jeremy, Britte Haugan Cheng, Nicola Hodkowski, Julie Neisler, and Lina Haldar. Evaluation of an Online Tutoring Program in Elementary Mathematics. Digital Promise, April 2020. http://dx.doi.org/10.51388/20.500.12265/94.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Many students struggle with mathematics in late elementary school, particularly on the topic of fractions. In a best-evidence synthesis of research on increasing achievement in elementary school mathematics, Pelligrini et al. (2018) highlighted tutoring as a way to help students. Online tutoring is attractive because costs may be lower and logistics easier than with face-to-face tutoring. Cignition developed an approach that combines online 1:1 tutoring with a fractions game, called FogStone Isle. The game provides students with additional learning opportunities and provides tutors with information that they can use to plan tutoring sessions. A randomized controlled trial investigated the research question: Do students who participate in online tutoring and a related mathematical game learn more about fractions than students who only have access to the game? Participants were 144 students from four schools, all serving low-income students with low prior mathematics achievement. In the Treatment condition, students received 20-25 minute tutoring sessions twice per week for an average of 18 sessions and also played the FogStone Isle game. In the Control condition, students had access to the game, but did not play it often. Control students did not receive tutoring. Students were randomly assigned to condition after being matched on pre-test scores. The same diagnostic assessment was used as a pre-test and as a post-test. The planned analysis looked for differences in gain scores (post-test minus pre-test scores) between conditions. We conducted a t-test on the aggregate gain scores, comparing conditions; the results were statistically significant (t = 4.0545, df = 132.66, p-value < .001). To determine an effect size, we treated each site as a study in a meta-analysis. Using gain scores, the effect size was g=+.66. A more sophisticated treatment of the pooled standard deviation resulted in a corrected effect size of g=.46 with a 95% confidence interval of [+.23,+.70]. Students who received online tutoring and played the related FogStone Isle game learned more; our research found the approach to be efficacious. The Pelligrini et al. (2018) meta-analysis of elementary math tutoring programs found g = .26 and was based largely on face-to-face tutoring studies. Thus, this study compares favorably to prior research on face-to-face mathematics tutoring with elementary students. Limitations are discussed; in particular, this is an initial study of an intervention under development. Effects could increase or decrease as development continues and the program scales. Although this study was planned long before the current pandemic, results are particularly timely now that many students are at home under shelter-in-place orders due to COVID-19. The approach taken here is feasible for students at home, with tutors supporting them from a distance. It is also feasible in many other situations where equity could be addressed directly by supporting students via online tutors.
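A hedged sketch of the gain-score analysis described above (a two-sample t-test plus a pooled-SD standardized effect size with the small-sample correction), run on synthetic data since the study's raw scores are not reported here:

```python
import numpy as np
from scipy.stats import ttest_ind

def hedges_g(x, y):
    """Standardized mean difference (Hedges' g) between two groups of gain
    scores, using the pooled standard deviation and the usual small-sample
    correction factor."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    d = (x.mean() - y.mean()) / pooled_sd
    return d * (1 - 3 / (4 * (nx + ny) - 9))   # small-sample correction

rng = np.random.default_rng(2)
treatment_gains = rng.normal(6.0, 4.0, 72)   # synthetic gain scores, not study data
control_gains = rng.normal(3.5, 4.0, 72)

t, p = ttest_ind(treatment_gains, control_gains)
print(f"t = {t:.2f}, p = {p:.4f}, g = {hedges_g(treatment_gains, control_gains):.2f}")
```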
6

Welch, David, and Gregory Deierlein. Technical Background Report for Structural Analysis and Performance Assessment (PEER-CEA Project). Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/yyqh3072.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This report outlines the development of earthquake damage functions and comparative loss metrics for single-family wood-frame buildings with and without seismic retrofit of vulnerable cripple wall and stem wall conditions. The underlying goal of the study is to quantify the benefits of the seismic retrofit in terms of reduced earthquake damage and repair or reconstruction costs. The earthquake damage and economic losses are evaluated based on the FEMA P-58 methodology, which incorporates detailed building information and analyses to characterize the seismic hazard, structural response, earthquake damage, and repair/reconstruction costs. The analyses are informed by and include information from other working groups of the Project to: (1) summarize past research on performance of wood-frame houses; (2) identify construction features to characterize alternative variants of wood-frame houses; (3) characterize earthquake hazard and ground motions in California; (4) conduct laboratory tests of cripple wall panels, wood-frame wall subassemblies and sill anchorages; and (5) validate the component loss models with data from insurance claims adjustors. Damage functions are developed for a set of wood-frame building variants that are distinguished by the number of stories (one- versus two-story), era (age) of construction, interior wall and ceiling materials, exterior cladding material, and height of the cripple walls. The variant houses are evaluated using seismic hazard information and ground motions for several California locations, which were chosen to represent the range seismicity conditions and retrofit design classifications outlined in the FEMA P-1100 guidelines for seismic retrofit. The resulting loss models for the Index Building variants are expressed in terms of three outputs: Mean Loss Curves (damage functions), relating expected loss (repair cost) to ground-motion shaking intensity, Expected Annual Loss, describing the expected (mean) loss at a specific building location due to the risk of earthquake damage, calculated on an annualized basis, and Expected RC250 Loss, which is the cost of repairing damage due to earthquake ground shaking with a return period of 250 years (20% chance of exceedance in 50 years). The loss curves demonstrate the effect of seismic retrofit by comparing losses in the existing (unretrofitted) and retrofitted condition across a range of seismic intensities. The general findings and observations demonstrate: (1) cripple walls in houses with exterior wood siding are more vulnerable than ones with stucco siding to collapse and damage; (2) older pre-1945 houses with plaster on wood lath interior walls are more susceptible to damage and losses than more recent houses with gypsum wallboard interiors; (3) two-story houses are more vulnerable than one-story houses; (4) taller (e.g., 6-ft-tall) cripple walls are generally less vulnerable to damage and collapse than shorter (e.g., 2-ft-tall) cripple walls; (5) houses with deficient stem wall connections are generally observed to be less vulnerable to earthquake damage than equivalent unretrofitted cripple walls with the same superstructure; and (6) the overall risk of losses and the benefits of cripple wall retrofit are larger for sites with higher seismicity. 
As summarized in the report, seismic retrofit of unbraced cripple walls can significantly reduce the risk of earthquake damage and repair costs, with reductions in Expected RC250 Loss risk of up to 50% of the house replacement value for an older house with wood-frame siding at locations of high seismicity. In addition to the reduction in repair cost risk, the seismic retrofit has an important additional benefit to reduce the risk of major damage that can displace residents from their house for many months.
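A small check of the return-period statement quoted above: the "20% chance of exceedance in 50 years" figure follows from the simple rate approximation t/T, while a Poisson occurrence model gives roughly 18% for a 250-year return period.

```python
import math

return_period = 250.0   # years
exposure = 50.0         # years of exposure

p_simple = exposure / return_period                    # rate approximation: 0.20
p_poisson = 1.0 - math.exp(-exposure / return_period)  # Poisson model: about 0.18

print(f"simple approximation: {p_simple:.0%}, Poisson model: {p_poisson:.1%}")
```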
7

Laxmi Prasanna, Porandla, B. Anil kumar, and Macha Sahithi. A STUDY TO EVALUATE THE TEAR FILM CHANGES IN PATIENTS WITH PTERYGIUM. World Wide Journals, February 2023. http://dx.doi.org/10.36106/ijar/3408221.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Introduction: Pterygium is a degenerative condition of the subconjunctival tissues, which proliferate as vascularized granulation tissue to invade the cornea, destroying the superficial layers of the stroma and Bowman's membrane, the whole being covered by conjunctival epithelium. The tear film consists of three layers; the most superficial layer of the tear film is the lipid layer produced by the meibomian glands. The middle layer is the aqueous layer produced by the main lacrimal gland as well as the accessory lacrimal glands of Krause and Wolfring. The aqueous layer constitutes over 90% of the tear film. The layer closest to the cornea is the mucin layer produced by conjunctival goblet cells. Tear function abnormalities have been proposed as an etiologic factor for pterygium due to the observation that a pterygium is exacerbated by dryness and dellen formation. Whether tear dysfunction is a precursor to pterygium growth or pterygium causes tear dysfunction is still not clear. The present study was taken up to study the tear film changes in patients presenting with pterygium. Materials and methods: The present prospective study was conducted at the Department of Ophthalmology, Chalmeda Anand Rao Institute of Medical Sciences from Jan 2021 to July 2022. 75 patients satisfying the inclusion and exclusion criteria were included in the study. The eye with pterygium was considered as the case and the normal eye of the same patient was considered as the control. Data were recorded for 150 eyes. All patients underwent visual acuity assessment, a detailed slit-lamp examination and ophthalmoscopy to rule out adnexal, anterior segment and posterior segment diseases. Patients were evaluated for tear film changes using Schirmer's test (with anesthesia), tear film break-up time and tear film meniscus height. Results: The mean age of the study population was 34.7±4.98 years, with 56% males and 44% females. Pterygium was present in the right eye in 73.33% (n=55) of cases and in the left eye in 26.66% (n=20); all were on the nasal conjunctiva. Schirmer's test values were significantly lower in eyes with pterygium, with a P value of <0.001. Tear film break-up time and tear film meniscus height were also significantly lower in the eyes with pterygium (P<0.001). Conclusion: From the present study, we can suggest that an unstable tear film is found to a greater extent in eyes with pterygium than in eyes without pterygium. Pterygium is one of the most common ocular surface disorders which results in instability of tear film indices and thus leads to a dysfunctional tear film and the development of dry eye.
8

Israel, Alvaro, and John Merrill. Production of Seed Stocks for Sustainable Tank Cultivation of the Red Edible Seaweed Porphyra. United States Department of Agriculture, 2006. http://dx.doi.org/10.32747/2006.7696527.bard.

Abstract:
Porphyra species (commonly known as ‘nori’ or ‘purple laver’) are edible red seaweeds rich in proteins, vitamins, and other highly valued biogenic compounds. For years Porphyra has been cultured using seeded nets extended in the open sea, and its biomass is consumed primarily in the Far East. While international market demand has increased steadily at an average of 20% per year, supplies are stretched to their limit and are not expected to meet future demand. Land-based cultivation of seaweed has therefore become attractive in the mariculture industry, since (1) important growth parameters can be controlled, (2) it is environmentally friendly, and (3) it fits well with integrated aquaculture, leading to sustainable, high-quality products. During the last few years a tank cultivation technology for Porphyra has been developed at the Israeli institution. This technology is based on indoor production of asexual spores and their subsequent growth to 1–2 mm seedlings. The seedlings are then transferred to outdoor tanks and ponds when seawater temperatures drop to 20 °C or below and days become shorter in winter. However, the current technology efficiently serves only about 100 m² of ponds during one growth season. In order to produce seedlings in sufficient amounts, it is critical to address both technical and biological aspects of seedling production, securing optimal scale-up to commercial-size cultivation farms. We hypothesized that mass production of spores is related to thallus origin, thallus age, and sporulation triggers, and that seedling survival and subsequent growth potential are determined by seawater quality and the overall indoor growth conditions imposed. A series of bioreactors was constructed and tested in which spore release and spore growth were studied separately. The main criteria for assessing seedling viability were their electron transport rate, measured by PAM fluorometry, and their subsequent growth and biomass yields in outdoor ponds. Altogether the project showed that (1) controlled sporulation is possible in large outdoor/growth-chamber settings provided initial stock material (small frozen seedlings) is at hand; (2) contamination problems can be almost completely avoided if the stock material is properly handled (kept as clean as possible and partially dehydrated prior to freezing); (3) spore release can be significantly enhanced using high nutrient levels during thawing for P. yezoensis and P. haitanensis, but not for P. rosengurttii; and (4) PAM fluorometry is an efficient tool for estimating growth capacity in both seedlings and juvenile thalli. The BARD funding also served to explore other aspects of Porphyra biology and cultivation: for example, the taxonomic status of the Porphyra strains used in this study was defined (see appendix), and the potential use of this seaweed in bioremediation was well substantiated. In addition, BARD funding supported a number of opportunities and activities in the Israeli lab directly or indirectly related to the initial objectives of the project, such as additional molecular work on other seaweeds, the description of at least two species new to the Israeli Mediterranean, and continued support for the writing of a book on global change and applied aspects of seaweeds. The technology for Porphyra cultivation in land-based ponds is readily available.
This study corroborated previous know-how on Porphyra growth in tanks and ponds, while offering important improvements in seedling production and handling for successful cultivation. It also supported various other activities that opened up additional important questions in the biology, cultivation, and use of Porphyra and other seaweeds.
9

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low-dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which eight were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8%–26.7%; P=0.004) relative reduction in lung cancer mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed the equivalent of 20 cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI 1.2%–13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI 9%–41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials.
These seven trials demonstrated a significantly greater proportion of early-stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early-stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g.
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.
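For readers unfamiliar with the relative risk figures quoted above (e.g., the pooled RR of 0.82, 95% CI 0.73–0.91), the short Python sketch below shows how a relative risk and a Wald-type 95% confidence interval are computed from two-arm event counts. The counts are hypothetical and are not taken from any of the trials cited in the review.

# A minimal sketch (illustrative only) of a relative risk and its Wald-type 95% CI.
import math

deaths_ldct, n_ldct = 350, 30_000      # lung cancer deaths / participants, screening arm (hypothetical)
deaths_ctrl, n_ctrl = 425, 30_000      # lung cancer deaths / participants, control arm (hypothetical)

# Relative risk = risk in the screening arm divided by risk in the control arm.
rr = (deaths_ldct / n_ldct) / (deaths_ctrl / n_ctrl)

# Standard error of log(RR), then exponentiate the log-scale confidence limits.
se_log_rr = math.sqrt(1/deaths_ldct - 1/n_ldct + 1/deaths_ctrl - 1/n_ctrl)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
print(f"relative reduction in lung cancer mortality = {1 - rr:.0%}")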
