Добірка наукової літератури з теми "Empirical p-Value"

Оформте джерело за APA, MLA, Chicago, Harvard та іншими стилями

Оберіть тип джерела:

Ознайомтеся зі списками актуальних статей, книг, дисертацій, тез та інших наукових джерел на тему "Empirical p-Value".

Біля кожної праці в переліку літератури доступна кнопка «Додати до бібліографії». Скористайтеся нею – і ми автоматично оформимо бібліографічне посилання на обрану працю в потрібному вам стилі цитування: APA, MLA, «Гарвард», «Чикаго», «Ванкувер» тощо.

Також ви можете завантажити повний текст наукової публікації у форматі «.pdf» та прочитати онлайн анотацію до роботи, якщо відповідні параметри наявні в метаданих.

Статті в журналах з теми "Empirical p-Value":

1

Butler, J. S., and Peter Jones. "Theoretical and empirical distributions of the p value." METRON 76, no. 1 (December 11, 2017): 1–30. http://dx.doi.org/10.1007/s40300-017-0130-2.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Hanifah, Risti Ulfi, Eviatiwi Kusumaningtyas Sugiyanto, and Dian Triyani. "INDUSTRY VALUE: EMPIRICAL STUDY OF FACTORS AFFECTING." Economics and Business Solutions Journal 5, no. 2 (October 31, 2021): 89. http://dx.doi.org/10.26623/ebsj.v5i2.3493.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
<p><em>The purpose of this research is to analyze the factors that influence the value of the industry. The independent variables are investment opportunities, dividend policy, financial leverage, profitability. The population in this research is a non-financial industry listed on the IDX from 2017 to 2019. The illustrations for this research were selected using a purposive sampling procedure, and 200 illustrations were taken. Information analysis uses multiple regression procedures. The results of this research show that financial leverage, profitability and an independent board of commissioners affect the value of the industry, on the other hand, investment opportunities do not affect the value of the industry.</em></p>
3

Kung, Liang-Hsi, and Yu-Hua Yan. "Empirical Study on Hospitalist System: A Value Creation Perspective." Healthcare 12, no. 10 (May 7, 2024): 953. http://dx.doi.org/10.3390/healthcare12100953.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study investigates the impact of hospitalist system awareness, motivation, and behavior on value creation within the healthcare context of Taiwan. As population aging and the prevalence of chronic diseases continue to rise, accompanied by increased medical resource consumption, the Taiwan Ministry of Health and Welfare introduced the hospitalist system. Despite its implementation, the number of participating hospitals remains low. Using a questionnaire survey conducted from October 2021 to March 2022, data were collected from medical teams involved in the hospitalist system. A total of 324 valid questionnaires were analyzed. The results reveal that hospitalist awareness positively influences participation motivation (β = 0.846, p < 0.001), which subsequently impacts participation behavior positively (β = 0.888, p < 0.001). Moreover, participation behavior significantly contributes to value creation (β = 0.869, p < 0.001), along with the direct effect of awareness (β = 0.782, p < 0.001) on value creation. In conclusion, the successful promotion and implementation of the hospitalist system rely heavily on the support and active participation of medical staff. Effective interactions and comprehensive information dissemination are essential for maximizing healthcare value creation.
4

Ratmono, Dwi, and Darsono Darsono. "New public management and corruption: Empirical evidence of local governments in Indonesia." Public and Municipal Finance 11, no. 1 (June 7, 2022): 54–62. http://dx.doi.org/10.21511/pmf.11(1).2022.05.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study is relevant because it examines the determinants of corruption in local governments that have a negative impact on the success of sustainable development. This study aims to examine the effect of New Public Management (NPM), as measured by fiscal decentralization, financial reporting quality and independent audits, on the level of corruption. The sample consisted of 433 local governments in Indonesia based on data from 2011–2017. PLS-SEM was used as a data analysis technique. The results test shows that fiscal decentralization positively affects corruption with a path coefficient of 0.19 and a p-value of 0.004. The quality of financial reporting has a negative effect on the level of corruption with a coefficient of –0.26 and a p-value &amp;lt; 0.001. Hypotheses testing results also show that audit finding positively affects corruption with a coefficient of 0.10 and a p-value &amp;lt; 0.10. On the other hand, follow-up audit results have no significant effect on corruption with a p-value &amp;gt; 0.10. This study concludes that the NPM mechanism in the form of fiscal decentralization positively affects corruption. These results imply that fiscal decentralization needs to be balanced with good governance, among others, by increasing the quality of financial reports and independent audits.
5

Goodman, William M., Susan E. Spruill, and Eugene Komaroff. "A Proposed Hybrid Effect Size Plus p-Value Criterion: Empirical Evidence Supporting its Use." American Statistician 73, sup1 (March 20, 2019): 168–85. http://dx.doi.org/10.1080/00031305.2018.1564697.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
6

Tabansi Okeke, Callistus, Chinwe Ann Anisiobi, and Chinwe Monica Madueke. "Public Debt and Economic Growth: Empirical Evidence from Nigeria." International Journal of Research and Innovation in Social Science VII, no. III (2023): 705–18. http://dx.doi.org/10.47772/ijriss.2023.7309.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study examined the impact of public debt on economic growth in Nigeria using annual secondary data from 1981 to 2021 and the Auto-Regressive Distributed Lag technique. The variables used in the study were real gross domestic product (RGDP), which is the proxy for economic growth, gross fixed capital formation (GFCF), external debt (EXDT), exchange rate (EXCR), domestic debt (DODT) and debt service repayment (DSRT). The results of the findings show that the past value of RGDP, GFCF, EXDT and DSRT have positive impact on economic growth in Nigeria. Also, EXCR and DODT have negative impact on economic growth in Nigeria. Judging from the p values, the lagged value of RGDP, GFCF, EXDT and DSRT are statistically significant as their p values are lower than critical values at 5 percent level of significance, while EXCR and DODT have no significant impact on economic growth in Nigeria. Based on the findings, the study recommends that government should formulate and effectively implement policy that would boost domestic revenue generation by broadening the revenue base, improving capacity to tax and curtailing ineffective government spending. Also, borrowed funds should be utilized for the diversification of the productive base of the economy.
7

Bian, N'dri Hubert. "A goodness-of-fit test based on Kendall’s process: Durante's bivariate copula models." Afrika Statistika 16, no. 3 (July 1, 2021): 2851–82. http://dx.doi.org/10.16929/as/2021.2851.187.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The proposed goodness-of-fit testing procedures for copula models are fairly recent. The new test statistics or omnibus tests are functional of an empirical process motivated by the theoretical and empirical versions of Kendall’s or Spearman's dependence function. In this paper, we propose a fitting procedure for a symmetric and flexible copula model with a non-zero singular component using the Kendall process. The conditions under which this empirical process weakly converges are satisfied. Using a parametric bootstrap method that allows to compute approximate p-values, it is empirically shown that tests based on the Cramer-von Mises distance keeps the prescribed value for the nominal level under the null hypothesis. Simulation studies that demonstrate the power of the fit test are presented.
8

Burucuoglu, Murat, and Evrim Erdogan. "An Empirical Examination of the Relation between Consumption Values, Mobil Trust and Mobile Banking Adoption." International Business Research 9, no. 12 (November 23, 2016): 131. http://dx.doi.org/10.5539/ibr.v9n12p131.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
<p>The purpose of this study is to examine the relations among consumption values of the consumers relevant to mobile banking services, adoption to mobile banking and mobile trust. For this purpose, we propose a structural model which demonstrates the relations between consumption values, mobile banking adoption and mobile trust of consumers. The data had been collected through survey applied on individuals who are using mobile banking services in Turkey. It had been reached to 175 participants in total. The obtained data had been analyzed by partial least squares path analysis (PLS-SEM) which is known as second generation structural equation modeling. As the result of the research, it had been concluded that the conditional value, emotional value and epistemic value –from among consumption values- have positive and statistically meaningful effect on adoption to mobile banking, and that the social value has negative and statistically meaningful effect. It is being observed that there is positive and statistically meaningful relation in between trust relevant to mobile banking and conditional value, emotional value and functional value. And there are positive and statistically meaningful relations on trust relevant to mobile banking and adoption to mobile banking.</p>
9

Kwon, Gee Jung. "The Value Relevance of Corporate Social Responsibility: Focusing on Donation Expenditure." Asian Social Science 12, no. 8 (July 7, 2016): 1. http://dx.doi.org/10.5539/ass.v12n8p1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
<p>This paper investigates the value relevance of corporate social responsibility. In particular, the paper examines the time lag value relevance of donation expenditure on firm value over the period of 2000–2014 in the listed Korean stock markets. Through empirical analysis, the paper provides evidence that donation expenditure has a significant effect on future firm value.</p><p>The empirical results of this paper support research hypothesis 1 (donation expenses have an effect on firm value) and research hypothesis 2 (donation expenses have a time lag effect on firms’ future value). In particular, the results show that donation expenses have an effect on firm value and the time lag interval is from two to 12 years. These results suggest that donation expenses can be regarded as assets that have potential for firms’ future cash flows.</p><p>The empirical evidence of this paper suggests there should be debate on whether the accounting treatment of donations should be changed in Korean accounting practices. </p>
10

Marsman, Maarten, and Eric-Jan Wagenmakers. "Three Insights from a Bayesian Interpretation of the One-Sided P Value." Educational and Psychological Measurement 77, no. 3 (October 5, 2016): 529–39. http://dx.doi.org/10.1177/0013164416669201.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided P value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The connection between the one-sided P value and posterior probability mass reveals three insights: (1) P values can be interpreted as Bayesian tests of direction, to be used only when the null hypothesis is known from the outset to be false; (2) as a measure of evidence, P values are biased against a point null hypothesis; and (3) with N fixed and effect size variable, there is an approximately linear relation between P values and Bayesian point null hypothesis tests.

Дисертації з теми "Empirical p-Value":

1

Pluntz, Matthieu. "Sélection de variables en grande dimension par le Lasso et tests statistiques - application à la pharmacovigilance." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR002.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
La sélection de variables dans une régression de grande dimension est un problème classique dans l'exploitation de données de santé, où l'on cherche à identifier un nombre limité de facteurs associés à un évènement parmi un grand nombre de variables candidates : facteurs génétiques, expositions environnementales ou médicamenteuses.La régression Lasso (Tibshirani, 1996) fournit une suite de modèles parcimonieux où les variables apparaissent les unes après les autres suivant la valeur du paramètre de régularisation. Elle doit s'accompagner d'une procédure du choix de ce paramètre et donc du modèle associé. Nous proposons ici des procédures de sélection d'un des modèles du chemin du Lasso qui font partie, ou s'inspirent, du paradigme des tests statistiques. De la sorte, nous cherchons à contrôler le risque de sélection d'au moins un faux positif (Family-Wise Error Rate, FWER), au contraire de la plupart des méthodes existantes de post-traitement du Lasso qui acceptent plus facilement des faux positifs.Notre première proposition est une généralisation du critère d'information d'Akaike (AIC) que nous appelons AIC étendu (EAIC). La log-vraisemblance du modèle considéré y est pénalisée par son nombre de paramètres affecté d'un poids qui est fonction du nombre total de variables candidates et du niveau visé de FWER, mais pas du nombre d'observations. Nous obtenons cette fonction en rapprochant la comparaison de critères d'information de sous-modèles emboîtés d'une régression en grande dimension, de tests multiples du rapport de vraisemblance sur lesquels nous démontrons un résultat asymptotique.Notre deuxième proposition est un test de la significativité d'une variable apparaissant sur le chemin du Lasso. Son hypothèse nulle dépend d'un ensemble A de variables déjà sélectionnées et énonce qu'il contient toutes les variables actives. Nous cherchons à prendre comme statistique de test la valeur du paramètre de régularisation à partir de laquelle une première variable en dehors de A est sélectionnée par le Lasso. Ce choix se heurte au fait que l'hypothèse nulle n'est pas assez spécifiée pour définir la loi de cette statistique et donc sa p-value. Nous résolvons cela en lui substituant sa p-value conditionnelle, définie conditionnellement aux coefficients estimés du modèle non pénalisé restreint à A. Nous estimons celle-ci par un algorithme que nous appelons simulation-calibration, où des vecteurs réponses sont simulés puis calibrés sur les coefficients estimés du vecteur réponse observé. Nous adaptons de façon heuristique la calibration au cas des modèles linéaires généralisés (binaire et de Poisson) dans lesquels elle est une procédure itérative et stochastique. Nous prouvons que l'utilisation du test permet de contrôler le risque de sélection d'un faux positif dans les modèles linéaires, à la fois lorsque l'hypothèse nulle est vérifiée mais aussi, sous une condition de corrélation, lorsque A ne contient pas toutes les variables actives.Nous mesurons les performances des deux procédures par des études de simulations extensives, portant à la fois sur la sélection éventuelle d'une variable sous l'hypothèse nulle (ou son équivalent pour l'EAIC) et sur la procédure globale de sélection d'un modèle. Nous observons que nos propositions se comparent de façon satisfaisante à leurs équivalents les plus proches déjà existants, BIC et ses versions étendues pour l'EAIC et le test de covariance de Lockhart et al. (2014) pour le test par simulation-calibration. 
Nous illustrons également les deux procédures dans la détection d'expositions médicamenteuses associées aux pathologies hépatiques (drug-induced liver injuries, DILI) dans la base nationale de pharmacovigilance (BNPV) en mesurant leurs performances grâce à l'ensemble de référence DILIrank d'associations connues
Variable selection in high-dimensional regressions is a classic problem in health data analysis. It aims to identify a limited number of factors associated with a given health event among a large number of candidate variables such as genetic factors or environmental or drug exposures.The Lasso regression (Tibshirani, 1996) provides a series of sparse models where variables appear one after another depending on the regularization parameter's value. It requires a procedure for choosing this parameter and thus the associated model. In this thesis, we propose procedures for selecting one of the models of the Lasso path, which belong to or are inspired by the statistical testing paradigm. Thus, we aim to control the risk of selecting at least one false positive (Family-Wise Error Rate, FWER) unlike most existing post-processing methods of the Lasso, which accept false positives more easily.Our first proposal is a generalization of the Akaike Information Criterion (AIC) which we call the Extended AIC (EAIC). We penalize the log-likelihood of the model under consideration by its number of parameters weighted by a function of the total number of candidate variables and the targeted level of FWER but not the number of observations. We obtain this function by observing the relationship between comparing the information criteria of nested sub-models of a high-dimensional regression, and performing multiple likelihood ratio test, about which we prove an asymptotic property.Our second proposal is a test of the significance of a variable appearing on the Lasso path. Its null hypothesis depends on a set A of already selected variables and states that it contains all the active variables. As the test statistic, we aim to use the regularization parameter value from which a first variable outside A is selected by Lasso. This choice faces the fact that the null hypothesis is not specific enough to define the distribution of this statistic and thus its p-value. We solve this by replacing the statistic with its conditional p-value, which we define conditional on the non-penalized estimated coefficients of the model restricted to A. We estimate the conditional p-value with an algorithm that we call simulation-calibration, where we simulate outcome vectors and then calibrate them on the observed outcome‘s estimated coefficients. We adapt the calibration heuristically to the case of generalized linear models (binary and Poisson) in which it turns into an iterative and stochastic procedure. We prove that using our test controls the risk of selecting a false positive in linear models, both when the null hypothesis is verified and, under a correlation condition, when the set A does not contain all active variables.We evaluate the performance of both procedures through extensive simulation studies, which cover both the potential selection of a variable under the null hypothesis (or its equivalent for EAIC) and on the overall model selection procedure. We observe that our proposals compare well to their closest existing counterparts, the BIC and its extended versions for the EAIC, and Lockhart et al.'s (2014) covariance test for the simulation-calibration test. We also illustrate both procedures in the detection of exposures associated with drug-induced liver injuries (DILI) in the French national pharmacovigilance database (BNPV) by measuring their performance using the DILIrank reference set of known associations
2

Valentinis, Edi. "Variable interest consolidation (FASB,FIN 46/R) : valve relevance and empirical consequences on financial reporting." Doctoral thesis, Università degli studi di Trieste, 2008. http://hdl.handle.net/10077/3094.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
2007/2008
FASB introduction of FIN 46/R variable interest consolidation model proved revolutionary as it ties up the accounting to the economic/financial frameworks and the judicial one. Legal structures and agreements among stakeholders of entities, creating net assets’ variability, have from now on to be compared with expected losses and expected returns distribution, prior to identify which stakeholder will need to consolidate pursuant this Interpretation. As a result, return variability gains weight in the definition of variable interest entity with consequences still to be completely digested by practitioners and reporting enterprises. Because of the implementation of this Interpretation, consolidation by a party that absorbs most of the entity expected losses will have precedence even over stock-ownership’s control by the parent company (voting rights driven). The revolution though is only meant for a wide, yet selected, subset of entities' classes being securitisations, life and health insurances and governmental organisations aimed for profit, left outside the scope of this Interpretation. Consolidation through variable interest model is the result of four major steps. First alone is the definition of entity, being any legal structure to conduct activities and hold assets. Second, the identification of the variable interests in it, deriving from recognition of the aggregate which fair value changes with changes in fair value of net assets, exclusive of variable interests. These changes in fair value are considered regardless of embedded voting rights; hence, mezzanine finance, preferred stock and any hybrid equity instrument in general need to be detailed in their features prior to taking further decision. Third, the estimate of expected losses and residual returns whose value relevance has been given vast insight in this paper. Fourth and last step, the recognition of the primary beneficiary, when it exists, which is the party that absorbs the greatest share of expected losses and/or that benefits the most from expected residual returns and ultimately, the party that will consolidate the variable interest in object. Throughout the variable interest consolidation process, the concept of ‘equity at risk’ is introduced by FASB to define which is the effective portion of equity that absorbs variability created by net assets of the variable interest entity. Notwithstanding an introduced sufficiency test, aimed at deducting from US GAAP equity, all components that are not legal obligations to capitalise the entity, still difficulties exist. This is due to a series of exclusions namely; legal equity is to be deducted of fees, loans or guarantees thereof, shares issued in exchange of subordinated interests in other VIEs shall be subtracted as well from equity at risk, in the end also investments to be considered non significant shall be deducted. In this regard, valuations are either explicitly or implicitly to be done at fair value, hence book values need to make room for financial analysis giving in this respect value relevance to the Interpretation. The ‘equity at risk’ concept is the result of deductions that run through both sides of the balance sheet. Particular judgment shall be used in evaluating guarantees and other off-balance sheet obligations. This paper takes also in consideration the test proposed by FASB for ‘non significant investments’ proposing a refined method to reduce variability in interpretative judgment by the reporting entity. 
Furthermore, FASB identifies a new category of VIEs: variable interests in specified subset of assets of a VIE (i.e. a guarantee) which can be treated as distinctive VIEs by FASB only if the fair value of the same assets is greater than 50% of the whole fair value of the entity. If so happens, then equity at risk is to be deducted accordingly and expected losses/residual returns (EXLS/EXRR) of this subset of assets is not considered for sake of determining the primary beneficiary. The distinct VIE, which in accounting goes also under the name of Silo, will have to be treated separately as another VIE. From this analysis on assets and financial structure, which is derived from CON 6, FASB correctly deconstructs the accountancy legacy notion of control by segregating the decision making ability on the VIE from the variability absorption rights and obligations. The former, given by the financial decisions on VIE’s financial structure and by investment on net assets, the latter dictated by obligation to fund losses and to receive residual returns, i.e. by assigning the right to receive future residual returns and the obligation to make future capital contributions. Under a valuation viewpoint, assets and liabilities of newly consolidated VIE are measured at fair value while the ones already pertaining to a primary beneficiary, which is already a parent, remain reported at carrying value being already in the consolidated balance sheet of the controlling company. FIN 46/R in this way allows goodwill to be recognised for acquisitions of VIEs, which constitute businesses for use in this Interpretation. If the consideration paid for the VIE interest (carrying value plus premium/discount) is instead lower than the fair value of its net assets at consolidation, then a decrease in value of the newly consolidated assets shall be reported. Exception is made by cash & marketable securities, tax assets, post retirement plans and the likes. In this regard VIEs, which are not businesses will originate extraordinary gains or losses accordingly, in case of extraordinary gains, the value of the newly acquired assets is stepped-up pro quota. While FIN 46/R valuation principles of expected losses, expected residual returns and definition of balance sheets arising from VIE consolidation, resides on fair values, practitioners and reporting enterprises alike base their forecast from use of private information. This in turn, gives birth to entity-specific values, which take into account private information comprising of entity plans and current competitive strategy, which are a function of present industry positioning. Part of the process in determining EXLS/EXRR and the existence or not of a primary beneficiary, in line with the variable interest consolidation model, is to go through a profit variability analysis to be done through discounted cash flow models. To try to shed some more light on this regard we have first refreshed the mathematics of series of random variables with the objective to estimate VIEs’ expected cash flows of income. VIEs are generally modelled as a random variable with statistic mean different from statistic mode, a fact omitted in some passages of FIN 46/R exposition. Subsequently we have underlined that the variability of returns is directly related with the interval of confidence set for distribution functions representing random variables when computing the reporting entity forecast of expected variability. 
The potential deadlock could be widening when different interest holders are implementing different modelling of the reporting entity which yield to different results, but still acceptable under the Interpretation prescriptions. FASB introduction of non-previous US GAAP measures like EXLS/EXRR are, as we believe, in need to be backed up by a more robust theoretical framework. To do so, we needed to characterise the choice of the discount rate. In this framework, we have once again taken the theoretical basis of cost of capital, highlighting the equivalence of the results of other methods; including pros and cons of the utility functions and certainty equivalence method and the risk adjusted probability method. We have then given evidence on why FASB should use the cost of capital method as the discount rate to compute income variability together with income streams. In fact, by using the cost of capital method, and the WACC deriving from CAPM, all financial risk is embedded in the discount rate leaving the reporting enterprise free to express in the books the operational risks known or of most suitable estimation. Nowadays marginal cost of debt and market value of equity are used in common practice, according with CAPM theory, and have their use extended to private businesses. The cost of capital for private enterprises make use of sensitivity correlation coefficient of the enterprise return over the market return (beta coefficient) are of difficult estimate for private entities although betas can be computed in a number of ways using assumptions which are proper of the enterprise and its industry peers. To close the chapter related to valuation, finally we have focused on how these methodologies are being implemented by corporate America realising that the fears for value relevancy and hardship in tailoring the application to the single entities is a shared feeling and still a process far from crystallisation. In particular, FASB does not impose a clear conversion from book values to either fair values or value-in-use ones. It neither rules out the use of different valuation methods, if not for particular aspects treated within its FSP 46/R-S, in the exercise of computation of expected variability, which we have to recognise has not been proper of the accountancy function until lately. This thesis proposes an algorithm that goes in detail in the application of FIN 46/R for a reporting enterprise taking into account all possible interrelations among interest holders and distinct interest in subset of assets. The algorithm brings to light the weaknesses in application of the Interpretation caused by potential interrelations between expected losses assessment and variable interests in specified assets, wherever the fair value of these is more than 50% of net assets, i.e. distinctive VIEs. The algorithm, despite being in line with FIN 46/R prescriptions, does not cope with situations of cross default of related parties’ investors in the same VIE. However while the application of a cause and effect model is not always possible we think increased consolidation constraints would highly reduce these possibilities. In the process for determining if the reporting entity is a VIE, FASB develops also the ‘at risk’ test, highlighting once again the relevant weaknesses of the concepts of ‘previous ability to finance operations without subordinate financial support’ and ‘comparability with other similar entities which autonomously finance themselves without subordinated support’. 
We believe that the "at risk test" should only be a numeric test to iron out misinterpretations and gain relevance in consistency. FASB introduction of an exclusion sufficiency test to exclude variable interests for being classified as VIEs leaves, in our opinion, some uncertainties to the ‘participation in VIE design’ concept or to the ‘non significant interest’ one. This test, we believe, ought not to be a determinant factor, the level of polarisation of risk/reward of the consideration should instead be the sole paramount predictor for exclusion. As far as the conditions used to determine if the entity has sufficient equity to sustain its operations without financial support, the condition sine qua non of the minimum 10% of equity value over total assets, coupled with the triad of valuation methods proposed by FASB, should have been more stringent and concise in its ruling. In fact, these methods leave again interpretative flexibility about the inputs used to demonstrate sufficiency. From a thorough profit variability analysis the thesis compares how the responsibilities and efforts to cope with FIN 46/R requirements are distributed among VIE stakeholders, namely auditors, reporting enterprises, standard setters and regulators. This has been done comparing the use of CON 7 approach to the traditional cost of capital approach used in corporate finance. Furthermore, we have put in evidence that by implementing FIN 46/R VIEs entities tend naturally to overstate income variability valuations, being income streams discounted at Rf, heightening capital requirements. We would like to close by making a forecast on long-term developments that we envisage this Interpretation will bring forward, by starting to think on which are the VIEs stakeholders that are bound to be the most disadvantaged. This is again the class of primary beneficiaries of smaller sizes, which will have either to recourse to more lending to cover for capitalisation requirements and increased financial leverage, or face financial distress. Both cases are precursors to industry consolidation and forebears of globalisation, while the class most favoured will be the banking industry.
RIASSUNTO (ITALIAN): L’introduzione del FIN 46/R (FASB Interpretazione N. 46/R) da parte del FASB (Financial Accounting and Standards Board) si è dimostrata rivoluzionaria grazie al nuovo modello di consolidamento che si interpone tra il contesto economico finanziario e quello legale delle entità oggetto di questa interpretazione. Forma legale e relativi accordi tra stakeholders delle entità, definite come qualsiasi forma legale di impresa e veicolo finanziario, devono d’ora in poi essere confrontati con un’analisi della variabilità attesa degli utili prima di identificare quale stakeholder debba consolidare l’entità in oggetto (beneficiario primario). Di conseguenza il concetto di variabilità (varianza) dei redditi acquista un peso determinante nella definizione di variable interest entity (VIE) con conseguenze che devono essere ancora completamente digerite da professionisti e imprese che devono adeguarsi a questa interpretazione contabile. In virtù della stessa il consolidamento da parte del portatore di interessi che assorbe la maggioranza delle perdite attese ora avrà la precedenza perfino sull’azionista o sulla controllante che dovesse detenere la maggioranza assoluta dei diritti di voto. Questa rivoluzione è stata per ora intesa per un vasto, ma selezionato, insieme di classi di imprese, essendo ad esempio SPV di assicurazioni vita e veicoli finanziari di enti governativi a scopo di lucro lasciati (per ora) fuori dall’ambito di questa interpretazione. Il consolidamento attraverso il modello variable interest (VI) è il risultato di quattro passi. Innanzitutto, la definizione di entità comprendente qualsiasi forma legale intesa a compiere un’attività economica o a possedere degli attivi. Secondariamente l’identificazione dei cosiddetti interessi variabili nell’entità precedentemente definita; questi VI derivano dall’identificazione dell’aggregato dell’entità in analisi il cui fair value muta di valore al variare del valore dei net assets dell’entità al netto degli stessi interessi variabili. Le variazioni del fair value di questi asset sono considerate indipendentemente dai diritti di voto a loro associati, quindi forme ibride di capitale azionario quali azioni privilegiate, mezzanini e altri strumenti affini devono avere chiaramente dettagliate le loro caratteristiche prima di poter analizzare il loro comportamento e poter prendere una decisione. Terzo punto, la stima della variabilità attesa degli utili (perdite potenziali attese e utili residui attesi) della VIE la cui rilevanza ai fini della teoria del valore è stata data ampia trattazione in questa tesi. Quarto e ultimo passo, l’identificazione del beneficiario primario, quando questo esista, definito come la parte che assorbe la porzione maggiore di perdite e/o beneficia maggiormente degli utili residui e che, in ultima analisi, deve consolidare l’entità a interesse variabile in oggetto. Altrimenti la VIE è considerata tale da distribuire sufficientemente il rischio tra gli stakeholder. Attraverso il processo di consolidamento il concetto di ‘capitale azionario a rischio’ (Equity at risk) è introdotto da FASB per definire la frazione del capitale azionario che assorbe effettivamente la variabilità creata dal capitale investito netto (Net Assets) della VIE. Nonostante un apposito test (condizione sufficiente) sia stato proposto da FASB alcune difficoltà interpretative sono ancora presenti. 
Queste sono dovute ad una serie di deduzioni dal capitale legale che deve essere dedotto di pagamenti per servizi, prestiti o garanzie degli stessi. Azioni emesse in cambio di interessi subordinati in altre VIE dovranno altresì essere dedotti dal totale dell’Equity at Risk, così pure per gli investimenti di valore cosiddetto trascurabile (non-significant). Tutte le valutazioni al riguardo devono essere fatte al fair value, quindi i valori contabili dovranno sempre fare spazio all’analisi finanziaria dando rilevanza ai fini del valore a questa interpretazione. Il concetto di ‘equity at risk’ è il risultato di deduzioni prese da entrambi i lati dello stato patrimoniale. Particolare attenzione è richiesta nella valutazione delle garanzie e altri obblighi fuori bilancio. Questa tesi prende in considerazione anche il test proposto da FASB per valutare gli investimenti trascurabili (non-significant) proponendone uno alternativo che, secondo il nostro giudizio, ne riduce la varianza interpretativa in ambito di redazione del bilancio. Da questa analisi sugli asset e sulla struttura finanziaria, in accordo con i concetti CON 6, FASB correttamente smonta la nozione di controllo ereditata dall’attuale contabilità separando la capacità di prendere decisioni di gestione della VIE da obblighi e diritti di assorbimento della variabilità dei risultati economici della stessa. La prima è data dalle decisioni sulla struttura finanziaria e da quelle in merito agli investimenti nel capitale investito, la seconda dettata dagli obblighi di ricapitalizzare le perdite e di ricevere utili residui. All’atto del consolidamento gli elementi di stato patrimoniale della VIE vengono misurati al fair value mentre quelli che già sono di pertinenza del beneficiario primario con precedente ruolo di controllante (Parent Company) rimangono iscritte a bilancio al valore di carico essendo già parte del bilancio. In questo modo FIN 46/R permette il riconoscimento di un avviamento (goodwill) all’acquisizione di una VIE che si possa considerare come un’impresa ai fini di questa interpretazione. Se invece il prezzo corrisposto per l’interesse acquisito (valore di carico +/- premium/discount) è inferiore al fair value dei suoi net assets per effetto del consolidamento si dovrà registrare una diminuzione di valore degli asset appena consolidati. Eccezion fatta per cassa, crediti di imposta, fondi TFR e simili. In questo caso VIE che non sono assimilabili ad imprese origineranno conseguentemente una perdita (o utile) straordinaria, in caso di utile straordinario il valore del nuovo asset acquisito è aumentato pro-quota. Mente i principi di valutazione del FIN 46/R che riguardano la definizione di valori di bilancio originatisi dal consolidamento della VIE, risiedono interamente nel fair value, a professionisti e imprese è richiesto invece di basare le loro previsioni di variabilità degli utili su informazioni private, che quindi danno origine a valori di tipo entity-specific, comprensive dei piani aziendali in accordo con la strategia industriale adottata, che sono funzione dell’attuale posizionamento competitivo di settore. Questo è causa di problemi legati alla divulgazione di informazioni e indirettamente alla tracciabilità dei risultati. Parte del processo utilizzato per l’applicazione del VIE model passa per la stima della variabilità degli utili (Expected Lossess, Expected Residual Returns, EXLS/EXRR) e per la verifica dell’esistenza o meno del beneficiario primario. 
La stima è il frutto di un’analisi di variabilità (varianza) dei redditi attraverso l’uso di DCF (discounted cash flow models). Per fare chiarezza su questo punto abbiamo prima rivisitato alcuni aspetti delle serie di variabili aleatorie con l’obiettivo di caratterizzare il contesto teorico a corredo della stima del reddito/utile atteso della VIE. VIE possono essere generalmente modellizzate come una variabile aleatoria con una media statistica in generale diversa dalla moda statistica, un fatto omesso in alcuni passaggi dell’esposizione del FIN 46/R che può portare ad incertezze in fase implementativa dell’interpretazione. Successivamente abbiamo sottolineato che la variabilità dei redditi è direttamente connessa all’intervallo di confidenza fissato per le funzioni di distribuzione rappresentanti variabili aleatorie durante il calcolo della variabilità attesa della VIE. Il potenziale impasse si potrebbe allargare qualora differenti stakeholders dovessero usare un modello di stima diverso della VIE che potrebbe portare a risultati, seppur diversi, ugualmente accettabili secondo le prescrizioni di questa interpretazione. L’introduzione di definizioni quali EXLS/EXRR, precedentemente non parte dei principi US GAAP, crediamo necessitino di una più robusta trattazione teorica. Per fare questo abbiamo caratterizzato anche la scelta del saggio di sconto che FASB indica come il tasso privo di rischio. In questo contesto abbiamo preso come base la teoria del costo del capitale per poi evidenziare i punti deboli e quelli di forza di alcuni metodi quali l’equivalente certo, il metodo del costo del capitale e quello della probabilità corretta per il rischio (risk adjusted probability). Abbiamo quindi dato evidenza alle ragioni per cui FASB dovrebbe usare il metodo del costo del capitale che è dato dal tasso di sconto impiegato per calcolare la variabilità del reddito derivante dall’attualizzazione dei flussi di reddito. Infatti, usando il metodo del costo del capitale, il WACC derivante dall’implementazione del CAPM sconta tutto il rischio finanziario nel tasso, lasciando all’impresa libertà di esprimere nei libri contabili, e quindi nei flussi di reddito corrispondenti, il rischio operativo che è invece affine all’attività di impresa e reporting. Al giorno d’oggi il costo marginale del debito e il valore di mercato del capitale azionario sono concetti consolidati nella pratica contabile e possono essere estesi a imprese private. Il costo del capitale per queste ultime deriva dall’uso del coefficiente di correlazione degli utili d’impresa su quelli di mercato (coefficiente beta) di difficile stima per aziende private, sebbene questo possa essere ricavato in più di un modo, implementando ipotesi che sono proprie del contesto dove l’impresa e i suoi concorrenti operano. Abbiamo riassunto i modelli emergenti dal modo come queste metodologie vengano correntemente impiegate dalle imprese americane, realizzando che i sentimenti connessi all’adattamento dell’interpretazione FIN 46/R alle caratteristiche proprie dell’impresa siano di timore e incertezza dati da una notevole difficoltà di applicazione compresa quella di estrapolare un sufficiente grado di rilevanza ai fini del valore dai propri eventi contabili. La situazione é prodroma di processo ancora lontano dalla cristallizzazione. In particolare FASB non impone una chiara conversione dei valori contabili in fair value oppure in value in use. 
Nemmeno sono esclusi metodi alternativi di valutazione a quelli menzionati di sopra se non fosse per alcuni aspetti trattati dall’FSP 46/R-S nell’esercizio di determinare la variabilità attesa degli utili che dobbiamo riconoscere non è stata propria della contabilità fino a poco tempo fa. Per entrare in dettaglio nel processo applicativo di identificazione di una VIE questa tesi propone un algoritmo che entra in dettaglio nell’applicazione del FIN 46/R da parte di un’impresa e tiene in considerazione tutte le possibili interrelazioni tra portatori di interessi nella VIE e/o solamente in specifici asset della stessa. L’algorimo pone in luce le debolezze sul piano applicativo causate da possibili interrelazioni tra la stima delle perdite attese e interessi in asset specifici, laddove il fair value di questi sia superiore al 50% del capitale investito netto. L’algoritmo, nonostante sia in accordo con le prescrizioni dettate dal FIN 46/R, essendo di tipo causa-effetto non affronta situazioni di cross-default di parti correlate con investimenti nella stessa VIE. Benchè l’applicazione di un modello causa-effetto non sia sempre possibile, pensiamo che un aumento dei vincoli che portano al consolidamento riduca ampiamente queste possibilità di difficile modellizzazione. Nel processo per la determinazione se l’impresa sia o meno una VIE, FASB sviluppa un test ‘at-risk’ che contiene a nostro avviso alcuni passi nella propria trattazione di relativa debolezza quali ‘precedente abilita a finanziare le attività senza supporto finanziario subordinato’ e ‘ confrontabilità con simili entità che autonomamente si finanziano senza supporto finanziario subordinato’. Crediamo che questo test ‘at-risk’ dovrebbe essere solamente un test di tipo numerico per appianare qualsiasi fonte di erronea interpretazione ed incrementarne quindi la rilevanza e consistenza. L’introduzione di FASB di una condizione sufficiente da applicare ad una entità per la sua esclusione dalla categoria delle VIE lascia a nostro avviso alcune incertezze nell’interpretazione del concetto di ‘partecipazione nella definizione della VIE’ o in quella di ‘interesse trascurabile’. Questo test crediamo non debba essere trattato come un fattore determinante; la polarizzazione tra rischio e rendimento invece crediamo debba essere il fattore primario per l’esclusione o meno. Per quanto riguarda le condizioni in uso per determinare se l’entità ha sufficiente capitale per sostenere le proprie attività senza sostegno finanziario, conditio sine qua non del 10% di equity sul capitale investito netto, accoppiata ad una triade di metodi valutativi sempre proposti da FASB, pensiamo avesse dovuto essere maggiormente concisa e vincolante nelle sue pronunciazioni. Infatti siamo dell’opinione che questi metodi lascino troppa flessibilità interpretativa circa l’uso delle ipotesi concesse per dimostrare la sufficienza del capitale investito. Questi temi sono stati trattati dal punto di vista operativo con una serie di esempi creati ad hoc per illustrare i passi più significativi, dal punto di vista finanziario, nell’applicazione del VIE model e sollevare potenziali criticità proponendone una loro soluzione. Infine, questa tesi confronta come le responsabilità e gli sforzi nell’affrontare le disposizioni del FIN 46/R siano distribuite tra gli stakeholders di una VIE, cioè imprese che redigono il bilancio, parti correlate, revisori, standard setters ed enti di controllo. 
Abbiamo messo in evidenza come l’implementazione del FIN 46/R spinga naturalmente ad una sovrastima della variabilità stimata degli utili, innalzando i requisiti di capitalizzazione in accordo con questo modello di rischio/rendimento. Questo a svantaggio di beneficiari primari di modeste capitalizzazioni, che dovranno affrontare sia il rischio di essere acquisiti che quello di un maggiore ricorso al debito. Le classi più avvantaggiate saranno invece il settore del credito, seppure lo stesso sarà portato internamente verso il consolidamento.
XXI Ciclo
1972
3

Werning, Jan P. [Verfasser], Stefan [Gutachter] Spinler, and Carl Marcus [Gutachter] Wallenburg. "The transition from linear towards circular economy business models : theoretical and empirical study of boundary conditions and other effects on the value chain / Jan P. Werning ; Gutachter: Stefan Spinler, Carl Marcus Wallenburg." Vallendar : WHU - Otto Beisheim School of Management, 2021. http://d-nb.info/1231792108/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Callegaro, Ilaria <1997&gt. "How corporate hedging affects firm value: Empirical evidence from European experience." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/20202.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Risk is linked to a condition of partial lack of knowledge and information in the time to come. This characteristic makes it challenging for economic agents to provide an accurate measure of the probability distribution concerning a particular event and the magnitude of consequences related to its occurrence. Although risk can have very negative influences, it is also the reason why businesses are compensated with higher returns. Therefore, it should not be perceived only in a downward perspective, since it can offer to companies the opportunity to create additional value, increase competitive power and achieve targets. In particular, the implementation of appropriate risk management programs allows enterprises to proactively manage risky forces, while still complying with their short and long-term objectives. The purpose of the thesis is investigating the rational underlying the decision of implementing risk management programs and their effect on firm’s value. The analysis will start with a review of the theoretical and methodological perspectives, providing rationales on risk management and corporate hedging. In the theoretical assumption of perfect capital markets, the risk management discipline is considered to be irrelevant for the creation of firm value. Without taxes, agency costs, asymmetric information, costly external sources of finance, direct and indirect costs of bankruptcy, a company is equally likely to perform well regardless its financing choices and risk management decisions. In practice, the occurrence of risk along with the presence of financial market imperfections, makes companies behave in a risk-adverse manner. Considering the increasing centrality that risk has assumed on the decision-making process, the discipline of risk management has become a meaningful element incorporated into the strategy of the firm. Generally, risk management is considered to be primarily a defensive move, thereby the decision of companies to hedge their position is associated with the activity of holding financial derivatives and insurance policies. Actually, this practice represents just one side of the coin. Risk management has to be analyzed more broadly in order to include all the facets and actions taken by firms to deal with uncertainty and it has to consider also the managerial capability to exploit opportunities and the ability to gain competitive edge. Since businesses are vulnerable to a comprehensive risk package, theoretical literature operates an important classification on the basis of the risk nature. More precisely, it clusters risk into two components: the market and corporate risk. While market risk has an exogenous nature and it arises from macroeconomic factors, corporate risk has an endogenous nature since it is related to specific characteristics that are distinctive and specific for the single business. Depending on the nature of risk source, businesses handle risk and uncertainty by adopting financial or operational hedging programs. While financial derivatives and insurance contracts have been preferred at the firm level for handling respectively market and insurable risk, operational hedging techniques manage the risk associated with the specific characteristics of the business. Each company that is involved in risky activities need to estimate the extent of risk to which it is exposed and the impact of its endogenous and exogenous component. 
In order to understand the reason why some firms hedge their risk exposure and how they do so, an empirical analysis based on the European experience will be conducted. The purpose of the analysis consists in verifying whether a particular correlation or pattern among the set of companies operating in the Euro Area actually exist. Furthermore, demographic and financial information will be examined in order to carry out a more detailed and in-depth analysis of the object of study.
5

Rahal, Abbas. "Bayesian Methods Under Unknown Prior Distributions with Applications to The Analysis of Gene Expression Data." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42408.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The local false discovery rate (LFDR) is one of many existing statistical methods that analyze multiple hypothesis testing. As a Bayesian quantity, the LFDR is based on the prior probability of the null hypothesis and a mixture distribution of null and non-null hypothesis. In practice, the LFDR is unknown and needs to be estimated. The empirical Bayes approach can be used to estimate that mixture distribution. Empirical Bayes does not require complete information about the prior and hyper prior distributions as in hierarchical Bayes. When we do not have enough information at the prior level, and instead of placing a distribution at the hyper prior level in the hierarchical Bayes model, empirical Bayes estimates the prior parameters using the data via, often, the marginal distribution. In this research, we developed new Bayesian methods under unknown prior distribution. A set of adequate prior distributions maybe defined using Bayesian model checking by setting a threshold on the posterior predictive p-value, prior predictive p-value, calibrated p-value, Bayes factor, or integrated likelihood. We derive a set of adequate posterior distributions from that set. In order to obtain a single posterior distribution instead of a set of adequate posterior distributions, we used a blended distribution, which minimizes the relative entropy of a set of adequate prior (or posterior) distributions to a "benchmark" prior (or posterior) distribution. We present two approaches to generate a blended posterior distribution, namely, updating-before-blending and blending-before-updating. The blended posterior distribution can be used to estimate the LFDR by considering the nonlocal false discovery rate as a benchmark and the different LFDR estimators as an adequate set. The likelihood ratio can often be misleading in multiple testing, unless it is supplemented by adjusted p-values or posterior probabilities based on sufficiently strong prior distributions. In case of unknown prior distributions, they can be estimated by empirical Bayes methods or blended distributions. We propose a general framework for applying the laws of likelihood to problems involving multiple hypotheses by bringing together multiple statistical models. We have applied the proposed framework to data sets from genomics, COVID-19 and other data.
6

De, Luca Roberto. "Brand value, marketing investments and capital structure. An empirical analysis." Doctoral thesis, Universita degli studi di Salerno, 2011. http://hdl.handle.net/10556/263.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
2009 - 2010
An ever growing number of businesses is becoming aware that one of the highest value business assets in the current competitive context is the brand associated with a company’s products or services. In a world which is becoming ever more complex and turbulent, individuals and businesses today find themselves having to confront a wider range of choices than ever before, while having less and less time in which to consider their buying decisions. Consequently, the ability of a brand to simplify consumer choices, reduce perceived risk and create expectations has significant value. The aim of the present work is to take an in depth look at some aspects relating to brands, placing particular emphasis on brand equity, on the process relating to brand valuation, and on the influence of brand equity on corporate finance, normally considered to be unrelated to factors not themselves closely connected to financial logics. The first part of the research, of a theoretical type, is dedicated to a wide literature review, with the aim of analysing the contributions on the subject already present in the academic world, providing, in addition, some innovative ideas, especially regarding: - The evolution of brand and brand equity and the role of the aforementioned over the years, moving from a customer-based viewpoint to a stakeholder-based perspective. - The problems connected to brand valuation and the most used techniques, making an attempt, moreover, to offer an innovative viewpoint and enrichment of the methodologies and techniques used up until now for brand valuation. - Implementation of the first borderline models which connect the brand equity aspect with finance and accounting profiles. - The influence of brand and brand-building investments on corporate finance, including aspects such as IPOs, the value of shares on the stock market, company value etc. The second part of the research paper, detailed in the fourth and final chapter, is reserved for empirical analysis with the purpose of evaluating whether or not some of the hypotheses theoretically outlined previously find a basis in reality. Through a prevalently quantitative investigation an attempt will be made to demonstrate the existence of a link between marketing spend, especially investments aimed towards brand building, brand value expressed in monetary terms and business capital structure. The examination of this correlation, which has not yet been explored in existing literature, constitutes an important plus, and hopefully is an element of great originality. In addition, if the output of the analysis should prove to be statistically significant, it will be possible to attribute ulterior scientific relevance to the research activities carried out. [a cura dell'Autore]
IX n.s.
7

NATALE, ELISA. "La value relevance: aspetti teorici e verifiche empiriche nel settore bancario europeo." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/77102.

Abstract:
Identifying the relationships between financial statement values and the information readers draw from them has always attracted great interest in accounting research. According to accounting standards, the purpose of financial statements is to provide investors with information useful for their economic decisions. Accounting information is not considered value relevant if no relationship exists between financial statement values and firm value. This thesis examines the models that have clarified the link between accounting information and firm value. In particular, it analyses the main models proposed over time by various authors, describing the principal empirical studies carried out, the problems that arose, and the solutions proposed in the literature to address them. Finally, several empirical tests are examined in depth, with particular reference to the European banking sector.
8

SOLERIO, CHIARA. "LE VERE ESPERIENZE DI MARCA. UNA RICERCA EMPIRICA." Doctoral thesis, Università Cattolica del Sacro Cuore, 2015. http://hdl.handle.net/10280/6048.

Abstract:
This study examines what makes a brand experience genuinely such. The relevance of the topic has two dimensions. On the one hand, there are several calls in the marketing literature for research that develops a greater understanding of brands (Ballantyne and Aitken, 2007; Brodie et al., 2006; Jevons, 2007), especially through new perspectives. On the other hand, the market shows a complex phenomenon: consumers increasingly want to be engaged by brands and companies project a growing amount of brand experience, yet the distance between individuals and brands is widening. This study advances the application of experience theory to the brand dimension by clarifying the boundaries and specifying the relational mechanisms that occur between the two main constructs, brand and experience, within a setting in which the individual is the protagonist. Field research in the Thununiversum context (Eisenhardt & Graebner, 2007) is combined with analysis of existing theory to develop the findings of the study (Burawoy, 1991). This qualitative study explores the main brand meanings that precede and then promote experiential brand consumption. Then, building on the experience literature, a typology is derived that classifies the core dimensions within a brand experience setting. The findings suggest that the experience provided by a brand contributes to a process of requalification of individual meanings. The emerging model finally attempts to outline a new definition of the brand experience construct.
9

SARTOR, MAUREEN A. "TESTING FOR DIFFERENTIALLY EXPRESSED GENES AND KEY BIOLOGICAL CATEGORIES IN DNA MICROARRAY ANALYSIS." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1195656673.


Book chapters on the topic "Empirical p-Value":

1

Gill, Ofer, and Bud Mishra. "SEPA: Approximate Non-subjective Empirical p-Value Estimation for Nucleotide Sequence Alignment." In Computational Science – ICCS 2006, 638–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11758525_87.

2

Meyer, Vanessa, Sarah Lang, and Payam Dehdari. "Cargo-Hitching in Long-Distance Bus Transit: An Acceptance Analysis." In iCity. Transformative Research for the Livable, Intelligent, and Sustainable City, 77–89. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92096-8_7.

Abstract:
The combination of freight transport and passenger mobility, also known as cargo-hitching, is a form of delivery that has been implemented in various modes of transport. The concept is already widely used in long-distance bus transport in Europe, Africa and North America, where parcels are delivered in the cargo compartment of long-distance buses. This paper investigates the acceptance of cargo-hitching in long-distance bus transport in Germany. For this purpose, the term cargo-hitching is first defined and an overview of cargo-hitching concepts in long-distance bus transport worldwide is given, followed by an explanation of the principles of attitudinal acceptance. A modified version of the UTAUT2 model was used as the basis for an empirical study in the form of a quantitative online survey (n = 245). The results provide information about factors influencing acceptance as well as the wishes and requirements of potential users. Parts of the UTAUT2 model were verified by regression analysis: the variables habit, price value, hedonic motivation, performance expectancy and social influence significantly predict the behavioural intention to use cargo-hitching in our sample (p < 0.05). Furthermore, risks, benefits and willingness to pay were determined, which could contribute to the development of a business model. These include measures to improve the transparency, security and information flow of the cargo-hitching process.
3

Hartung, Joachim, Bärbel Elpelt-Hartung, and Guido Knapp. "A Modified Gauss Test for Correlated Samples with Application to Combining Dependent Tests or P-Values." In Empirical Economic and Financial Research, 145–57. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03122-4_9.

4

Ramos-Galarza, Carlos, Hugo Arias-Flores, Omar Cóndor-Herrera, and Janio Jadán-Guerrero. "Literacy Toy for Enhancement Phonological Awareness: A Longitudinal Study." In Lecture Notes in Computer Science, 371–77. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58805-2_44.

Abstract:
This report presents the results of a longitudinal pre-experimental study of a technological intervention to stimulate phonological awareness through a tangible reading toy based on RFID technology, consisting of a teddy bear and 30 3D letters of the Spanish alphabet. The study started with a sample of 200 children, from whom 17 children aged between 6 and 7 years (Mage = 6.47, SD = .51) with a phonological disorder were selected from an educational institution. The procedure consisted of obtaining pre-test and post-test values with the Evaluation of Phonological Awareness (PECFO). The inclusion criteria considered children presenting problems in recognizing phonemes and relating them to graphemes. An intervention with the technological toy was carried out over 30 weeks, and at the end of the sessions the post-test was applied. The phonological awareness results showed statistically significant differences between the pre-test (M = 12.88, SD = 3.53) and post-test (M = 17.17, SD = 2.96), t(16) = −3.67, p = .002, contributing empirical evidence of the intervened group's improvement in this cognitive function. Building on this research, technological innovations are proposed to contribute to the treatment of children's cognitive difficulties.
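As a hedged illustration of the pre/post comparison reported above, the sketch below runs a paired t-test of the same form. The scores are simulated stand-ins calibrated to the reported means and standard deviations, not the actual PECFO data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 17
pre = rng.normal(12.88, 3.53, n)                # pre-test scores (simulated)
post = pre + rng.normal(17.17 - 12.88, 2.0, n)  # correlated post-test gain
t, p = stats.ttest_rel(pre, post)               # paired-samples t-test
print(f"t({n - 1}) = {t:.2f}, p = {p:.3f}")
```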
5

Kraus, Jakob, Simon Brehm, Cameliu Himcinschi, and Jens Kortus. "Structural and Thermodynamic Properties of Filter Materials: A Raman and DFT Investigation." In Multifunctional Ceramic Filter Systems for Metal Melt Filtration, 111–34. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-40930-1_5.

Abstract:
The contribution focuses on the accurate prediction of heat capacities for intermetallics, the estimation of reaction paths for coated and uncoated alumina foam filters in contact with metallic melts, and the investigation of thermally induced changes in various filters and filter components. Density functional theory (DFT) provided isobaric heat capacities for Al–Fe and Al–Fe–Si systems that outclassed the empirical Neumann–Kopp rule and matched experimental values over a wide temperature range. Moreover, DFT calculations clarified that the formation of hercynite at the interface between alumina filters and steel melt was the result of a solid-state reaction involving high concentrations of FeO. Ex-situ Raman spectroscopy was used to compare carbon-bonded alumina filters using different binders, from Carbores®P to environmentally friendly lactose/tannin, as a function of heat treatment. For these carbon-bonded filters, the prominent D and G bands were used to confirm the existence of graphitization processes and to determine the size of the graphite clusters resulting from these processes. In order to investigate the pyrolysis processes occurring in selected binder constituents of the lactose/tannin filters, the evolution of the Raman spectra with temperature was analyzed via in-situ measurements. Wherever appropriate, experimental Raman data were compared with DFT-simulated spectra. Further, Raman spectroscopy was used to study the thermally induced formation of metastable alumina, helping to understand the structural changes that take place during the transformation of boehmite (γ-AlO(OH)) to corundum (α-Al2O3) via the metastable transition phases γ-Al2O3, δ-Al2O3, and θ-Al2O3.
6

Bausell, R. Barker. "False-Positive Results and a Nontechnical Overview of Their Modeling." In The Problem with Science, 39–55. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197536537.003.0003.

Abstract:
This chapter explores three empirical concepts (the p-value, the effect size, and statistical power) integral to the avoidance of false-positive scientific results. Their relationship to reproducibility is explained in a nontechnical manner without formulas or statistical jargon, with p-values and statistical power presented as probabilities from zero to 1.0; the values of most interest to scientists are 0.05 (synonymous with a positive, hence publishable, result) and 0.80 (the most commonly recommended probability that a positive result will be obtained if the hypothesis that generated it is correct and the study is properly designed and conducted). Unfortunately, many scientists circumvent both by artifactually inflating the 0.05 criterion, overstating the available statistical power, and engaging in a number of other questionable research practices. These issues are discussed via statistical models from the genetic and psychological fields and then extended to a range of p-values, statistical power levels, effect sizes, and prevalences of "true" effects expected to exist in the research literature. Among the basic conclusions of these modeling efforts is that employing more stringent p-values and larger sample sizes constitutes the most effective statistical approach for increasing the reproducibility of published results in all empirically based scientific literatures. The chapter thus lays the foundation for understanding and appreciating the effects of appropriate p-values, sufficient statistical power, realistic effect sizes, and the avoidance of questionable research practices on the production of reproducible results.
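The kind of modeling the chapter describes can be reproduced with the standard identity: if a fraction pi of tested hypotheses is true, power is (1 − beta) and the significance criterion is alpha, then the probability that a significant result reflects a true effect is PPV = pi(1 − beta) / (pi(1 − beta) + (1 − pi)alpha). A minimal sketch follows; the parameter grid is an illustrative assumption, not the chapter's own numbers.

```python
def ppv(prior, power, alpha):
    """Probability that a statistically significant result is a true effect."""
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

# More stringent alpha and higher power raise reproducibility.
for alpha in (0.05, 0.005):
    for power in (0.35, 0.80):
        print(f"alpha={alpha:<6} power={power:<5} "
              f"PPV={ppv(prior=0.10, power=power, alpha=alpha):.2f}")
```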
7

Barney, Jay B. "Strategic Factor Markets: Expectations, Luck, and Business Strategy." In Resources, Firms, And Strategies, 146–60. Oxford University PressOxford, 1997. http://dx.doi.org/10.1093/oso/9780198781806.003.0012.

Abstract:
Research on corporate growth through acquisitions and mergers suggests the existence of markets for buying and selling companies (Porter 1980, p. 350; Schall 1972; Mossin 1973; Copeland and Weston 1979). Most empirical evidence seems to suggest that these markets are reasonably competitive. That is, the price an acquiring firm will generally have to pay to acquire a firm in these markets is approximately equal to the discounted present value of the acquired firm (Mandelker 1974; Halpern 1973; Ellert 1976). Indeed, if above-normal returns accrue to anyone in markets for companies, research seems to suggest that they will most likely go to the stockholders of the acquired, rather than the acquiring, firms (Porter 1980, p. 352; Ellert 1976).
8

Reina, Rocco, Concetta Lucia Crtistofaro, Anna Maria Melina, and Marzia Ventura. "Cultural Organizations Push for Territory's Growth." In Advances in Business Strategy and Competitive Advantage, 138–59. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2050-4.ch008.

Abstract:
If "entrepreneurship has become the engine of world economic and social development" (Audretsch, 2003, p. 5), culture is increasingly a specific context in which it is possible to invest and create new opportunities for labor and value. The principal aim of this contribution is to understand how cultural organizations can influence the environment and local development. Through the analysis of an empirical case of success, the work highlights the indicators able to create virtuous relationships among cultural organizations and the social and economic context. The work aims to contribute both theoretically and practically to the topic of cultural entrepreneurship, and its results can be used for further reflection in order to develop a framework with high practical relevance.
9

Smith, Gary. "Squeezing Blood from Rocks." In Distrust, 103–26. Oxford University PressOxford, 2023. http://dx.doi.org/10.1093/oso/9780192868459.003.0006.

Abstract:
The concept of statistical significance was developed by scientists to assess whether empirical results can be explained away by coincidence. However, a myopic focus on statistical significance fuels tenacious p-hacking quests for models with low p-values, and models judged by p-values alone are likely to fare poorly with fresh data. When we are told that a medication, business practice, or government policy has statistically significant effects and then find that the effects vanish or are trivial when the recommendations are followed, the credibility of scientific research is eroded.
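A small simulation makes the chapter's point concrete: if a researcher tries many noise-only predictors and keeps the smallest p-value, "significant" findings appear far more often than the nominal 5%. The setup below is an illustrative assumption, not the book's own example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_obs, n_predictors, n_studies = 50, 20, 1000

hits = 0
for _ in range(n_studies):
    y = rng.normal(size=n_obs)                   # outcome is pure noise
    X = rng.normal(size=(n_obs, n_predictors))   # 20 candidate predictors
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_predictors)]
    hits += min(pvals) < 0.05                    # keep only the best p-value

print(f"nominal false-positive rate: 0.05, hacked rate: {hits / n_studies:.2f}")
```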
10

Targowski, Andrew. "Asymmetric Communication." In Information Technology and Societal Development, 345–62. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-004-2.ch015.

Abstract:
This chapter defines a framework for the cross-cultural communication process, including efficiency and cost. The framework provides some directions for dialogue among civilizations, which is one of the main routes toward the creation of the universal civilization. A developed architectural design of the cross-cultural communication process is based on a universal system approach that not only considers the complexities of the various cultural hierarchies and their corresponding communication climates, but also compares and quantifies the culture-specific attributes with the intention of increasing efficiency levels in cross-cultural communication. The attributes for two selected cultures (Western-West and Egyptian) are estimated in a normative way using expert opinions, measured on a scale from 1 to 5 with 5 as the best value. Quantifying cultural richness (R), cultural efficiency (?), modified cultural differences (DMC) and cultural ability (B) reflects how a given culture's strength can overcome cultural differences and enhance its competitive advantage (V). Two components of the culture factor cost, explicit (CE) and implicit (CI), are defined, examined and quantified, not only to control the cost of doing business across cultures, but also to determine the amount of investment needed to overcome cultural differences in a global economy. In this new millennium, global organizations will increasingly focus on the critical value of the cross-cultural communication process, its efficiency, its competence, and its cost of doing business. In order to communicate successfully across cultures, knowledge and understanding of such cultural factors as values, attitudes, beliefs and behaviors should be acquired. Because culture is a powerful force that strongly influences communication behavior, culture and communication are inseparably linked. Worldwide, in the last 20 years, countries have experienced phenomenal growth in international trade and foreign direct investment, and they have likewise discovered the importance of cross-cultural communication. As a result, practitioners and scholars are paying attention to the fact that cultural dimensions influence management practices (Hofstede, 1980; Child, 1981; Triandis, 1982; Adler, 1983; Laurent, 1983; Maruyama, 1984). In recent years, empirical work in the cross-cultural arena has focused on the role of culture in employee behavior in communicating within business organizations (Tayeb, 1988). But current work on cross-cultural business communication has paid little attention either to (a) how to adapt the seminal works on general communication to the needs of intercultural business or (b) how to create new models more relevant to cross-cultural business exchanges (Limaye & Victor, 1991, p. 283). There are many focused empirical studies on cross-cultural communication between two specific cultures (e.g., Wong & Hildebrandt, 1983; Halpern, 1983; Victor, 1987; Eiler & Victor, 1988; Varner, 1988; Victor & Danak, 1990), but such results must be arguable when extrapolated across multiple cultures. The prevailing western classical linear and process models of communication (Shannon & Weaver, 1949; Berlo, 1960) neglect the complexity of cross-cultural communication. Targowski and Bowman (1988) developed a layer-based pragmatic communication process model which covered more variables than any previous model and indirectly addressed the role of cultural factors among their layer-based variables. In a similar manner, the channel ratio model for intercultural communication developed by Haworth and Savage (1989) has also failed to account completely for the multiple communication variables in cross-cultural environments. So far, there is no adequate model that can explain the cross-cultural communication process and efficiency, let alone estimate the cost of doing business with other cultures worldwide.

Conference papers on the topic "Empirical p-Value":

1

Zhu, Yuliang, Jing Ma, and Peipei Dong. "The Improvement of k-ε Model in the Turbulent Wave Boundary Layer." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79815.

Abstract:
Numerical modeling is one of the means of investigating the turbulent wave boundary layer, and many scholars have used various eddy-viscosity models to simulate turbulent wave boundary layer flow. Building on an analysis of existing models, this article uses more reasonable boundary conditions to establish an improved k-ε model of the turbulent wave boundary layer. Past models have two problems. First, the calculation domain is not unified, since some models compute over the whole water depth while others compute only over the boundary layer thickness. To address this, the model is subjected to a sensitivity analysis of velocity and eddy viscosity for various calculation domains, which shows that the velocity inside the boundary layer is insensitive, while the eddy viscosity is highly sensitive, to changes in the calculation domain. Second, a new integration adjustment coefficient p is introduced to deal with the five empirical constants of the k-ε model, which are difficult to tune. Although recommended values exist for these five constants, they do not generalize well, and most methods proposed to obtain better values are very complicated. In this article, the adjustment coefficient p, slightly smaller than 1, is placed before the diffusion term in the velocity equation. The results indicate that a reasonable eddy viscosity can easily be obtained by adjusting p. The modified model overcomes some shortcomings of previous models and achieves a better simulation.
2

Nilforoush, R., J. Nilimaa, N. Bagge, A. Puurula, U. Ohlsson, M. Nilsson, G. Sas, and L. Elfgren. "Fracture Energy of Concrete for Bridge Assessment." In IABSE Symposium, Wroclaw 2020: Synergy of Culture and Civil Engineering – History and Challenges. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2020. http://dx.doi.org/10.2749/wroclaw.2020.0692.

Abstract:
In numerical assessments of concrete bridges, the value of the concrete fracture energy GF plays an important role. However, the fracture energy is mostly only estimated from the concrete compressive strength using empirical formulae. In order to study methods of determining the concrete fracture energy of existing bridges, tests were carried out on 55-year-old concrete from a bridge tested to failure in Kiruna in northern Sweden. Uniaxial tensile tests were performed on notched cylindrical concrete cores drilled out from this and other bridges. In the paper, different methods of determining the concrete fracture energy are discussed and recommendations are given for assessment procedures.
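For context, the empirical estimate alluded to above is typically a power law of the fib Model Code 2010 form, G_F = 73 · f_cm^0.18, with G_F in N/m and the mean compressive strength f_cm in MPa. The constants below are quoted from memory as an assumption and should be verified against the code before use.

```python
def fracture_energy_mc2010(f_cm_mpa: float) -> float:
    """Empirical fracture energy estimate, fib Model Code 2010 style:
    G_F = 73 * f_cm**0.18  [N/m], f_cm = mean compressive strength [MPa]."""
    return 73.0 * f_cm_mpa ** 0.18

for f_cm in (30, 50, 80):
    print(f"f_cm = {f_cm} MPa -> G_F ~ {fracture_energy_mc2010(f_cm):.0f} N/m")
```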
3

Crawford, Victoria G. "Faster Guarantees of Evolutionary Algorithms for Maximization of Monotone Submodular Functions." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/229.

Abstract:
In this paper, the monotone submodular maximization problem (SM) is studied. SM is to find a subset of size kappa from a universe of size n that maximizes a monotone submodular objective function f. We show, using a novel analysis, that the Pareto optimization algorithm achieves a worst-case ratio of (1 − epsilon)(1 − 1/e) in expectation for every cardinality constraint kappa < P, where P ≤ n + 1 is an input, in O(nP ln(1/epsilon)) queries of f. In addition, a novel evolutionary algorithm, the biased Pareto optimization algorithm, is proposed that achieves a worst-case ratio of (1 − epsilon)(1 − 1/e − epsilon) in expectation for every cardinality constraint kappa < P in O(n ln(P) ln(1/epsilon)) queries of f. Further, the biased Pareto optimization algorithm can be modified to achieve a worst-case ratio of (1 − epsilon)(1 − 1/e − epsilon) in expectation for a single cardinality constraint kappa in O(n ln(1/epsilon)) queries of f. An empirical evaluation corroborates the theoretical analysis: the algorithms exceed the stochastic greedy solution value at roughly the number of queries one would expect based on the analysis.
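A hedged sketch of the kind of Pareto optimization (GSEMO-style) routine analyzed above: keep a population of subsets that are non-dominated with respect to (objective value, size), repeatedly mutate a random member by independent bit flips, and finally return the best subset within the cardinality budget. The coverage objective used in the demo is an assumption for illustration, not the paper's benchmark.

```python
import random

def gsemo_submodular(f, n, kappa, iterations=20000, seed=0):
    """Pareto optimization sketch for monotone submodular maximization.
    A set x dominates y if f(x) >= f(y) and |x| <= |y| (strict somewhere)."""
    rng = random.Random(seed)
    pop = [frozenset()]                       # start from the empty set
    for _ in range(iterations):
        x = rng.choice(pop)
        y = set(x)
        for e in range(n):                    # flip each element w.p. 1/n
            if rng.random() < 1.0 / n:
                y.symmetric_difference_update({e})
        y = frozenset(y)
        if len(y) > kappa:                    # keep sizes within the bound
            continue
        fy = f(y)
        if any(f(z) >= fy and len(z) <= len(y) for z in pop):
            continue                          # y is dominated, discard it
        pop = [z for z in pop if not (fy >= f(z) and len(y) <= len(z))]
        pop.append(y)
    return max((z for z in pop if len(z) <= kappa), key=f)

# Demo: maximum coverage, a monotone submodular function.
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {6}]
f = lambda s: len(set().union(*(sets[i] for i in s))) if s else 0
print(gsemo_submodular(f, n=len(sets), kappa=3))
```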
4

Wang, Siyang, Mofeng Qu, Huiqing Jiang, Yunjie Zhao, and Dong Yang. "Experimental Investigation on Heat Transfer and Frictional Characteristics of Vertical Upward Rifled Tube in the Advanced Ultra Supercritical Steam Generator." In ASME 2017 Power Conference Joint With ICOPE-17 collocated with the ASME 2017 11th International Conference on Energy Sustainability, the ASME 2017 15th International Conference on Fuel Cell Science, Engineering and Technology, and the ASME 2017 Nuclear Forum. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/power-icope2017-3292.

Abstract:
This paper experimentally studies the heat transfer and frictional characteristics of a single rifled tube with vertical upward flow over the parametric ranges of pressure P = 10.5–32 MPa, mass flux G = 300–1300 kg·m−2·s−1, and heat flux q = 275–845 kW·m−2. The results show that in the subcritical pressure region, dryout is the predominant mode of heat transfer deterioration, while in the near-critical pressure region, departure from nucleate boiling (DNB) occurs as the q/G value increases. In the supercritical pressure region, the heat transfer and frictional characteristics are strongly influenced by the sharp changes in the thermophysical properties of supercritical water when the mass flux is lower than approximately 1000 kg·m−2·s−1 in this experiment. The mass flux and the pressure are the two crucial factors in the variations of the total pressure drop and the frictional pressure drop. An empirical correlation is selected to estimate the frictional pressure drop; the results indicate that at low mass fluxes the calculated value significantly underestimates the experimental data in the large-specific-heat region, whereas when the mass flux is larger than 1000 kg·m−2·s−1 the calculated value agrees well with the experimental data. (CSPE)
5

Heričko, Tjaša, Boštjan Šumak, and Saša Brdnik. "Towards Representative Web Performance Measurements with Google Lighthouse." In 7th Student Computer Science Research Conference. University of Maribor Press, 2021. http://dx.doi.org/10.18690/978-961-286-516-0.9.

Abstract:
Web performance testing with tools such as Google Lighthouse is a common task in software practice and research. However, variability in time-based performance measurement results is quickly observed when using the tool, even if the website has not changed; this can occur due to variability in the network, web server, and client devices. In this paper, we investigated how this challenge has been addressed in the existing literature. Furthermore, an experiment was conducted highlighting how unrepresentative measurements can result from single runs; researchers and practitioners are therefore advised to run performance tests multiple times and to aggregate the results. Based on the empirical results, 5 consecutive runs aggregated with the median reduce variability greatly and can be performed in a reasonable time. The study's findings alert readers to potential pitfalls when using single-run measurement results and serve as guidelines for future use of the tool.
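A minimal sketch of the recommended practice, assuming the Lighthouse CLI (`lighthouse`) is installed via npm and that its JSON report exposes `categories.performance.score`; both the flags and the JSON path are believed correct but should be checked against your tool version, and the target URL is hypothetical.

```python
import json
import statistics
import subprocess

URL = "https://example.com"   # hypothetical target page
scores = []
for _ in range(5):            # 5 consecutive runs, per the paper's advice
    out = subprocess.run(
        ["lighthouse", URL, "--output=json", "--output-path=stdout",
         "--quiet", "--chrome-flags=--headless"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    scores.append(report["categories"]["performance"]["score"])

print("runs:", scores, "-> median:", statistics.median(scores))
```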
6

He, Xiang, and Kam K. Leang. "A New Quasi-Steady In-Ground Effect Model for Rotorcraft Unmanned Aerial Vehicles." In ASME 2019 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/dscc2019-9025.

Abstract:
This paper introduces a new quasi-steady in-ground-effect model for rotorcraft unmanned aerial vehicles to predict the aerodynamic behavior when the vehicle's rotors approach a ground plane. The model assumes that the compression of the outflow due to the presence of the ground plane induces a change in the induced velocity that can drastically affect the thrust and power output. The new empirical model describes the change in thrust as a function of the distance to an obstacle for a rotor in hover. Using blade element theory and the method of images, the model parameters are described in terms of the rotor pitch angle and solidity. Experiments with off-the-shelf fixed-pitch propellers and 3D-printed variable-pitch propellers are carried out to validate the model. Experimental results suggest good agreement, with 9.5% root-mean-square error (RMSE) and statistical significance at the 97% level.
7

Silva, Thiago Geraldo, Luis Kin Miyatake, Rafael Madeira Barbosa, Andre Goncalves Medeiros, Otavio Ciribelli Borges, Marcia Cristina Oliveira, and Felipe Mauro Cardoso. "AI Based Water-in-Oil Emulsions Rheology Model for Value Creation in Deepwater Fields Production Management." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31173-ms.

Abstract:
This work presents a new paradigm in the Exploration & Production (E&P) segment: using artificial intelligence for the rheological mapping of produced fluids and for forecasting their properties throughout the production life cycle. The expected gain is to accelerate the prioritization of target fields for the application of flow improvers and, as a consequence, to generate revenue anticipation and value creation. Rheological data from laboratory analyses of water-in-oil emulsions from different production fields, collected over the years, are used in a machine learning framework based on supervised learning. The model infers the emulsion viscosity as a function of input parameters such as API gravity, water cut and dehydrated oil viscosity. The modeling of emulsified fluids has traditionally used correlations that, in general, do not represent the emulsion viscosity suitably; currently, an improvement over empirical correlations can be achieved via rheological characterization using tests from onshore laboratories, which have built up a database for different Petrobras reservoirs over the years. The dataset used in the artificial intelligence framework yields a machine learning model with good generalization ability, showing a good match between experimental and calculated data in both the training and test datasets. The model is tested on a large number of oils from different reservoirs, over an extensive range of API gravity, and presents a suitable mean absolute percentage error while preserving the expected physical behavior of the emulsion viscosity curve. Consequently, this approach eliminates frequent sampling requirements, which means lower logistical costs and faster action in decision making with respect to flow improver injection. Moreover, by embedding the AI model into a numerical flow simulation software, the overall flow model can estimate production curves more reliably due to the better representation of the rheological fluid characteristics.
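A hedged sketch of a supervised model of this general shape: gradient-boosted trees mapping (API gravity, water cut, dehydrated oil viscosity) to emulsion viscosity, evaluated by mean absolute percentage error. The synthetic data generator, the feature set, and the model family are assumptions for illustration; the paper's actual data and model are proprietary.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
api = rng.uniform(15, 35, n)               # API gravity
wc = rng.uniform(0.0, 0.6, n)              # water cut (fraction)
mu_oil = 10 ** rng.uniform(0.5, 3.0, n)    # dehydrated oil viscosity, cP
# Synthetic emulsion viscosity: oil viscosity amplified by water cut.
mu_emu = mu_oil * (1 - wc) ** -2.5 * np.exp(rng.normal(0, 0.1, n))

X = np.column_stack([api, wc, np.log10(mu_oil)])
y = np.log10(mu_emu)                       # learn in log space
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(10 ** y_te, 10 ** model.predict(X_te))
print(f"test MAPE: {mape:.1%}")
```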
8

Banuti, Daniel T. "Supercritical Pseudo Boiling in Cubic Equations of State." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58788.

Abstract:
Today, modern combustion systems and advanced cycles often reach operating pressures exceeding the working fluid's or fuel's critical pressure. While the liquid-gas coexistence line is the dominant feature of the fluid state space at low pressures, a supercritical analog to boiling, pseudo boiling, exists at supercritical pressures. Pseudo boiling is the transcritical transition between supercritical liquid states and supercritical gaseous states, associated with peaks in heat capacity and thermal expansion; this transition occurs across a finite temperature interval. So far, the relation between the pseudo boiling line of tabulated high-fidelity p-v-T data and the behavior of efficient engineering cubic equations of state (EOS) has been unclear. In the present paper, we calculate the slope of the pseudo boiling line analytically from cubic equations of state. The Redlich-Kwong EOS leads to a constant value for all species, while the Peng-Robinson and Soave-Redlich-Kwong EOS yield a cubic dependency of the slope on the acentric factor. For more than twenty compounds with acentric factors ranging from −0.38 to 0.57, the calculated slopes are compared with NIST data and vapor pressure correlations; the Peng-Robinson EOS in particular matches the reference data very well. Classical empirical values of Guggenheim or Plank & Riedel are obtained analytically. Pseudo boiling predictions of the Peng-Robinson EOS are then compared to NIST data, examining deviations in the transition temperature interval and in the nondimensional parameters of the distributed latent heat. In particular, the different caloric behavior of tabulated fluid data for H2, N2, CO2, and H2O cannot be reproduced by the Peng-Robinson EOS. These results may open the way towards new EOS with specific emphasis on the Widom line and supercritical transition behavior.
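For reference, the Peng-Robinson equation of state discussed above has the standard textbook form sketched below; the parameter expressions are quoted from the usual formulation as an assumption and should be checked against the paper's notation.

```latex
% Peng-Robinson equation of state (standard textbook form)
p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2bv - b^{2}},
\qquad a = 0.45724\,\frac{R^{2}T_{c}^{2}}{p_{c}},
\qquad b = 0.07780\,\frac{RT_{c}}{p_{c}},
% with the temperature correction and acentric-factor polynomial
\alpha(T) = \left[1 + \kappa\left(1 - \sqrt{T/T_{c}}\right)\right]^{2},
\qquad \kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^{2}.
```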
9

Ohara, Junichi, and Shigeru Koyama. "Falling Film Evaporation of Binary Refrigerant Mixture in Vertical Rectangular Minichannels Consisting of Serrated-Fins." In ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels collocated with the ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icnmm2014-22184.

Abstract:
The heat transfer characteristics of the vertical falling-film evaporation of the binary refrigerant mixture HFC134a/HCFC123 are investigated experimentally in rectangular minichannels consisting of offset strip fins. The refrigerant liquid is supplied uniformly to the channel through a distributor; the liquid flowing down vertically is heated electrically from the rear wall of the channel and evaporated. To observe the flow patterns during the evaporation process directly, a small circular window is set at the center of every section of the front wall. The experimental parameters are as follows: mass velocity G = 28–70 kg/(m2s), heat flux q = 30–50 kW/m2 and pressure P ≈ 100–260 kPa. For large mass velocities, G ≥ 55 kg/(m2s), the heat transfer coefficient becomes lower with increasing mass fraction wb of the low-boiling component HFC134a in the region x ≥ 0.3. The main reason for this trend in α is considered to be that the shearing force acting on the liquid-vapor interface becomes smaller, because the vapor velocity is suppressed by the higher pressure in the test evaporator at larger mass fractions of the low-boiling component. Additionally, the mass diffusion resistances formed on the vapor and liquid sides of the liquid-vapor interface are considered a possible cause of the reduction in the heat transfer coefficient α with increasing mass fraction wb. In the region x ≥ 0.8, α drops rapidly regardless of the value of wb, which can be attributed to dry-out of the heat transfer area. The heat transfer coefficients derived from the experiments are compared with those calculated from an empirical correlation for the heat transfer coefficient of a pure refrigerant in a vertical falling-film plate-fin evaporator.
10

Chaikovska, Olha, Oksana Palyliulko, Liudmyla Komarnitska, and Maryna Ikonnikova. "Impact of mindset activities on psychological well-being and efl skills of engineering students in wartime." In 22nd International Scientific Conference Engineering for Rural Development. Latvia University of Life Sciences and Technologies, Faculty of Engineering, 2023. http://dx.doi.org/10.22616/erdev.2023.22.tf057.

Abstract:
Currently, the educational system of Ukraine is experiencing hard times due to full-scale war. A literature review revealed that military actions harm the mental health of civilians and cause fear, stress, loneliness and burnout; learning strategies in wartime should therefore support effective and psychologically comfortable assimilation of new knowledge. The present study evaluates the efficacy of mindset activities in boosting the psychological well-being of engineering students and improving their EFL grammar and vocabulary skills. A two-group pre- and post-test design was used with first-year engineering students. The PHQ-9 depression questionnaire was used to measure the rate of depression before and after treatment. The questionnaire responses in the experimental group showed that the share of students with moderate depression decreased to 15% after treatment, the share with minimal depression increased by 20%, and the change in mild depression was minor (5%). By contrast, the pre- and post-treatment responses in the control sample showed no significant shift from severe to minimal rates of depression. To compare students' scores on the EFL pre- and post-treatment grammar and vocabulary tests, Fisher's angular transformation (φ*) criterion was applied: twenty-five students in the experimental group (83.3%) passed the test (n1 = 30), while twenty students (57.1%) passed in the control sample (n2 = 35), and the obtained empirical value φ (2.359) exceeds the critical value φ (1.64) at P < 0.05. Based on the data obtained, it can be concluded that employing mindset activities in EFL classes during wartime can help reduce the rate of depression among engineering students and improve EFL grammar and vocabulary skills.
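The reported criterion can be reproduced with Fisher's angular (arcsine) transformation: φ = 2 arcsin√p for each group's pass rate, and φ_emp = (φ1 − φ2)·√(n1·n2 / (n1 + n2)), compared against the one-sided critical value 1.64 at P < 0.05. A quick check of the abstract's own numbers:

```python
import math

def phi_star(p1, n1, p2, n2):
    """Fisher angular transformation criterion for two proportions."""
    phi1 = 2 * math.asin(math.sqrt(p1))
    phi2 = 2 * math.asin(math.sqrt(p2))
    return (phi1 - phi2) * math.sqrt(n1 * n2 / (n1 + n2))

emp = phi_star(25 / 30, 30, 20 / 35, 35)
print(f"phi_emp = {emp:.3f} (critical value 1.64 at P < 0.05)")  # ~2.36
```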

Reports of organizations on the topic "Empirical p-Value":

1

Lunsford, Kurt G., and Kenneth D. West. Random Walk Forecasts of Stationary Processes Have Low Bias. Federal Reserve Bank of Cleveland, August 2023. http://dx.doi.org/10.26509/frbc-wp-202318.

Abstract:
We study the use of a zero mean first difference model to forecast the level of a scalar time series that is stationary in levels. Let bias be the average value of a series of forecast errors. Then the bias of forecasts from a misspecified ARMA model for the first difference of the series will tend to be smaller in magnitude than the bias of forecasts from a correctly specified model for the level of the series. Formally, let P be the number of forecasts. Then the bias from the first difference model has expectation zero and a variance that is O(1/P²), while the variance of the bias from the levels model is generally O(1/P). With a driftless random walk as our first difference model, we confirm this theoretical result with simulations and empirical work: random walk bias is generally one-tenth to one-half that of an appropriately specified model fit to levels.
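A quick simulation of the claim, under assumed AR(1) dynamics: the random-walk ("no-change") forecast has bias centered on zero with small dispersion across replications, while the levels model estimated on a finite sample shows bias dispersion of a larger order. The data-generating choices below are illustrative assumptions, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(4)
R, T, P, rho = 2000, 200, 40, 0.9   # replications, sample, forecasts, AR(1)

bias_rw, bias_lvl = [], []
for _ in range(R):
    e = rng.normal(size=T + P)
    y = np.zeros(T + P)
    for t in range(1, T + P):       # stationary AR(1) in levels
        y[t] = rho * y[t - 1] + e[t]
    mu_hat = y[:T].mean()           # levels model: forecast the sample mean
    errs_rw = y[T:] - y[T - 1:-1]   # random walk: forecast = last value
    errs_lvl = y[T:] - mu_hat
    bias_rw.append(errs_rw.mean())
    bias_lvl.append(errs_lvl.mean())

print("variance of random-walk bias :", round(float(np.var(bias_rw)), 4))
print("variance of levels-model bias:", round(float(np.var(bias_lvl)), 4))
```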
2

Araya, Mesele, Caine Rolleston, Pauline Rose, Ricardo Sabates, Dawit Tibebu Tiruneh, and Tassew Woldehanna. Understanding the Impact of Large-Scale Educational Reform on Students’ Learning Outcomes in Ethiopia: The GEQIP-II Case. Research on Improving Systems of Education (RISE), January 2023. http://dx.doi.org/10.35489/bsg-rise-wp_2023/125.

Abstract:
The Ethiopian education system has been very dynamic in recent years, with a series of large-scale education program interventions, such as the Second Phase of the General Education Quality Improvement Project (GEQIP-II), that aimed to improve student learning outcomes. Despite the large-scale programs, empirical studies assessing how such interventions have worked and who benefited from the reforms are limited. This study aims to understand the impact of the reform on Grade 4 students' maths learning outcomes over a school year, using two comparable Grade 4 cohorts from 33 common schools in the Young Lives (YL, 2012-13) and RISE (2018-19) surveys. We employ matching techniques to estimate the effects of the reform, accounting for the baseline observable characteristics of the two cohorts matched within the same schools. Results show that the RISE cohort started the school year with a lower average test score than the YL cohort: at the start of Grade 4, the Average Treatment Effect on the Treated (ATT) is lower by 0.36 SD (p<0.01). In terms of learning gain over the school year, however, the RISE cohort shows a modestly higher value-added than the YL cohort, with an ATT of 0.074 SD (p<0.05). The learning gain is particularly higher for students in rural schools (0.125 SD, p<0.05), and stronger among rural boys (0.184 SD, p<0.05) than among rural girls. We consider the implications of our results from a system dynamics perspective, in that the GEQIP-II reform induced unprecedented access to primary education: the national Net Enrolment Rate (NER) rose from 85.7 percent in 2012-13 to 95.3 percent in 2019-20, equivalent to nearly 3 million additional learners in primary education at the national level. Learning levels have thus not increased in tandem with enrolment, and the unprecedented access for nearly all children might create pressure on the school system. Current policy efforts should therefore focus on sustaining learning gains for all children while creating better access.
3

Brosh, Arieh, David Robertshaw, Yoav Aharoni, Zvi Holzer, Mario Gutman, and Amichai Arieli. Estimation of Energy Expenditure of Free Living and Growing Domesticated Ruminants by Heart Rate Measurement. United States Department of Agriculture, April 2002. http://dx.doi.org/10.32747/2002.7580685.bard.

Abstract:
The research objectives were: 1) to study the effect of diet energy density, level of exercise, thermal conditions and reproductive state on cardiovascular function as it relates to oxygen (O2) mobilization; 2) to validate the use of heart rate (HR) to predict the energy expenditure (EE) of ruminants, by measuring and calculating the energy balance components at different productive and reproductive states; and 3) to validate the use of HR to identify changes in the metabolizable energy (ME) and ME intake (MEI) of grazing ruminants. Background: The development of an effective method for the measurement of EE is essential for understanding the management of both grazing and confined feedlot animals. The use of HR as a method of estimating EE in free-ranging large ruminants has been limited by the availability of suitable field monitoring equipment and by the absence of an empirical understanding of the relationship between cardiac function and metabolic rate. Recent developments in microelectronics provide a good opportunity to use small HR devices to monitor free-range animals. The estimation of O2 uptake (VO2) from HR has to be based upon a consistent relationship between HR and VO2; whether, or to what extent, feeding level, environmental conditions and reproductive state affect such a relationship was still unanswered. Studies on the basic physiology of O2 mobilization (in the USA) and field- and feedlot-based investigations (in Israel) covered a variety of conditions in order to investigate the possibilities of using HR to estimate EE. The physiological studies conducted in the USA, using animals with implanted flow probes, show that: 1) although stroke volume decreases during intense exercise, VO2 per heart beat per kgBW0.75 (O2 pulse, O2P) actually increases, and measurement of EE by HR and a constant O2P may underestimate VO2 unless the slope of the regression relating heart rate to VO2 is also determined; 2) alterations in VO2 associated with the level of feeding, and the effects of feeding itself, have no effect on O2P; 3) both pregnancy and, especially, lactation may increase blood volume, but they have no effect on O2P; 4) ambient temperature in the range of 15 to 25°C has no effect on O2P in the resting animal; and 5) severe heat stress induced by exercise elevates body temperature to such an extent that 14% of cardiac output may be required to dissipate the heat generated by exercise rather than for O2 transport; however, this is an unusual situation, and its effect on EE estimation in a freely grazing animal, especially when heart rate is monitored over several days, is minor. In Israel, three experiments were carried out in the hot summer to define changes in O2P attributable to the time of day or to the heat load. The animals used were lambs and young calves in the growing phase and high-yielding dairy cows. In the growing animals, the time of day and the heat load affected HR and VO2 but had no effect on O2P; on the other hand, the O2P measured in lactating cows was affected by the heat load, similar to the finding of the USA study on sheep. Energy balance trials were conducted to compare MEI recovery by the retained energy (RE) and by EE as measured by HR and O2P. The trial hypothesis was that if HR reliably estimated EE, the ratio of MEI to (EE+RE) would not be significantly different from 1.0. Beef cows along a year of their reproductive cycle and growing lambs were used.
The MEI recoveries of the two trials were not significantly different from 1.0, at 1.062±0.026 and 0.957±0.024 respectively. The cows' reproductive state did not affect the O2P, similar to the finding of the USA study. Pasture ME content and animal variables such as HR, VO2, O2P and EE of cows on grazing and in confinement were measured throughout three years under twenty-nine combinations of herbage quality and the cows' reproductive state. In twelve grazing states, individual faecal output (FO) was measured and MEI was calculated. Regression analyses of EE and RE as functions of MEI were highly significant (P<0.001). The predicted value of EE at zero intake (78 kcal/kgBW0.75) was similar to that estimated by NRC (1984). The EE at the maintenance condition of the grazing cows (EE=MEI, 125 kcal/kgBW0.75) is within the range of 96.1 to 125.5 presented by NRC (1996, pp. 6-7) for beef cows. Average daily HR and EE were significantly increased by lactation (P<0.001 and P<0.02 respectively). Grazing ME significantly increased HR and EE (P<0.001 for both). In contradiction to the findings in confined ewes and cows, the O2P of the grazing cows was significantly affected by the combined treatments (P<0.001); this effect was significantly related to the diet ME (P<0.001) and consequently to the MEI (P<0.03). Grazing significantly increased O2P compared to confinement. So, when the EE of grazing animals during a certain season of the year is estimated using the HR method, the O2P must be re-measured whenever the grazing ME changes. A high correlation (R2>0.96) of group-average EE and HR with MEI was also found in confined cows fed six different diets and in growing lambs on three diets. In conclusion, the studies conducted in the USA and in Israel investigated in depth the physiological mechanisms of cardiovascular function and O2 mobilization, and went on to investigate a wide variety of ruminant species, ages, reproductive states, diet ME levels, times of intake and times of day, comparing these variables under grazing and confinement conditions. From these combined studies we can conclude that EE can be determined from HR measured over several days, multiplied by the O2P measured over a short period of time (10-15 min). The study also showed that RE could be determined during the growing phase without slaughtering. In the near future, the development of microelectronic devices will enable wide use of the HR method to determine EE and energy balance, opening new scopes of physiological and agricultural research while minimizing the strain on animals.
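The core calculation the report validates is simple enough to sketch: multiply the multi-day mean heart rate by the short-measurement O2 pulse to get VO2, then convert to energy with an energetic equivalent of oxygen. The ~20.5 kJ per litre of O2 figure is a common literature value assumed here rather than taken from the report, and the example inputs are hypothetical; with these numbers the result (~148 kcal/kgBW^0.75 per day) lands near the maintenance figure quoted above.

```python
def daily_ee(mean_hr_bpm, o2_pulse_ml, kj_per_l_o2=20.5):
    """Energy expenditure via the HR method: EE = HR x O2 pulse.

    mean_hr_bpm : average heart rate over several days [beats/min]
    o2_pulse_ml : O2 uptake per beat per kgBW^0.75 [mL O2/beat]
    Returns daily EE per kgBW^0.75 in kJ.
    """
    vo2_l = mean_hr_bpm * 60 * 24 * o2_pulse_ml / 1000  # litres O2 per day
    return vo2_l * kj_per_l_o2

ee = daily_ee(70, 0.30)  # hypothetical HR and O2 pulse
print(f"{ee:.0f} kJ (~{ee / 4.184:.0f} kcal) per kgBW^0.75 per day")
```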
