Academic literature on the topic 'Economics – Statistical models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Economics – Statistical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Economics – Statistical models"

1

de Paula, Áureo. "Econometric Models of Network Formation." Annual Review of Economics 12, no. 1 (August 2, 2020): 775–99. http://dx.doi.org/10.1146/annurev-economics-093019-113859.

Abstract:
This article provides a selective review of the recent literature on econometric models of network formation. I start with a brief exposition on basic concepts and tools for the statistical description of networks; then I offer a review of dyadic models, focusing on statistical models on pairs of nodes, and I describe several developments of interest to the econometrics literature. I also present a discussion of nondyadic models in which link formation might be influenced by the presence or absence of additional links, which themselves are subject to similar influences. This argument is related to the statistical literature on conditionally specified models and the econometrics of game theoretical models. I close with a (nonexhaustive) discussion of potential areas for further development.
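As a rough illustration of the dyadic models reviewed here (statistical models on pairs of nodes), the sketch below simulates and estimates a simple link-formation logit in Python. It is not taken from the article; the sizes, names, and parameter values are invented.

```python
import numpy as np
import statsmodels.api as sm

# Minimal dyadic link-formation model: the probability that nodes i and j
# form a link is a logit in the covariate distance between them (homophily).
rng = np.random.default_rng(0)
n = 50                                  # number of nodes (hypothetical)
x = rng.normal(size=n)                  # one covariate per node
i, j = np.triu_indices(n, k=1)          # all unordered pairs (dyads)
dist = np.abs(x[i] - x[j])              # pairwise covariate distance
beta0, beta1 = 1.0, -2.0                # links more likely when nodes are close
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * dist)))
links = rng.binomial(1, p)              # observed adjacency, dyad by dyad

# Estimate the dyadic logit by maximum likelihood.
X = sm.add_constant(dist)
fit = sm.Logit(links, X).fit(disp=False)
print(fit.params)                       # should be close to (beta0, beta1)
```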
2

Rajan, Uday, Amit Seru, and Vikrant Vig. "Statistical Default Models and Incentives." American Economic Review 100, no. 2 (May 1, 2010): 506–10. http://dx.doi.org/10.1257/aer.100.2.506.

3

Wolak, Frank A., and A. Ronald Gallant. "Nonlinear Statistical Models." Journal of Business & Economic Statistics 6, no. 4 (October 1988): 518. http://dx.doi.org/10.2307/1391473.

4

Robinson, P. M., J. Pfanzagl, and W. Wefelmeyer. "Asymptotic Expansions for General Statistical Models." Economica 54, no. 214 (May 1987): 268. http://dx.doi.org/10.2307/2554408.

5

Bayón, L., and R. García-Rubio. "New computational and statistical models in science and economics." International Journal of Computer Mathematics 92, no. 9 (June 12, 2015): 1729–32. http://dx.doi.org/10.1080/00207160.2015.1049010.

6

Canova, Fabio. "Statistical inference in calibrated models." Journal of Applied Econometrics 9, S1 (December 1994): S123–S144. http://dx.doi.org/10.1002/jae.3950090508.

7

Dewey, M., D. Clayton, and M. Hills. "Statistical Models in Epidemiology." Journal of the Royal Statistical Society. Series A (Statistics in Society) 158, no. 2 (1995): 343. http://dx.doi.org/10.2307/2983301.

8

Branch, William A., Bruce McGough, and Mei Zhu. "Statistical sunspots." Theoretical Economics 17, no. 1 (2022): 291–329. http://dx.doi.org/10.3982/te3752.

Abstract:
This paper shows that belief‐driven economic fluctuations are a general feature of many determinate macroeconomic models. In environments with hidden state variables, forecast‐model misspecification can break the link between indeterminacy and sunspots by establishing the existence of “statistical sunspots” in models that have a unique rational expectations equilibrium. To form expectations, agents regress on a set of observables that can include serially correlated nonfundamental factors (e.g., sunspots, judgment, expectations shocks, etc.). In equilibrium, agents attribute, in a self‐fulfilling way, some of the serial correlation observed in data to extrinsic noise, i.e., statistical sunspots. This leads to sunspot equilibria in models with a unique rational expectations equilibrium. Unlike many rational sunspots, these equilibria are found to be generically stable under learning. Applications are developed in the context of a New Keynesian and an asset‐pricing model.
9

Krebs, Tom. "Statistical Equilibrium in One-Step Forward Looking Economic Models." Journal of Economic Theory 73, no. 2 (April 1997): 365–94. http://dx.doi.org/10.1006/jeth.1996.2231.

10

Consolo, Agostino, Carlo A. Favero, and Alessia Paccagnini. "On the statistical identification of DSGE models." Journal of Econometrics 150, no. 1 (May 2009): 99–115. http://dx.doi.org/10.1016/j.jeconom.2009.02.012.


Dissertations / Theses on the topic "Economics – Statistical models"

1

Tabri, Rami. "Empirical likelihood and constrained statistical inference for some moment inequality models." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119408.

Abstract:
The principal purpose of this thesis is to extend empirical likelihood (EL) based procedures to some statistical models defined by unconditional moment inequalities. We develop EL procedures for two such models in the thesis. In the first type of model, the underlying probability distribution is the (infinite-dimensional) parameter of interest, and is defined by a continuum of moment inequalities indexed by a general class of estimating functions. We develop the EL estimation theory using a feasible-value-function approach, and demonstrate the uniform consistency of the estimator over the set of underlying distributions in the model. Furthermore, for large sample sizes, we prove that it has smaller mean integrated squared error than the estimator that ignores the information in the moment inequality conditions. We also develop computational algorithms for this estimator, and demonstrate its properties in Monte Carlo simulation experiments for the case of infinite-order stochastic dominance. The second type of moment inequality model concerns stochastic dominance (SD) orderings between two income distributions. We develop asymptotic and bootstrap empirical likelihood-ratio tests for the null hypothesis that a given unidirectional strong SD ordering between the income distributions holds. These distributions are discrete with finite support, and, therefore, the SD conditions are framed as sets of linear inequality constraints on the vector of SD curve ordinates. Testing for strong SD requires that we consider as the null model one that allows at most one pair of these ordinates to be equal at an interior point of their support. Finally, we study the performance of these tests in Monte Carlo simulations.
2

Chow, Fung-kiu (鄒鳳嬌). "Modeling the minority-seeking behavior in complex adaptive systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29367487.

3

Cutugno, Carmen. "Statistical models for the corporate financial distress prediction." Thesis, Università degli Studi di Catania, 2011. http://hdl.handle.net/10761/283.

4

Grayson, James M. (James Morris). "Economic Statistical Design of Inverse Gaussian Distribution Control Charts." Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc332397/.

Abstract:
Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data. Often data are skewed. The inverse Gaussian distribution is a probability distribution that is well-suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse Gaussian distributed process.
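As a minimal sketch of one ingredient of such a design, the snippet below computes probability-based control limits from inverse Gaussian quantiles with scipy. The parameter values are hypothetical, and the full economic statistical design in the thesis also optimizes sample size, sampling interval, and limit width.

```python
from scipy import stats

# Control limits for an inverse-Gaussian distributed statistic with a
# false-alarm rate matching the usual 3-sigma Shewhart chart.
mu, lam = 10.0, 40.0        # hypothetical IG mean and shape parameters
alpha = 0.0027

# scipy parameterizes invgauss by mu/lam with scale=lam, so the mean is mu.
dist = stats.invgauss(mu / lam, scale=lam)
lcl, ucl = dist.ppf(alpha / 2), dist.ppf(1 - alpha / 2)
print(f"LCL={lcl:.3f}  CL={dist.mean():.3f}  UCL={ucl:.3f}")
```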
5

Valero, Rafael. "Essays on Sparse-Grids and Statistical-Learning Methods in Economics." Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71368.

Abstract:
The thesis consists of three chapters. The first is a study of the implementation of sparse-grid methods for the analysis of high-dimensional economic models, carried out through novel applications of Smolyak's method with the aim of improving tractability and obtaining precise results. The results show efficiency gains in the implementation of models with multiple agents. The second chapter introduces a new methodology for the evaluation of economic policies, called Synthetic Control with Statistical Learning, applied to two particular policies: (a) the reduction of the number of working hours in Portugal in 1996 and (b) the reduction of dismissal costs in Spain in 2010. The methodology works and stands as an alternative to previous methods. Empirically, it shows that after the implementation of the Portuguese policy there was an effective reduction in unemployment, while in the Spanish case unemployment increased. The third chapter uses the methodology of the second chapter, among others, to evaluate the implementation of the Third European Road Safety Action Program. The results show that European-level coordination of road safety has provided complementary support; for 2010, the estimated reduction in road deaths across Europe is between 13,900 and 19,400 people.
6

Donnelly, James P. "NFL Betting Market: Using Adjusted Statistics to Test Market Efficiency and Build a Betting Model." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/721.

Abstract:
The use of statistical analysis has been prevalent in the sports gambling industry for years. More recently, we have seen the emergence of "adjusted statistics", a more sophisticated way to examine each play and each result (further explanation below). And while adjusted statistics have become commonplace for professional and recreational bettors alike, little research has been done to justify their use. In this paper the effectiveness of this data is tested on the most heavily wagered sport in the world – the National Football League (NFL). The results are studied with two central questions in mind: Does the market account for the information provided by adjusted statistics? And, can this data be interpreted to create a profitable betting strategy? First, the Efficient Market Hypothesis is introduced and tested using these new variables. Then, a betting model is built and tested.
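A standard version of the efficiency test sketched in such studies regresses the realized margin of victory on the closing spread and jointly tests intercept 0 and slope 1. The snippet below runs that regression on simulated placeholder data; it illustrates the idea, not the thesis's dataset or exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
spread = rng.normal(0, 6, size=500)              # hypothetical closing spreads
margin = spread + rng.normal(0, 13, size=500)    # efficient market by construction

X = sm.add_constant(spread)
fit = sm.OLS(margin, X).fit()
print(fit.params)                        # approximately [0, 1] under efficiency
print(fit.f_test("const = 0, x1 = 1"))   # joint test of market efficiency
```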
7

Putnam, Kyle J. "Two Essays in Financial Economics." ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2010.

Abstract:
The following dissertation contains two distinct empirical essays which contribute to the overall field of Financial Economics. Chapter 1, entitled "The Determinants of Dynamic Dependence: An Analysis of Commodity Futures and Equity Markets," examines the determinants of the dynamic equity-commodity return correlations between five commodity futures sub-sectors (energy, foods and fibers, grains and oilseeds, livestock, and precious metals) and a value-weighted equity market index (S&P 500). The study utilizes the traditional DCC model, as well as three time-varying copulas: (i) the normal copula, (ii) the Student's t copula, and (iii) the rotated Gumbel copula as dependence measures. Subsequently, the determinants of these various dependence measures are explored by analyzing several macroeconomic, financial, and speculation variables over different sample periods. Results indicate that the dynamic equity-commodity correlations for the energy, grains and oilseeds, precious metals, and to a lesser extent the foods and fibers sub-sectors have become increasingly explainable by broad macroeconomic and financial market indicators, particularly after May 2003. Furthermore, these variables exhibit heterogeneous effects in terms of both magnitude and sign on each sub-sector's equity-commodity correlation structure. Interestingly, the effects of increased financial market speculation are found to be extremely varied among the five sub-sectors. These results have important implications for portfolio selection, price formation, and risk management. Chapter 2, entitled "US Community Bank Failure: An Empirical Investigation," examines the declining, but still pivotal, role of the US community banking industry. The study utilizes survival analysis to determine which accounting and macroeconomic variables help to predict community bank failure. Federal Deposit Insurance Corporation and Federal Reserve Bank data are utilized to compare 452 community banks which failed between 2000 and 2013, relative to a sample of surviving community banks. Empirical results indicate that smaller banks are less likely to fail than their larger community bank counterparts. Additionally, several unique bank-specific indicators of failure emerge which relate to asset quality and liquidity, as well as earnings ratios. Moreover, results show that use of the macroeconomic indicator of liquidity, the TED spread, provides a substantial improvement in predicting community bank failure.
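The essay's dependence measures (DCC and time-varying copulas) require dedicated estimation code, but a rolling correlation already gives a first look at dynamic equity-commodity dependence. The sketch below uses simulated returns with invented series names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2003-05-01", periods=1000, freq="B")
returns = pd.DataFrame({
    "sp500": rng.normal(0, 0.010, 1000),           # placeholder equity returns
    "energy_futures": rng.normal(0, 0.015, 1000),  # placeholder commodity returns
}, index=idx)

# 60-day rolling correlation as a crude stand-in for a DCC estimate.
rolling_corr = returns["sp500"].rolling(60).corr(returns["energy_futures"])
print(rolling_corr.dropna().tail())
```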
8

Ekiz, Funda. "Cagan Type Rational Expectations Model on Time Scales with Their Applications to Economics." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1126.

Abstract:
Rational expectations provide people or economic agents making future decisions with available information and past experiences. The first approach to the idea of rational expectations was given approximately fifty years ago by John F. Muth. Many models in economics have been studied using the rational expectations idea. The most familiar one among them is the rational expectations version of Cagan's hyperinflation model, where the expectation for tomorrow is formed using all the information available today. This model was reinterpreted by Thomas J. Sargent and Neil Wallace in 1973. After that time, many solution techniques were suggested to solve the Cagan type rational expectations (CTRE) model. Some economists such as Muth [13], Taylor [26] and Shiller [27] consider solutions admitting an infinite moving-average representation. Blanchard and Kahn [28] find solutions by using a recursive procedure. A general characterization of the solution was obtained using the martingale approach by Broze, Gourieroux and Szafarz in [22], [23]. We choose to study the martingale solution of the CTRE model. This thesis comprises five chapters whose main aim is to study the CTRE model on isolated time scales. Most of the models studied in economics are continuous or discrete. Discrete models are preferred by economists since they give more meaningful and accurate results. Discrete models only contain uniform time domains. Time scale calculus enables us to study m-periodic time domains as well as non-periodic time domains. In the first chapter, we give the basics of time scales calculus and stochastic calculus. The second chapter is a brief introduction to rational expectations and the CTRE model. Moreover, many other solution techniques are examined in this chapter. After we introduce the necessary background, in the third chapter we construct the CTRE model on isolated time scales. Then we give the general solution of this model in terms of martingales. We continue our work by defining the linear system and higher-order CTRE on isolated time scales. We use the Putzer Algorithm to solve the system of the CTRE model. Then, we examine the existence and uniqueness of the solution of the CTRE model. In the fourth chapter, we apply our solution algorithm developed in the previous chapter to models in finance and stochastic growth models in economics.
9

Wang, Junyi. "A Normal Truncated Skewed-Laplace Model in Stochastic Frontier Analysis." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1177.

Abstract:
Stochastic frontier analysis is an exciting method of economic production modeling that is relevant to hospitals, stock markets, manufacturing factories, and services. In this paper, we create a new model using the normal distribution and truncated skew-Laplace distribution, namely the normal-truncated skew-Laplace model. This is a generalized model of the normal-exponential case. Furthermore, we compute the true technical efficiency and estimated technical efficiency of the normal-truncated skewed-Laplace model. Also, we compare the technical efficiencies of normal-truncated skewed-Laplace model and normal-exponential model.
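For intuition, the base case that the thesis generalizes, the normal-exponential stochastic frontier, is easy to simulate: log output is the frontier plus symmetric noise v minus one-sided inefficiency u, and technical efficiency is exp(-u). The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1000
v = rng.normal(0, 0.2, n)            # symmetric measurement noise
u = rng.exponential(0.3, n)          # one-sided inefficiency, u >= 0
log_output = 1.0 + v - u             # log frontier normalized to 1.0

true_te = np.exp(-u)                 # technical efficiency per firm
print("mean technical efficiency:", true_te.mean())   # E[exp(-u)] = 1/1.3
```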
10

Bury, Thomas. "Collective behaviours in the stock market: a maximum entropy approach." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209341.

Abstract:
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definitions of risk, etc.). The lack of any characteristic scale and such elaborate behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance, but maximum entropy models are able to explain both scale invariance and collective behaviours.

The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rule design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state but rather oscillations between order and disorder.

In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates for modeling crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide some clues to crash precursors and may shed light on how a shock spreads in a financial network and whether it will lead to a crash. The natural continuation of the present work could be the study of such a mechanism.
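For continuous data with given means and covariances, the pairwise maximum entropy distribution is the multivariate Gaussian, whose couplings can be read off the precision (inverse covariance) matrix. The sketch below illustrates that construction on simulated returns; the thesis, of course, works with real market data.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 2000, 8                          # observations x assets (hypothetical)
B = rng.normal(size=(N, N)) / np.sqrt(N)
cov = B @ B.T + 0.5 * np.eye(N)         # random positive-definite covariance
returns = rng.multivariate_normal(np.zeros(N), cov, size=T)

C = np.cov(returns, rowvar=False)       # empirical covariance
J = np.linalg.inv(C)                    # precision matrix = pairwise couplings
partial_corr = -J / np.sqrt(np.outer(np.diag(J), np.diag(J)))
np.fill_diagonal(partial_corr, 0.0)
print(partial_corr.round(2))            # direct pairwise dependence structure
```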
Doctorate in Economic Sciences and Management


Books on the topic "Economics – Statistical models"

1

Schofield, Norman, ed. Advanced statistical methods in economics. Eastbourne: Holt Rinehart and Winston, 1986.

2

Lewis, Margaret. Applied statistics for economists. New York: Routledge, 2011.

3

Boivin, Jean. DSGE models in a data-rich environment. Cambridge, Mass.: National Bureau of Economic Research, 2006.

4

Urbain, Jean-Pierre. Exogeneity in error correction models. Berlin: Springer-Verlag, 1993.

5

Fuente, Angel de la. Mathematical methods and models for economists. Cambridge: Cambridge University Press, 1999.

6

Bajari, Patrick L., and National Bureau of Economic Research, eds. Estimating static models of strategic interaction. Cambridge, Mass.: National Bureau of Economic Research, 2006.

7

Dynamics of markets: The new financial economics. 2nd ed. New York: Cambridge University Press, 2009.

8

Mathematical methods and models for economists. Cambridge, UK: Cambridge University Press, 2000.

9

Myerson, Roger B. Probability models for economic decisions. Belmont, CA: Thomson Brooks/Cole, 2005.

10

Goh, T. N., and V. Kuralmani, eds. Statistical models and control charts for high-quality processes. Boston: Kluwer Academic Publishers, 2002.


Book chapters on the topic "Economics – Statistical models"

1

Snijders, Tom, and Marijtje van Duijn. "Simulation for Statistical Inference in Dynamic Network Models." In Lecture Notes in Economics and Mathematical Systems, 493–512. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-662-03366-1_38.

2

Lux, Thomas. "Masanao Aoki’s Solution to the Finite Size Effect of Behavioral Finance Models." In Complexity, Heterogeneity, and the Methods of Statistical Physics in Economics, 67–76. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4806-2_4.

3

Buckmann, Marcus, Andreas Joseph, and Helena Robertson. "Opening the Black Box: Machine Learning Interpretability and Inference Tools with an Application to Economic Forecasting." In Data Science for Economics and Finance, 43–63. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66891-4_3.

Abstract:
We present a comprehensive comparative case study for the use of machine learning models in macroeconomic forecasting. We find that machine learning models mostly outperform conventional econometric approaches in forecasting changes in US unemployment on a 1-year horizon. To address the black box critique of machine learning models, we apply and compare two variable attribution methods: permutation importance and Shapley values. While the aggregate information derived from both approaches is broadly in line, Shapley values offer several advantages, such as the discovery of unknown functional forms in the data generating process and the ability to perform statistical inference. The latter is achieved by the Shapley regression framework, which allows for the evaluation and communication of machine learning models akin to that of linear models.
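A minimal sketch of the first attribution method (permutation importance) on synthetic data, with the Shapley computation indicated in comments via the third-party shap package. Feature roles and values are invented; this is not the chapter's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))           # e.g. slack, inflation, rates (made up)
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
perm = permutation_importance(model, X, y, n_repeats=20, random_state=0)
print(perm.importances_mean)            # feature 0 should dominate

# Shapley values via the shap package, if installed:
#   import shap
#   explainer = shap.TreeExplainer(model)
#   shap_values = explainer.shap_values(X)   # (n_samples, n_features)
```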
4

Schnedler, Wendelin. "Statistical Model and Empirical Evidence." In Contributions to Economics, 89–120. Heidelberg: Physica-Verlag HD, 2004. http://dx.doi.org/10.1007/978-3-7908-2706-4_4.

5

Vandin, Andrea, Daniele Giachini, Francesco Lamperti, and Francesca Chiaromonte. "MultiVeStA: Statistical Analysis of Economic Agent-Based Models by Statistical Model Checking." In From Data to Models and Back, 3–6. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16011-0_1.

6

Varriale, Roberta, Fabiana Rocci, and Orietta Luzi. "Total Process Error framework: an application to economic statistical registers." In Proceedings e report, 147–51. Florence: Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-461-8.28.

Abstract:
In recent years, the Italian national institute of statistics (Istat), together with most National Statistical Institutes, has been progressively moving from traditional production models based on the use of primary sources of information - represented by direct surveys - to new production strategies based on the combined use of different primary and secondary sources of information. As a result, new multisource statistical processes have been built that guarantee a major improvement in both the amount and quality of information about several phenomena of public interest. In this context, the Total Process Error (TPE) framework has recently been proposed in the literature for assessing the quality of multisource processes. The TPE framework represents an evolution of Zhang's two-phase life-cycle approach, and it additionally includes an operational tool to connect the steps of the multisource production process to the phases of the quality evaluation framework. The TPE framework can be used both to support multisource process design and to monitor an entire production process, in order to provide key elements for assessing the quality of both the processes and their statistical outputs. In the present work, we describe, as a case study in the new context of Istat production of official statistics, the use of the TPE framework to support the process design of the Register for Public Administrations.
7

Price, Colin. "The Statistical Basis of Valuation: The Hedonic House Price Model." In Landscape Economics, 223–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54873-9_12.

8

Sarker, Mitalee, and Stefan Wesner. "Statistical Model Based Cloud Resource Management." In Economics of Grids, Clouds, Systems, and Services, 107–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13342-9_9.

9

Nguyen, Bao Hoang, Robin C. Sickles, and Valentin Zelenyuk. "Efficiency Analysis with Stochastic Frontier Models Using Popular Statistical Softwares." In Advances in Economic Measurement, 129–71. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2023-3_3.

10

Jia, Daniel Lukui. "Data, Statistics and Stylized Facts." In Dynamic Macroeconomic Models in Emerging Market Economies, 185–92. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4588-7_11.


Conference papers on the topic "Economics – Statistical models"

1

Hall, Russell K. "Evaluating Resource Plays with Statistical Models." In Hydrocarbon Economics and Evaluation Symposium. Society of Petroleum Engineers, 2007. http://dx.doi.org/10.2118/107435-ms.

2

Stachová, Mária, and Pavol Kráľ. "Statistical Learning Methods in Corporate Financial Distress Prediction of Slovak Enterprises: Comparison of Alternative Models." In International Days of Statistics and Economics 2019. Libuše Macáková, MELANDRIUM, 2019. http://dx.doi.org/10.18267/pr.2019.los.186.144.

3

Goldberg, Karin, and Lucas Goldberg Da Rosa. "APPLYING STATISTICAL ANALYSIS AND ECONOMICS MODELS TO UNSCRAMBLE THE DEPOSITIONAL SIGNALS FROM CHEMICAL PROXIES IN BLACK SHALES." In GSA Connects 2022 meeting in Denver, Colorado. Geological Society of America, 2022. http://dx.doi.org/10.1130/abs/2022am-378672.

4

Adámek, Pavel, and Lucie Meixnerová. "Changes and Adaptations of Business Models Caused by the Crisis Scenario." In Seventh International Scientific-Business Conference LIMEN Leadership, Innovation, Management and Economics: Integrated Politics of Research. Association of Economists and Managers of the Balkans, Belgrade, Serbia, 2021. http://dx.doi.org/10.31410/limen.s.p.2021.9.

Abstract:
Due to the fast-changing environment caused by the impact of the pandemic, a response in companies' behavior is inevitable. This pandemic crisis scenario triggers the search for changes, adjustment, and adaptation of business models to seek new opportunities for competitive advantage. Therefore, the paper aims to analyze, identify and evaluate the impact of the pandemic on a firm's business model, specifically the changes in its business elements. The research methodology applies a statistical apparatus, mainly the Mann-Whitney U test, using the econometric software EViews to identify the significance of individual business model elements within national economy sectors and branches before the pandemic and during the current post-pandemic crisis. Data were obtained from 173 Czech and Slovak companies' owners (executives). The findings represent the perception and view of businesses on the current post-pandemic crisis and their priority changes in specific elements of the business model.
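The paper's main statistical tool, the Mann-Whitney U test, is a one-liner in scipy. The sketch below compares hypothetical before/after ratings of one business model element; the values are invented, not the study's survey data.

```python
from scipy import stats

before = [4, 3, 5, 2, 4, 3, 4, 5, 3, 2]   # hypothetical Likert scores
after = [2, 3, 2, 1, 3, 2, 3, 2, 1, 2]

u_stat, p_value = stats.mannwhitneyu(before, after, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # small p => the ratings shifted
```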
5

Horák, Jakub, Petr Šuleř, and Jaromír Vrbka. "Comparison of neural networks and regression time series when predicting the export development from the USA to PRC." In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.017.

Abstract:
Purpose – artificial neural networks are compared with mixed conclusions in terms of forecasting performance. Most research indicates that deep-learning models are better than traditional statistical or mathematical models. The purpose of the article is to compare the accuracy of equalizing time series by means of regression analysis and neural networks on the example of USA exports to China. The aim is to show the possible uses and advantages of neural networks in practice. Research methodology – the period for which the data (USA export to the PRC) are available is the monthly balance from January 1985 to August 2018. First of all, linear regression, as a relatively simple mathematical method, is carried out. Subsequently, neural networks, as computational models used in artificial intelligence, are used for regression. Findings – in terms of linear regression, the most suitable curves appeared to be the one obtained by means of the least squares method with negative-exponential smoothing and the one obtained by means of the distance-weighted least squares method. In terms of neural networks, all retained structures appeared to be applicable in practice. Artificial neural networks have better representational power than traditional models. Research limitations – a quite significant simplification appears both in the case of linear regression and in regression by means of neural networks. We work only with two variables – an input variable (time) and an output variable (USA export to the PRC). Practical implications – in practice, the results, especially those of the artificial neural network method, can be used in the measurement and prediction of the development of exports, particularly in the short term. It can be stated that due to the great simplification of reality it isn't possible to predict extraordinary situations and their effect on the USA export to the PRC. Originality/Value – the article focuses on the comparison of two statistical methods; artificial intelligence is rarely used in such applications, although in many economic fields it has shown better results. It is found that artificial neural networks are able to effectively learn dependencies in and between time series in the form of export development data.
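In the spirit of the comparison, the sketch below fits a linear regression and a small feed-forward network to a synthetic monthly series standing in for the 1985-2018 export data. The model sizes and the data generating process are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
t = np.arange(404, dtype=float).reshape(-1, 1)   # months Jan 1985 - Aug 2018
exports = 50 + 0.5 * t.ravel() + 10 * np.sin(t.ravel() / 12) \
          + rng.normal(0, 5, 404)                # trend + cycle + noise

linear = LinearRegression().fit(t, exports)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
).fit(t, exports)

print("linear R^2:", round(linear.score(t, exports), 3))
print("MLP R^2:   ", round(mlp.score(t, exports), 3))  # captures the cycle too
```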
6

Karagiannidis, Pavlos, and Nikolaos Themelis. "Data-Driven Ship Propulsion Modeling with Artificial Neural Networks." In SNAME 7th International Symposium on Ship Operations, Management and Economics. SNAME, 2021. http://dx.doi.org/10.5957/some-2021-011.

Abstract:
The paper examines data-driven techniques for the modeling of ship propulsion that could support a strategy for the reduction of emissions and be utilized for the optimization of a fleet's operations. A large, high-frequency, automatically collected data set is exploited to produce models that estimate the required shaft power or the main engine's fuel consumption of a container ship sailing under arbitrary conditions. A variety of statistical calculations and algorithms for data processing are implemented, and state-of-the-art techniques for training and optimizing Feed-Forward Neural Networks (FNNs) are applied. Emphasis is given to the pre-processing of the data, and the results indicate that with a proper filtering and preparation stage it is possible to significantly increase the model's accuracy, and thus our prediction ability and our awareness of the actual condition of the ship's hull and propeller.
7

Liodorova, Julija, and Irina Voronova. "Z-score and P-score for bankruptcy fraud detection: a case of the construction sector in Latvia." In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.029.

Abstract:
To protect investment and ensure repayment of payables, recent studies have focused on identifying the relationships between company bankruptcy and internal fraud. The P-score model, which is based on the most popular Altman Z-score model, has been developed to indicate the manipulation of financial statements. The purpose of the study is to determine the accuracy and the feasibility of the P-score and Z-score models for detecting fraudulent bankruptcy under regional conditions, based on reports of Latvian construction companies that failed due to fraud and on verification of other known data. The research methodology is based on the background studies testing the P-score, applying this approach to Latvian conditions. The present study analyzes the behaviour of the two models in identifying distress and fraud. To verify the results of the study, the authors use financial analysis methods, comparison, and statistical and quantitative research methods. The findings have shown the possibility of using the P-score and Z-score technique for bankruptcy fraud detection at Latvian companies, based on construction sector samples. The accuracy of the method is above 80%. Research limitations – acquiring a large amount of data on companies that are undergoing analytical studies for the recognition of their insolvency and that show signs of fraud is not possible due to the confidentiality of the information. Practical implications – the results of the study may be applicable to company audits, investment reliability assessment, partnership evaluation and economic examination to detect fraud. Originality/Value – the study is the first practical test of the P-score model in Latvia and the Baltic countries on samples of small and medium-sized construction companies. The authors propose improving the coefficients of the P-score model taking into account the requirements for financial statements in Latvia.
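For reference, the Z-score side of the comparison is straightforward to compute. The sketch below implements Altman's original (1968) coefficients with invented balance-sheet numbers; the paper's P-score variant and its proposed coefficient adjustments are not reproduced here.

```python
def altman_z(working_capital, retained_earnings, ebit,
             equity_market_value, sales, total_assets, total_liabilities):
    """Original Altman (1968) Z-score for public manufacturing firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = equity_market_value / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Hypothetical construction-firm figures (thousands of EUR):
z = altman_z(120, 300, 150, 400, 900, 1000, 600)
print(z, "-> distress zone" if z < 1.81 else "-> grey or safe zone")
```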
8

Tonković Pražić, Ivana. "INFLUENCE OF PERSONAL VALUES ON CONSUMER CHOICE AND INTENTION TO BUY: A CASE OF CROATIAN AUTOMOBILE MARKET." In Fourth International Scientific Conference ITEMA Recent Advances in Information Technology, Tourism, Economics, Management and Agriculture. Association of Economists and Managers of the Balkans, Belgrade, Serbia, 2020. http://dx.doi.org/10.31410/itema.s.p.2020.117.

Abstract:
This paper aimed to identify factors and segments of car buyers based on their personal values and to analyze their relation to car buyers' choice and intention to buy. A survey involving 561 participants was conducted using the PVQ scale and additional questions about car-buying behavior. Upon collecting the data, statistical analysis was conducted that allowed the following value types to be successfully distinguished among car buyers: benevolence, universalism, self-direction, stimulation, hedonism, achievement, power, security, conformity, and tradition. Additionally, based on the abovementioned value types, different consumer segments were distinguished: "opened to change", "self-transcendent", "self-enhanced" and "conservative". Furthermore, the results show that segments of car buyers differ in their preferences for car models, i.e., they choose or intend to buy different car models. The conclusion presents the contribution of the paper, its limitations, and guidelines for future research.
9

Trusina, Inese, Elita Jermolajeva, and Biruta Sloka. "Analysis of energy resources’ flows as the sustainable development parameters." In 23rd International Scientific Conference. “Economic Science for Rural Development 2022”. Latvia University of Life Sciences and Technologies. Faculty of Economics and Social Development, 2022. http://dx.doi.org/10.22616/esrd.2022.56.025.

Abstract:
Global challenges require a transition from the existing linear economic model to models that consider nature as a life-support system for development on the way to social well-being, in the frame of the ecological economics paradigm. The article presents results on formalizing the monitoring of sustainable development using the concept of energy flows in open non-equilibrium stable socio-economic complex systems, in the frame of the ecological economics approach. The authors calculated and used a new system of universal parameters of sustainable development: total consumption of energy resources, total production, power losses and impact on the environment, and technological excellence. The level of human life was defined as a function of total production, environmental change, population change and the level of technological efficiency. In the context of the considered approach, the universal parameters were calculated using Eurostat data for the period from 1990 to 2019 and statistical analysis methods. The main results: definition of the type and structure of the final consumption models (theoretical trends) on energy resource time series; identification of stationary and non-stationary components in the time series; calculation and primary interpretation of the system of basic parameters of sustainable development for the European countries Latvia, Lithuania, Estonia, Slovakia and Bulgaria. The countries studied were selected according to the following parameters: a population of no more than 10 million and EU membership since 2004. The results of the research indicate that the analysed countries face several challenges with a number of similarities, and that there are possibilities to share and use the experience of the energy resources flows approach for data analysis.
10

Ilić-Kosanović, Tatjana, and Damir Ilić. "ONLINE CLASSES’ EFFECTS DURING COVID 19 LOCKDOWN - TEACHERS’ VS. STUDENTS’ PERSPECTIVE, CASE OF THE SCHOOL OF ENGINEERING MANAGEMENT." In Sixth International Scientific-Business Conference LIMEN Leadership, Innovation, Management and Economics: Integrated Politics of Research. Association of Economists and Managers of the Balkans, Belgrade, Serbia, 2020. http://dx.doi.org/10.31410/limen.s.p.2020.101.

Abstract:
In the second decade of the 21st century, there is an ongoing discussion on the value of online classes in higher education, as the implementation of new technologies in higher education processes is on the rise. The main questions that are emerging concern the level of interaction, the quality of knowledge transfer, and the development of critical thinking. Some previous research concluded that online models of higher education teaching add more value than traditional methods, while other research has shown the shortcomings of online higher education programs. The pandemic of Covid-19, the disease caused by the coronavirus SARS-CoV-2, has forced most higher education institutions in Europe to transfer almost the entire educational process to online platforms. In this paper, the satisfaction of teachers and students with the effectiveness of online classes regarding teacher-student communication, knowledge transfer, and the development of critical thinking is researched through a short survey and interviews, in the case of the School of Engineering Management in Belgrade, Serbia. Statistical analysis has shown that there is a statistically significant difference between students' and teachers' satisfaction. Furthermore, the short interviews show that the students are more receptive to knowledge transfer, teacher-student communication, and the development of critical thinking through online classes than the professors. As the sample is small, further empirical research on a wider sample is needed in order to reach more compelling conclusions.

Reports on the topic "Economics – Statistical models"

1

Соловйов, Володимир Миколайович, and D. N. Chabanenko. Financial crisis phenomena: analysis, simulation and prediction. Econophysic’s approach. Гумбольдт-Клуб Україна, November 2009. http://dx.doi.org/10.31812/0564/1138.

Abstract:
With the beginning of the global financial crisis, which attracted the attention of the international community, the inability of existing methods to predict such events became obvious. The creation, testing and adaptation of models for concrete financial market segments, for the purpose of monitoring, early prediction, prevention and notification of financial crises, is gaining currency nowadays. Econophysics is an interdisciplinary research field applying theories and methods originally developed by physicists in order to solve problems in economics, usually those involving uncertainty, stochastic processes and nonlinear dynamics. Its application to the study of financial markets has also been termed statistical finance, referring to its roots in statistical physics. A new paradigm of relativistic quantum econophysics is proposed.
2

Hlushak, Oksana M., Svetlana O. Semenyaka, Volodymyr V. Proshkin, Stanislav V. Sapozhnykov, and Oksana S. Lytvyn. The usage of digital technologies in the university training of future bachelors (having been based on the data of mathematical subjects). [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3860.

Abstract:
This article demonstrates that mathematics in the system of higher education has outgrown the status of a general education subject and should become an integral part of the professional training of future bachelors, including economists, on the basis of intersubject connections with special subjects. The importance of improving the scientific and methodological support of students' mathematical training by means of digital technologies is revealed. It is specified that, in order to provide qualified training for students learning econometrics and economic-mathematical modeling, digital technologies should be used in two directions: for the organization of an electronic educational space, and in the process of solving applied problems at the junction of economics and mathematics. The advantages of using e-learning courses in the educational process are presented (providing individualization of the educational process in accordance with the needs, characteristics and capabilities of students; improving the quality and efficiency of the educational process; ensuring systematic monitoring of educational quality). Unified structures for the e-learning courses "Econometrics" and "Economic and mathematical modeling" were built on the Moodle platform. The article presents the results of a pedagogical experiment on students' attitudes to the use of e-learning courses (ELC) in the educational process of Borys Grinchenko Kyiv University and Alfred Nobel University (Dnipro city). We found that the following metrics need improvement: availability of time-appropriate mathematical materials; individual approach in training; students' self-expression and the development of their creativity in the e-learning process. The possibilities of digital technologies for the construction and study of econometric models are brought to light (based on the problem of the dependence of the employment level of the Ukrainian population). Various stages of building and testing an econometric model are characterized: identification of variables, specification of the model, parameterization and verification of the statistical significance of the obtained results.
3

Соловйов, В. М., В. В. Соловйова, and Д. М. Чабаненко. Динаміка параметрів α-стійкого процесу Леві для розподілів прибутковостей фінансових часових рядів [Dynamics of the parameters of the α-stable Lévy process for the return distributions of financial time series]. ФО-П Ткачук О. В., 2014. http://dx.doi.org/10.31812/0564/1336.

Abstract:
A modern market economy in any country cannot function successfully without an effective financial market. In conditions of a growing financial market, it is necessary to use modern risk-management methods which take non-Gaussian distributions into consideration. It is known that the return distributions of financial and economic time series demonstrate so-called «heavy tails», which prevents the modeling of these processes with classical statistical methods. One of the models able to describe processes with «heavy tails» is the α-stable Lévy process. It can adequately simulate the dynamics of asset prices, because it consists of two components: a Brownian motion component and a jump component. In the current work, a model parameter estimation procedure is proposed which is based on characteristic functions and is applied over a moving window for the purpose of monitoring the financial-economic system's state.
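A moving-window estimation loop in the spirit of this abstract can be sketched with scipy's levy_stable distribution, although scipy's generic maximum likelihood fit is slow; the report uses a characteristic-function estimator instead. The data below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
returns = stats.levy_stable.rvs(1.7, 0.0, size=2000, random_state=rng)

window, step = 500, 500
for start in range(0, len(returns) - window + 1, step):
    # Slow generic MLE; a characteristic-function method is much faster.
    alpha, beta, loc, scale = stats.levy_stable.fit(returns[start:start + window])
    print(f"t={start:4d}  alpha={alpha:.2f}  beta={beta:+.2f}")
```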
4

Kim, Changmo, Ghazan Khan, Brent Nguyen, and Emily L. Hoang. Development of a Statistical Model to Predict Materials’ Unit Prices for Future Maintenance and Rehabilitation in Highway Life Cycle Cost Analysis. Mineta Transportation Institute, December 2020. http://dx.doi.org/10.31979/mti.2020.1806.

Abstract:
The main objectives of this study are to investigate the trends in primary pavement materials’ unit price over time and to develop statistical models and guidelines for using predictive unit prices of pavement materials instead of uniform unit prices in life cycle cost analysis (LCCA) for future maintenance and rehabilitation (M&R) projects. Various socio-economic data were collected for the past 20 years (1997–2018) in California, including oil price, population, government expenditure in transportation, vehicle registration, and other key variables, in order to identify factors affecting pavement materials’ unit price. Additionally, the unit price records of the popular pavement materials were categorized by project size (small, medium, large, and extra-large). The critical variables were chosen after identifying their correlations, and the future values of each variable were predicted through time-series analysis. Multiple regression models using selected socio-economic variables were developed to predict the future values of pavement materials’ unit price. A case study was used to compare the results between the uniform unit prices in the current LCCA procedures and the unit prices predicted in this study. In LCCA, long-term prediction involves uncertainties due to unexpected economic trends and industrial demand and supply conditions. Economic recessions and a global pandemic are examples of unexpected events which can have a significant influence on variations in material unit prices and project costs. Nevertheless, the data-driven scientific approach as described in this research reduces risk caused by such uncertainties and enables reasonable predictions for the future. The statistical models developed to predict the future unit prices of the pavement materials through this research can be implemented to enhance the current LCCA procedure and predict more realistic unit prices and project costs for the future M&R activities, thus promoting the most cost-effective alternative in LCCA.
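The core of the approach, a multiple regression of a material's unit price on socio-economic predictors, can be sketched as follows. The variable names, data, and coefficients are invented placeholders, not the report's California dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 22  # annual observations (hypothetical)
df = pd.DataFrame({
    "oil_price": 40 + rng.normal(0, 10, n).cumsum(),
    "population_m": np.linspace(32, 40, n),
    "transport_expenditure": 10 + rng.normal(0, 1, n).cumsum(),
})
df["hma_unit_price"] = (20 + 0.8 * df["oil_price"]
                        + 2.0 * df["population_m"] + rng.normal(0, 5, n))

X = sm.add_constant(df[["oil_price", "population_m", "transport_expenditure"]])
fit = sm.OLS(df["hma_unit_price"], X).fit()
print(fit.summary().tables[1])   # coefficients used to predict future prices
```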
5

Khvostina, Inesa, Serhiy Semerikov, Oleh Yatsiuk, Nadiia Daliak, Olha Romanko, and Ekaterina Shmeltser. Casual analysis of financial and operational risks of oil and gas companies in condition of emergent economy. [б. в.], October 2020. http://dx.doi.org/10.31812/123456789/4120.

Abstract:
The need to control the risk that accompanies businesses in their day-to-day operations, together with changing economic conditions, makes risk management an almost indispensable element of economic life. The selection of the main aspects of the chosen phases of the risk management process - risk identification and risk assessment - is related to their direct relationship with the subject matter (identification of the risk to be managed; risk analysis leading to the establishment of a risk hierarchy and, consequently, the definition of risk control methods) and its purpose (bringing the risk to an acceptable level). It is impossible to identify the basic patterns of development of the oil and gas industry without exploring the relationship between economic processes and enterprise risks. The latter are subject to simulation, and based on models it is possible to determine with a certain probability whether there have been qualitative and quantitative changes in the processes, in their mutual influence on each other, etc. The work is devoted to exploring the possibilities of applying the Granger test to examine the causal relationship between the risks and obligations of oil and gas companies. The analysis is based on statistical tests and the use of linear regression models.
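The Granger test used in the report is available in statsmodels. The sketch below builds a two-variable example in which x leads y by construction and tests whether x Granger-causes y; the data are simulated, not company risk series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
x = rng.normal(size=300)
y = 0.7 * np.roll(x, 1) + rng.normal(0, 0.5, 300)   # y follows x with lag 1

data = pd.DataFrame({"y": y[1:], "x": x[1:]})
# H0 at each lag: "x does not Granger-cause y"; expect rejection at lag 1.
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```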
6

Araujo, María Caridad, Marta Rubio-Codina, and Norbert Schady. 70 to 700 to 70,000: Lessons from the Jamaica Experiment. Inter-American Development Bank, April 2021. http://dx.doi.org/10.18235/0003210.

Abstract:
This document compares three versions of the same home visiting model, the well-known Jamaica model, which was gradually scaled up from an efficacy trial (proof of concept) in Jamaica, to a pilot in Colombia, to an at-scale program in Peru. It first describes the design, implementation and impacts of these three programs. Then, it analyzes the threats to scalability in each of these experiences and discusses how they could have affected program outcomes, with a focus on three of the elements of the economic model of scaling in Al-Ubaydli et al. (forthcoming): appropriate statistical inference, properties of the population, and properties of the situation. The document reflects on the lessons learned for mitigating the threats to scalability and on how research and evaluation can be better aligned to facilitate and support the scaling-up process of early child development interventions. It points out the attributes that interventions must maintain to ensure effectiveness at scale. Similarly, political support is also identified as indispensable.
7

Soloviev, V., and V. Solovieva. Quantum econophysics of cryptocurrencies crises. [б. в.], 2018. http://dx.doi.org/10.31812/0564/2464.

Abstract:
From the positions attained by modern theoretical physics in understanding the foundations of the universe, a methodological and philosophical analysis of fundamental physical concepts and their formal and informal connections with real economic measurement is carried out. Procedures for the determination of heterogeneous economic time, normalized economic coordinates and economic mass are offered based on the analysis of time series, and the concept of an economic Planck's constant is proposed. The theory has been tested on real economic time series related to the cryptocurrency market; the achieved results are open for discussion. Then, combining the empirical cross-correlation matrix with random matrix theory, we examine the statistical properties of the cross-correlation coefficients, the evolution of the average correlation coefficient, and the distribution of eigenvalues and corresponding eigenvectors of the global cryptocurrency market, using the daily returns of 15 cryptocurrency price time series across the world from 2016 to 2018. The results indicate that the largest eigenvalue reflects a collective effect of the whole market, practically coincides with the dynamics of the mean value of the correlation coefficient, and is very sensitive to crisis phenomena. It is shown that both the introduced economic mass and the largest eigenvalue of the correlation matrix can serve as quantum indicator-predictors of crises in the cryptocurrency market.
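The random-matrix comparison described here checks the eigenvalues of the empirical correlation matrix against the Marchenko-Pastur upper edge for purely random data. A minimal sketch with placeholder returns:

```python
import numpy as np

rng = np.random.default_rng(9)
N, T = 15, 730                         # 15 coins, ~2 years of daily returns
returns = rng.normal(size=(T, N))      # placeholder for real return data

C = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(C)        # sorted ascending
q = N / T
lambda_max = (1 + np.sqrt(q)) ** 2     # Marchenko-Pastur upper edge

print("largest eigenvalue: ", eigvals[-1])
print("random-matrix bound:", lambda_max)
# Eigenvalues above the bound signal genuine collective market modes.
```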
8

López-Piñeros, Martha Rosalba, Norberto Rodríguez-Niño, and Miguel Sarmiento. Política monetaria y flujos de portafolio en una economía de mercado emergente [Monetary policy and portfolio flows in an emerging market economy]. Banco de la República de Colombia, May 2022. http://dx.doi.org/10.32468/be.1200.

Abstract:
Portfolio flows are an important source of funding for both private and public agents in emerging market economies. In this paper, we study the influence of changes in domestic and US monetary policy rates on portfolio inflows in an emerging market economy and discriminate among fixed income instruments (government securities and other corporate bonds) and variable income instruments (shares). We employ monthly data on portfolio inflows of non-residents in Colombia during the period 2011-2020 and identify the monetary policy shocks using a SVAR model with long-run restrictions. We find a positive and statistically significant response of portfolio inflows in government securities and corporate bonds to changes in both domestic and US monetary policy rates. Portfolio inflows in the stock market react more to changes in the inflation rate and do not react to changes in monetary policy rates. Our findings are consistent with the predictions of the interest rate channel and reestablish the predominant role of inflation rate in driving portfolio inflows. The results suggest that domestic and US monetary policy actions have an important effect on the behavior of portfolio inflows in emerging economies.
APA, Harvard, Vancouver, ISO, and other styles
9

Kingston, A. W., A. Mort, C. Deblonde, and O. H. Ardakani. Hydrogen sulfide (H2S) distribution in the Triassic Montney Formation of the Western Canadian Sedimentary Basin. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329797.

Full text
Abstract:
The Montney Formation is a highly productive hydrocarbon reservoir with significant reserves of hydrocarbon gases and liquids, making it of great economic importance to Canada. However, high concentrations of hydrogen sulfide (H2S), a highly toxic and corrosive gas, have been encountered during exploration and development, with detrimental effects on the environment, health, and the economics of production. It is therefore essential to understand the distribution of H2S within the basin, so that areas with a high risk of encountering elevated H2S concentrations can be identified and potential negative impacts mitigated. Gas composition data from Montney wells are routinely collected by operators for submission to provincial regulators and are publicly available. We combined data from Alberta (AB) and British Columbia (BC) to create a basin-wide database of Montney H2S concentrations and then used an iterative quality control and quality assurance process to produce a dataset that best represents gas composition in reservoir fluids. This included: 1) designating the gas source formation based on directional surveys, using a newly developed basin-wide 3D model that incorporates the AGS's Montney model of Alberta with a model of BC, which removes errors associated with reported formations; 2) removing injection and disposal wells; 3) assessing the wells with the 50 highest H2S concentrations to determine whether the gas composition data are accurate and reflective of reservoir fluid chemistry; and 4) evaluating spatially isolated extreme values to ensure data accuracy and to prevent isolated highs from distorting the interpolation. The resulting dataset was then used to calculate statistics for each x, y location as input to the interpolation process. Three interpolations were constructed, one for each associated phase classification: H2S in gas, H2S in liquid (C7+), and aqueous H2S. We used Empirical Bayesian Kriging interpolation to generate H2S distribution maps along with a series of model uncertainty maps. These interpolations illustrate that H2S is heterogeneously distributed across the Montney basin. In general, higher concentrations are found in AB than in BC, with the highest concentrations in the Grande Prairie region along with several other isolated regions in the southeastern portion of the basin. The interpolations of H2S associated with the different phases show broad similarities. Future mapping research will focus on subdividing intra-Montney sub-members, plus under- and overlying strata, to further our understanding of the role migration plays in H2S distribution within the Montney basin.
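
Empirical Bayesian Kriging is an ArcGIS-specific method; as a rough open-source stand-in for the interpolation step described above, the sketch below uses ordinary kriging from pykrige on hypothetical well locations and H2S values.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical well locations and H2S concentrations (the real inputs would be
# the per-location statistics from the quality-controlled Montney dataset).
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 200)         # easting (km)
y = rng.uniform(0, 100, 200)         # northing (km)
h2s = rng.lognormal(0.0, 1.0, 200)   # H2S concentration per well (e.g. mol %)

# Krige the log-transformed values, since concentrations are strongly right-skewed.
ok = OrdinaryKriging(x, y, np.log(h2s), variogram_model="spherical")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
zhat, ss = ok.execute("grid", gridx, gridy)  # predictions and kriging variance

h2s_map = np.exp(zhat)  # back-transformed concentration surface; ss maps uncertainty
```
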
APA, Harvard, Vancouver, ISO, and other styles
10

Weller, Joel I., Derek M. Bickhart, Micha Ron, Eyal Seroussi, George Liu, and George R. Wiggans. Determination of actual polymorphisms responsible for economic trait variation in dairy cattle. United States Department of Agriculture, January 2015. http://dx.doi.org/10.32747/2015.7600017.bard.

Full text
Abstract:
The project’s general objectives were to determine the specific polymorphisms at the DNA level responsible for observed quantitative trait loci (QTLs) and to estimate their effects, frequencies, and selection potential in the Holstein dairy cattle breed. The specific objectives were to (1) localize the causative polymorphisms to small chromosomal segments based on analysis of 52 U.S. Holstein bulls, each with at least 100 sons with high-reliability genetic evaluations, using the a posteriori granddaughter design; (2) sequence the complete genomes of at least 40 of those bulls to 20× coverage; (3) determine causative polymorphisms based on concordance between the bulls’ genotypes for specific polymorphisms and their status for a QTL; (4) validate putative quantitative trait variants by genotyping a sample of Israeli Holstein cows; (5) perform gene expression analysis using statistical methodologies, including determination of signatures of selection, based on somatic cells of cows that are homozygous for contrasting quantitative trait variants; and (6) analyze genes with putative quantitative trait variants using data mining techniques. Current methods for genomic evaluation are based on population-wide linkage disequilibrium between markers and actual alleles that affect traits of interest. Those methods have approximately doubled the rate of genetic gain for most traits in the U.S. Holstein population. With determination of causative polymorphisms, it should be possible to increase the accuracy of genomic evaluations by including those genotypes as fixed effects in the analysis models. Determination of causative polymorphisms should also yield useful information on gene function and the genetic architecture of complex traits. Concordance between QTL genotype, as determined by the a posteriori granddaughter design, and marker genotype was determined for 30 trait-by-chromosomal-segment effects segregating in the U.S. Holstein population; a probability of <10⁻²⁰ was used to reject the null hypothesis that no segregating gene within the chromosomal segment was affecting the trait. Genotypes for 83 grandsires and 17,217 sons were determined, by either complete sequencing or imputation, for 3,148,506 polymorphisms across the entire genome. Variant sites were identified from previous studies (such as the 1000 Bull Genomes Project) and from DNA sequencing of bulls unique to this project, one of the largest marker variant surveys conducted for the Holstein breed of cattle. Effects for stature on chromosome 11, daughter pregnancy rate on chromosome 18, and protein percentage on chromosome 20 met three criteria: (1) complete or nearly complete concordance; (2) nominal significance of the polymorphism effect after correction for all other polymorphisms; and (3) a marker coefficient of determination >40% of the total multiple-regression coefficient of determination for the 30 polymorphisms with the highest concordance. The missense polymorphism Phe279Tyr in GHR at 31,909,478 base pairs on chromosome 20 was confirmed as the causative mutation for fat and protein concentration. For the effect on fat percentage, 12 additional missense polymorphisms on chromosome 14 were found that had nearly complete concordance with the suggested causative polymorphism (the missense mutation Ala232Glu in DGAT1). The markers used in routine U.S. genomic evaluations were increased from 60,000 to 80,000 by adding markers for known QTLs and markers detected in BARD and other research projects.
Objectives 1 and 2 were completely accomplished, and objective 3 was partially accomplished. Because no new clear-cut causative polymorphisms were discovered, objectives 4 through 6 were not completed.
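
As a schematic of the concordance screen this abstract describes (hypothetical data layout, not the project's pipeline), the sketch below ranks candidate variants by how often a bull's marker heterozygosity agrees with its QTL status from the a posteriori granddaughter design.

```python
import pandas as pd

def concordance(qtl_het: pd.Series, genotypes: pd.DataFrame) -> pd.Series:
    """qtl_het: bull -> True if heterozygous for the QTL (granddaughter design).
    genotypes: bulls x variants, coded 0/1/2 as the count of the alternate allele."""
    marker_het = genotypes == 1                  # marker heterozygosity per bull/variant
    agree = marker_het.eq(qtl_het, axis=0)       # agreement with QTL status, bull by bull
    return agree.mean().sort_values(ascending=False)  # concordance rate per variant
```
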
APA, Harvard, Vancouver, ISO, and other styles