Journal articles on the topic 'Marginal structure model'

Consult the top 50 journal articles for your research on the topic 'Marginal structure model.'


1

Li, Zun, Yuanpei Zhu, and Yuping Wang. "A Criminisi-DnCNN Model-Based Image Inpainting Method." Mathematical Problems in Engineering 2022 (August 2, 2022): 1–8. http://dx.doi.org/10.1155/2022/9780668.

Abstract:
Existing image inpainting methods produce unsatisfactory results when dealing with concentrated inpainting areas. For this reason, this study proposes a Criminisi-DnCNN model-based image inpainting method. Inspired by manual inpainting techniques, the pointwise mutual information (PMI) algorithm is adopted to obtain the marginal structure map of the image to be repaired. The Criminisi algorithm is then used to restore the marginal structure, guided by the superficial linear structure, yielding a complete marginal structure image. Finally, the texture inpainting problem is converted into an image denoising problem through separation of variables, using the denoising convolutional neural network (DnCNN) image denoiser. Compared with existing inpainting methods, the proposed model improves the clarity of the marginal structure and reduces blurring in the repaired area.
2

McNeish, Daniel, and Jeffrey R. Harring. "Improving convergence in growth mixture models without covariance structure constraints." Statistical Methods in Medical Research 30, no. 4 (January 12, 2021): 994–1012. http://dx.doi.org/10.1177/0962280220981747.

Abstract:
Growth mixture models are a popular method to uncover heterogeneity in growth trajectories. Harnessing the power of growth mixture models in applications is difficult given the prevalence of nonconvergence when fitting growth mixture models to empirical data. Growth mixture models are rooted in the random effect tradition, and nonconvergence often leads researchers to modify their intended model with constraints in the random effect covariance structure to facilitate estimation. While practical, doing so has been shown to adversely affect parameter estimates, class assignment, and class enumeration. Instead, we advocate specifying the models with a marginal approach to prevent the widespread practice of sacrificing class-specific covariance structures to appease nonconvergence. A simulation is provided to show the importance of modeling class-specific covariance structures and builds off existing literature showing that applying constraints to the covariance leads to poor performance. These results suggest that retaining class-specific covariance structures should be a top priority and that marginal models like covariance pattern growth mixture models that model the covariance structure without random effects are well-suited for such a purpose, particularly with modest sample sizes and attrition commonly found in applications. An application to PTSD data with such characteristics is provided to demonstrate (a) convergence difficulties with random effect models, (b) how covariance structure constraints improve convergence but to the detriment of performance, and (c) how covariance pattern growth mixture models may provide a path forward that improves convergence without forfeiting class-specific covariance structures.
3

Ando, Shuji. "A Bivariate Index for Visually Measuring Marginal Inhomogeneity in Square Tables." International Journal of Statistics and Probability 8, no. 5 (August 15, 2019): 58. http://dx.doi.org/10.5539/ijsp.v8n5p58.

Abstract:
For square contingency tables, the marginal homogeneity model, under which the row marginal distribution equals the column marginal distribution, was proposed. Various extended models of marginal homogeneity have since been proposed; these models can be classified into two types of marginal inhomogeneity. In parallel, various indexes have been proposed to measure the degree of deviation from marginal homogeneity. However, these indexes cannot concurrently quantify the degrees of deviation from marginal homogeneity with respect to the two types of marginal inhomogeneity. This paper proposes a bivariate index that can do so. The proposed bivariate index should also be useful for visually comparing degrees of deviation from marginal homogeneity across several tables using confidence regions.
4

Kertel, Maximilian, and Markus Pauly. "Estimating Gaussian Copulas with Missing Data with and without Expert Knowledge." Entropy 24, no. 12 (December 19, 2022): 1849. http://dx.doi.org/10.3390/e24121849.

Abstract:
In this work, we present a rigorous application of the Expectation Maximization algorithm to determine the marginal distributions and the dependence structure in a Gaussian copula model with missing data. We further show how to circumvent a priori assumptions on the marginals with semiparametric modeling, and outline how expert knowledge on the marginals and the dependence structure can be included. A simulation study shows that the distribution learned through this algorithm is closer to the true distribution than that obtained with existing methods, and that the incorporation of domain knowledge provides benefits.
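The rank-based (semiparametric) step the abstract refers to can be sketched for the complete-data case; the paper's EM algorithm for missing entries is not reproduced here. A minimal illustration, assuming only numpy and scipy:

```python
import numpy as np
from scipy import stats

def gaussian_copula_corr(data):
    """Estimate the Gaussian-copula correlation matrix from complete data
    using rank-based (semiparametric) pseudo-observations."""
    n, _ = data.shape
    # Empirical CDF rescaled to (0, 1) to avoid infinite normal scores
    u = stats.rankdata(data, axis=0) / (n + 1)
    z = stats.norm.ppf(u)  # normal scores
    return np.corrcoef(z, rowvar=False)

rng = np.random.default_rng(0)
# Dependent data with a non-Gaussian marginal: exponentiate one coordinate;
# ranks are invariant under monotone transforms, so dependence is preserved
latent = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
sample = np.column_stack([latent[:, 0], np.exp(latent[:, 1])])
R = gaussian_copula_corr(sample)
print(R)  # off-diagonal close to 0.7
```

Because only ranks enter the estimate, no parametric assumption on the marginals is needed, which is the point of the semiparametric modeling the authors describe.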
5

Boháčik, Ján. "Financial shocks and their effects on velocity of money in agent-based model." Review of Economic Perspectives 22, no. 4 (December 1, 2022): 241–66. http://dx.doi.org/10.2478/revecp-2022-0011.

Abstract:
The interaction of debt and economic performance has received growing attention in recent years; however, models that account for debt are still outnumbered by models that ignore it entirely. This paper is the first to analyze the relationship between household debt (in the form of bank loans) and economic performance (in terms of aggregate income) considering both the distribution of wealth and income and the distribution of the marginal propensity to consume (MPC) under various financial shocks. The outcomes of the model are velocities, calculated as ratios of aggregate income to aggregate debt. The paper demonstrates how financial shocks affect the income velocity of money under different distributions of wealth/income and of the marginal propensity to consume across the population. For this purpose, an original agent-based simulation model with a limited loan supply was designed. The modeled shocks are shocks to loan demand, loan supply, the marginal propensity to consume, macro-prudential regulatory ratios, real estate capital gains, repayment ratios, the structure of loans provided, and the structure of real estate property transactions. It is shown that the more equal the distributions of wealth/income and of the marginal propensity to consume, the higher the income velocity of money. Among the financial shocks, the shock to the marginal propensity to consume and the shock to the structure of new real estate property purchases have the largest impact on velocity, while the shock to regulatory ratios generally has the lowest magnitude.
6

Yao, P. "Causal inference: Cognitive functioning and depressive symptoms by longitudinal marginal structure model." Value in Health 17, no. 3 (May 2014): A183—A184. http://dx.doi.org/10.1016/j.jval.2014.03.1072.

7

Wiese, Richard. "A Two-Level Approach to Morphological Structure." Journal of Germanic Linguistics 20, no. 3 (September 2008): 243–74. http://dx.doi.org/10.1017/s147054270800010x.

Abstract:
In morphological theory, various models have been developed with respect to the appropriate levels of abstraction for stating morphological generalizations. This paper addresses a class of seemingly marginal and/or problematic phenomena in morphology and proposes that morphological descriptions regularly refer to two distinct levels of description. One is the level of “morphosyntax,” and one is the level of “morphophonology.” Furthermore, morphology is considered to be marginal if and only if the degree of isomorphy between representations on these two levels is reduced. This basic proposal is illustrated and tested with several central phenomena of morphology found in German: synthetic compounds, conversion, empty morphs, and truncation. The analysis proposed here argues for the necessity of a two-level model of morphology as an approach in which both abstract morphosyntax as well as more concrete morphophonology have a place.
8

Niu, Yi, Xiaoguang Wang, Hui Cao, and Yingwei Peng. "Variable selection via penalized generalized estimating equations for a marginal survival model." Statistical Methods in Medical Research 29, no. 9 (January 29, 2020): 2493–506. http://dx.doi.org/10.1177/0962280220901728.

Abstract:
Clustered and multivariate survival times, such as times to recurrent events, commonly arise in biomedical and health research, and marginal survival models are often used to model such data. When a large number of predictors are available, variable selection is always an important issue when modeling such data with a survival model. We consider a Cox’s proportional hazards model for a marginal survival model. Under the sparsity assumption, we propose a penalized generalized estimating equation approach to select important variables and to estimate regression coefficients simultaneously in the marginal model. The proposed method explicitly models the correlation structure within clusters or correlated variables by using a prespecified working correlation matrix. The asymptotic properties of the estimators from the penalized generalized estimating equations are established and the number of candidate covariates is allowed to increase in the same order as the number of clusters does. We evaluate the performance of the proposed method through a simulation study and analyze two real datasets for the application.
9

Koutoumanou, Eirini, Angie Wade, and Mario Cortina-Borja. "Local dependence in bivariate copulae with Beta marginals." Revista Colombiana de Estadística 40, no. 2 (July 1, 2017): 281–96. http://dx.doi.org/10.15446/rce.v40n2.59404.

Abstract:
The local dependence function (LDF) describes changes in the correlation structure of continuous bivariate random variables along their range. Bivariate density functions with Beta marginals can be used to model jointly a wide variety of data with bounded outcomes in the (0,1) range, e.g. proportions. In this paper we obtain expressions for the LDF of bivariate densities constructed using three different copula models (Frank, Gumbel and Joe) with Beta marginal distributions, present examples for each, and discuss an application of these models to analyse data collected in a study of marks obtained on a statistics exam by postgraduate students.
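As a sketch of the kind of construction studied here, a Frank copula with Beta marginals can be sampled by conditional inversion; the copula parameter and Beta shapes below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def sample_frank_beta(n, theta, a1, b1, a2, b2, seed=0):
    """Draw n pairs from a Frank copula (parameter theta != 0) with
    Beta(a1, b1) and Beta(a2, b2) marginals, via conditional inversion."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    p = rng.uniform(size=n)
    # Invert the Frank conditional copula C(v | u) = p in closed form
    num = p * np.expm1(-theta)
    den = np.exp(-theta * u) - p * np.expm1(-theta * u)
    v = -np.log1p(num / den) / theta
    x = stats.beta.ppf(u, a1, b1)  # Beta marginal, first coordinate
    y = stats.beta.ppf(v, a2, b2)  # Beta marginal, second coordinate
    return x, y

x, y = sample_frank_beta(5000, theta=5.0, a1=2, b1=5, a2=3, b2=3)
tau, _ = stats.kendalltau(x, y)
print(tau)  # positive dependence for theta > 0
```

The same recipe works for the Gumbel and Joe families mentioned in the abstract, though their conditional copulas generally require numerical rather than closed-form inversion.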
10

SMIRNOV, FEODOR A. "A NEW SET OF EXACT FORM FACTORS." International Journal of Modern Physics A 09, no. 29 (November 20, 1994): 5121–43. http://dx.doi.org/10.1142/s0217751x94002077.

Abstract:
We present form factors for a wide range of integrable models, which include marginal perturbations of the SU(2) WZNW model for arbitrary central charge and the principal chiral field model. The interesting structure of these form factors is discussed.
11

Chen, Xianbao. "Basin Marginal Sequence Structure Model Study in the Qikou Sag of Dagang Oilfield." Advances in Geosciences 04, no. 06 (2014): 383–88. http://dx.doi.org/10.12677/ag.2014.46045.

12

Enis, Charles R., and Leroy F. Christ. "Implications of Phase-Outs on Individual Marginal Tax Rates." Journal of the American Taxation Association 21, no. 1 (March 1, 1999): 45–72. http://dx.doi.org/10.2308/jata.1999.21.1.45.

Abstract:
The goal of this paper is to show the behavior of effective marginal tax rates relative to statutory marginal tax rates within the rate structure of the present federal income tax regime. Understanding the behavior of effective marginal rates is important as these rates are a significant component of tax planning and decision making. Statutory marginal tax rates are explicitly stated in published rate schedules. Various deductions, exemptions and credits involved in determining the tax liability are phased out as gross income increases. These restrictions result in effective marginal tax rates that can exceed respective statutory rates. Substantial divergence between effective and statutory rates can occur when multiple phase-out provisions overlap and interact. This study develops an algebraic model using tax return information that converts statutory to effective marginal rates. Policy implications concerning simplifying the effective rate structure are also discussed.
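The mechanism the authors describe, phase-outs pushing effective marginal rates above statutory rates, can be illustrated with a toy schedule (all rates and thresholds below are hypothetical, not from the paper):

```python
def tax_liability(income, statutory_rate=0.25, base_deduction=10_000.0,
                  phaseout_start=100_000.0, phaseout_rate=0.05):
    """Tax under a single statutory rate with a deduction that phases out
    at `phaseout_rate` per dollar of income above `phaseout_start`."""
    excess = max(income - phaseout_start, 0.0)
    deduction = max(base_deduction - phaseout_rate * excess, 0.0)
    return statutory_rate * max(income - deduction, 0.0)

def effective_marginal_rate(income, delta=1.0):
    # Finite-difference approximation: extra tax per extra dollar earned
    return (tax_liability(income + delta) - tax_liability(income)) / delta

print(effective_marginal_rate(50_000))   # statutory 25% below the phase-out
print(effective_marginal_rate(150_000))  # 26.25%: phase-out raises the rate
```

Inside the phase-out range, each extra dollar both is taxed and shrinks the deduction by 5 cents, so the effective rate is 25% x 1.05 = 26.25%; overlapping phase-outs compound this effect, as the paper's algebraic model formalizes.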
13

Gianinetti, Alberto. "Basic Features of the Analysis of Germination Data with Generalized Linear Mixed Models." Data 5, no. 1 (January 8, 2020): 6. http://dx.doi.org/10.3390/data5010006.

Abstract:
Germination data are discrete and binomial. Although analysis of variance (ANOVA) has long been used for the statistical analysis of these data, generalized linear mixed models (GzLMMs) provide a more consistent theoretical framework. GzLMMs are suitable for final germination percentages (FGP) as well as longitudinal studies of germination time-courses. Germination indices (i.e., single-value parameters summarizing the results of a germination assay by combining the level and rapidity of germination) and other data with a Gaussian error distribution can be analyzed too. There are, however, different kinds of GzLMMs: Conditional (i.e., random effects are modeled as deviations from the general intercept with a specific covariance structure), marginal (i.e., random effects are modeled solely as a variance/covariance structure of the error terms), and quasi-marginal (some random effects are modeled as deviations from the intercept and some are modeled as a covariance structure of the error terms) models can be applied to the same data. 
It is shown that: (a) For germination data, conditional, marginal, and quasi-marginal GzLMMs tend to converge to a similar inference; (b) conditional models are the first choice for FGP; (c) marginal or quasi-marginal models are more suited for longitudinal studies, although conditional models lead to a congruent inference; (d) in general, common random factors are better dealt with as random intercepts, whereas serial correlation is easier to model in terms of the covariance structure of the error terms; (e) germination indices are not binomial and can be easier to analyze with a marginal model; (f) in boundary conditions (when some means approach 0% or 100%), conditional models with an integral approximation of true likelihood are more appropriate; in non-boundary conditions, (g) germination data can be fitted with default pseudo-likelihood estimation techniques, on the basis of the SAS-based code templates provided here; (h) GzLMMs are remarkably good for the analysis of germination data except if some means are 0% or 100%. In this case, alternative statistical approaches may be used, such as survival analysis or linear mixed models (LMMs) with transformed data, unless an ad hoc data adjustment in estimates of limit means is considered, either experimentally or computationally. This review is intended as a basic tutorial for the application of GzLMMs, and is, therefore, of interest primarily to researchers in the agricultural sciences.
14

Caracostea Objelean, Adriana, Anca Labunet, Laura Silaghi-Dumitrescu, Marioara Moldovan, Sorina Sava, and Mindra Eugenia Badea. "In Vitro Chewing Simulation Model Influence on the Adhesive-Tooth Structure Interface." Key Engineering Materials 695 (May 2016): 77–82. http://dx.doi.org/10.4028/www.scientific.net/kem.695.77.

Abstract:
Loss of hard dental tissue from posterior teeth during caries removal is an important issue in conservative dentistry. Direct dental biomaterials used in such cases have to satisfy the requirements of the restored area. Studies have shown higher chewing forces at the molar level (20-120 N) compared to other teeth [1,2]. Thus, for long-term clinical success, dental biomaterials have to ensure good marginal sealing and high resistance to the thermal and mechanical stresses developed in the lateral zones of the oral cavity [3]. This study was carried out to assess the effect of an in vitro chewing simulation model on adhesively-bonded resin composite restorations. Standardized extended proximal cavities were prepared and restored in forty-five sound human third molars. Three in vitro aging methods, a chewing simulation model (mechanical cycling and periodontal ligament simulation) (MC+PDL), thermocycling (TC), and distilled water storage (WS), were used to test the marginal sealing behavior of two adhesive techniques (an adhesive-free flowable resin composite and a self-etch all-in-one adhesive system). A weight-controlled dual-axis chewing device (CS-4.2, SD Mechatronik, Germany) was used for mechanical cycling (MC) of the samples. Significantly higher marginal leakage values were observed for the chewing simulation model (MC) compared to the TC and WS groups (p<0.05). No statistical correlations were found between the aging methods with regard to tracer infiltration for the two adhesive techniques. Thanks to its easily adjustable mechanical system, the dual-axis chewing simulator (CS-4.2) may be used for various other in vitro aging models or simulated clinical settings.
15

Zurita-Gotor, Pablo, and Geoffrey K. Vallis. "Dynamics of Midlatitude Tropopause Height in an Idealized Model." Journal of the Atmospheric Sciences 68, no. 4 (April 1, 2011): 823–38. http://dx.doi.org/10.1175/2010jas3631.1.

Abstract:
This paper investigates the factors that determine the equilibrium state, and in particular the height and structure of the tropopause, in an idealized primitive equation model forced by Newtonian cooling in which the eddies can determine their own depth. Previous work has suggested that the midlatitude tropopause height may be understood as the intersection between a radiative and a dynamical constraint. The dynamical constraint relates to the lateral transfer of energy, which in midlatitudes is largely effected by baroclinic eddies, and its representation in terms of mean-flow properties. Various theories have been proposed and investigated for the representation of the eddy transport in terms of the mean flow, including a number of diffusive closures and the notion that the flow evolves to a state marginally supercritical to baroclinic instability. The radiative constraint expresses conservation of energy and so must be satisfied, although it need not necessarily be useful in providing a tight constraint on tropopause height. This paper explores whether and how the marginal criticality and radiative constraints work together to produce an equilibrated flow and a tropopause that is internal to the fluid. The paper investigates whether these two constraints are consistent with simulated variations in the tropopause height and in the mean state when the external parameters of an idealized primitive equation model are changed. It is found that when the vertical redistribution of heat is important, the radiative constraint tightly constrains the tropopause height and prevents an adjustment to marginal criticality. In contrast, when the stratification adjustment is small, the radiative constraint is only loosely satisfied and there is a tendency for the flow to adjust to marginal criticality.
In those cases an alternative dynamical constraint would be needed in order to close the problem and determine the eddy transport and tropopause height in terms of forcing and mean flow.
16

Cannon, Alex J. "Multivariate Bias Correction of Climate Model Output: Matching Marginal Distributions and Intervariable Dependence Structure." Journal of Climate 29, no. 19 (September 19, 2016): 7045–64. http://dx.doi.org/10.1175/jcli-d-15-0679.1.

Abstract:
Univariate bias correction algorithms, such as quantile mapping, are used to address systematic biases in climate model output. Intervariable dependence structure (e.g., between different quantities like temperature and precipitation or between sites) is typically ignored, which can have an impact on subsequent calculations that depend on multiple climate variables. A novel multivariate bias correction (MBC) algorithm is introduced as a multidimensional analog of univariate quantile mapping. Two variants are presented. MBCp and MBCr respectively correct Pearson correlation and Spearman rank correlation dependence structure, with marginal distributions in both constrained to match observed distributions via quantile mapping. MBC is demonstrated on two case studies: 1) bivariate bias correction of monthly temperature and precipitation output from a large ensemble of climate models and 2) multivariate correction of vertical humidity and wind profiles, including subsequent calculation of vertically integrated water vapor transport and detection of atmospheric rivers. The energy distance is recommended as an omnibus measure of performance for model selection. As expected, substantial improvements in performance relative to quantile mapping are found in each case. For reference, characteristics of the MBC algorithm are compared against existing bivariate and multivariate bias correction techniques. MBC performs competitively and fills a role as a flexible, general purpose multivariate bias correction algorithm.
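The univariate quantile mapping that MBC builds on (and is benchmarked against) can be sketched with empirical quantiles; this is not the paper's MBCp/MBCr algorithm, only its one-dimensional building block, with illustrative gamma-distributed data:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Univariate empirical quantile mapping: pass each future model value
    through the historical model CDF, then onto the observed quantiles."""
    # Probability position of each future value in the historical model CDF
    p = np.interp(model_fut, np.sort(model_hist),
                  np.linspace(0, 1, len(model_hist)))
    # Map those probabilities onto the observed quantiles
    return np.interp(p, np.linspace(0, 1, len(obs)), np.sort(obs))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=1000)         # "observed" climate
model_hist = rng.gamma(2.0, 3.0, size=1000)  # biased model (too wet)
corrected = quantile_map(model_hist, obs, model_hist)
print(obs.mean(), model_hist.mean(), corrected.mean())
```

Applied independently per variable, this fixes each marginal distribution but leaves the intervariable dependence structure untouched, which is exactly the gap the MBC algorithm addresses.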
17

MAI, JAN-FREDERIK, and MATTHIAS SCHERER. "A TRACTABLE MULTIVARIATE DEFAULT MODEL BASED ON A STOCHASTIC TIME-CHANGE." International Journal of Theoretical and Applied Finance 12, no. 02 (March 2009): 227–49. http://dx.doi.org/10.1142/s0219024909005208.

Abstract:
A stochastic time-change is applied to introduce dependence to a portfolio of credit-risky assets whose default times are modeled as random variables with arbitrary distribution. The dependence structure of the vector of default times is completely separated from its marginal default probabilities, making the model analytically tractable. This separation is achieved by restricting the time-change to suitable Lévy subordinators which preserve the marginal distributions. Jump times of the Lévy subordinator are interpreted as times of excess default clustering. Relevant for practical implementations is that the parameters of the time-change allow for an intuitive economical explanation and can be calibrated independently of the marginal default probabilities. On a theoretical level, a so-called time normalization allows to compute the resulting copula of the default times. Moreover, the exact portfolio-loss distribution and an approximation for large portfolios under a homogeneous portfolio assumption are derived. Given these results, the pricing of complex portfolio derivatives is possible in closed-form. Three different implementations of the model are proposed, including a compound Poisson subordinator, a Gamma subordinator, and an Inverse Gaussian subordinator. Using two parameters to adjust the dependence structure in each case, the model is capable of capturing the full range of dependence patterns from independence to complete comonotonicity. A simultaneous calibration to portfolio-CDS spreads and CDO tranche spreads is carried out to demonstrate the model's applicability.
18

Yang, J., H. F. Lam, and J. Hu. "Ambient Vibration Test, Modal Identification and Structural Model Updating Following Bayesian Framework." International Journal of Structural Stability and Dynamics 15, no. 07 (August 31, 2015): 1540024. http://dx.doi.org/10.1142/s0219455415400246.

Abstract:
Structural health monitoring (SHM) of civil engineering structures based on vibration data includes three main components: ambient vibration testing, modal identification, and model updating. This paper discusses these three components in detail and proposes a general SHM framework for practical application. First, a fast Bayesian modal identification method based on the Fast Fourier Transform (FFT) is introduced for efficiently extracting modal parameters, together with the corresponding uncertainties, from ambient vibration data. A recently developed Bayesian model updating method using Markov chain Monte Carlo simulation (MCMCS) is then discussed. To illustrate the performance of the proposed modal identification and model updating methods, a scaled-down transmission tower is investigated. An ambient vibration test is conducted on the target structure to obtain modal parameters, and model updating is then carried out using the measured modal parameters. The MCMC-based Bayesian model updating method can efficiently evaluate the posterior marginal PDFs of the uncertain parameters without calculating high-dimensional numerical integrals, which provides posterior uncertainties for the target systems.
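The role of MCMC here, reading posterior marginals off samples instead of computing high-dimensional integrals, can be illustrated with a generic random-walk Metropolis sketch on a toy two-parameter posterior (not the paper's structural model):

```python
import numpy as np

def metropolis(logpost, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler; marginal posterior summaries are
    read off the returned chain without any numerical integration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.normal(size=x.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy "model updating" posterior: Gaussian centered on identified values
logpost = lambda x: -0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2 / 0.1)
chain = metropolis(logpost, np.zeros(2), 20_000, step=0.3)
post_mean = chain[5000:].mean(axis=0)  # discard burn-in
print(post_mean)  # marginal posterior means near (1.0, -2.0)
```

Histograms of each chain column approximate the posterior marginal PDFs directly, which is what makes the approach attractive for uncertain structural parameters.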
19

Johnston, Mark. "Extension of the Capital Asset Pricing Model to Non-normal Dependence Structures." ASTIN Bulletin 37, no. 01 (May 2007): 35–52. http://dx.doi.org/10.2143/ast.37.1.2020797.

Abstract:
The Capital Asset Pricing Model arises in an economy where agents have exponential utility functions and aggregate consumption is normally distributed, and gives the prices of assets with payoffs which are jointly normal with consumption. Such assets have normal marginal distributions and have dependence with consumption characterised by a normal copula. Wang has derived a transform which extends the CAPM by allowing pricing of assets in such an economy which have non-normal marginal distributions but still are normal-copula with consumption. Here we set out the stochastic discount factors corresponding to this version of the CAPM and to Wang’s transform, and show how to calculate stochastic discount factors and hence asset prices for assets whose dependence with consumption is non-normal. We show that the impact of dependency structure on asset prices is significant.
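Wang's transform itself has a compact closed form, S*(x) = Φ(Φ⁻¹(S(x)) + λ), distorting a survival (tail) probability by a shift λ on the normal-score scale. A minimal sketch, with an illustrative market price of risk:

```python
import numpy as np
from scipy.stats import norm

def wang_transform(survival_probs, lam):
    """Wang transform of a tail probability: S*(x) = Phi(Phi^-1(S(x)) + lam).
    lam > 0 loads the distribution toward adverse outcomes."""
    s = np.asarray(survival_probs, dtype=float)
    return norm.cdf(norm.ppf(s) + lam)

# Risk-load a 1% exceedance probability with market price of risk lam = 0.25
loaded = wang_transform(0.01, 0.25)
print(loaded)  # > 0.01: the transform thickens the tail
```

With λ = 0 the transform is the identity, and applying it to every quantile of a loss distribution yields the risk-adjusted distribution used for pricing.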
20

Johnston, Mark. "Extension of the Capital Asset Pricing Model to Non-normal Dependence Structures." ASTIN Bulletin 37, no. 1 (May 2007): 35–52. http://dx.doi.org/10.1017/s0515036100014720.

Abstract:
The Capital Asset Pricing Model arises in an economy where agents have exponential utility functions and aggregate consumption is normally distributed, and gives the prices of assets with payoffs which are jointly normal with consumption. Such assets have normal marginal distributions and have dependence with consumption characterised by a normal copula. Wang has derived a transform which extends the CAPM by allowing pricing of assets in such an economy which have non-normal marginal distributions but still are normal-copula with consumption. Here we set out the stochastic discount factors corresponding to this version of the CAPM and to Wang’s transform, and show how to calculate stochastic discount factors and hence asset prices for assets whose dependence with consumption is non-normal. We show that the impact of dependency structure on asset prices is significant.
21

Petersen, Maya, Joshua Schwab, Susan Gruber, Nello Blaser, Michael Schomaker, and Mark van der Laan. "Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models." Journal of Causal Inference 2, no. 2 (September 1, 2014): 147–85. http://dx.doi.org/10.1515/jci-2013-0007.

Abstract:
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators.
Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart.
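The contrast between a confounded (naive) estimate and an IPW estimate of a marginal structural model parameter can be sketched in the simplest point-treatment case with a known propensity score; this is far simpler than the paper's longitudinal TMLE setting, and the data-generating values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
w = rng.normal(size=n)                      # baseline confounder
p_a = 1 / (1 + np.exp(-0.8 * w))            # treatment depends on W
a = rng.binomial(1, p_a)
y = 1.0 * a + 2.0 * w + rng.normal(size=n)  # true causal effect of A is 1.0

# Naive contrast is confounded by W
naive = y[a == 1].mean() - y[a == 0].mean()

# Stabilized inverse probability weights from the (here, known) propensity
ps = np.where(a == 1, p_a, 1 - p_a)
sw = np.where(a == 1, a.mean(), 1 - a.mean()) / ps

# Weighted means fit the saturated MSM E[Y^a] = b0 + b1 * a
ipw = (np.average(y[a == 1], weights=sw[a == 1])
       - np.average(y[a == 0], weights=sw[a == 0]))
print(naive, ipw)  # naive is biased upward; ipw is close to 1.0
```

In practice the propensity score must itself be estimated, and in the longitudinal setting the weights become products over time points, which is where the double-robust TMLE alternatives discussed in the paper gain their advantage.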
22

Arnold, Barry C., and Bangalore G. Manjunath. "Pseudo-Poisson Distributions with Concomitant Variables." Mathematical and Computational Applications 28, no. 1 (January 12, 2023): 11. http://dx.doi.org/10.3390/mca28010011.

Abstract:
It has been argued in Arnold and Manjunath (2021) that the bivariate pseudo-Poisson distribution will be the model of choice for bivariate data with one equidispersed marginal and the other marginal over-dispersed. This is due to its simple structure, straightforward parameter estimation and fast computation. In the current note, we introduce the effects of concomitant variables on the bivariate pseudo-Poisson parameters and explore the distributional and inferential aspects of the augmented models. We also include a small simulation study and an example of application to real-life data.
APA, Harvard, Vancouver, ISO, and other styles
23

Fragassa, Cristiano, Marko Topalovic, Ana Pavlovic, and Snezana Vulovic. "Dealing with the Effect of Air in Fluid Structure Interaction by Coupled SPH-FEM Methods." Materials 12, no. 7 (April 10, 2019): 1162. http://dx.doi.org/10.3390/ma12071162.

Full text
Abstract:
Smoothed particle hydrodynamics (SPH) and the finite element method (FEM) are often combined with the aim of modeling the interaction between structures and the surrounding fluids (FSI). This is the case, for instance, of aircraft crashing on water or speedboats slamming into waves. Due to the high computational complexity, the influence of air is often neglected, limiting the analysis to the interaction between structure and water. On the contrary, this work aims to specifically investigate the effect of air when merged inside the fluid–structure interaction (FSI) computational models. Measures from experiments were used as a basis to validate estimations comparing results from models that include or exclude the presence of air. Outcomes generally showed strong agreement between simulation and experiments, with marginal differences in terms of accelerations, especially during the first phase of impact and considering the presence of air in the model.
APA, Harvard, Vancouver, ISO, and other styles
24

Fu, Liya, Zhuoran Yang, Mingtao Zhao, and Yan Zhou. "Efficient parameter estimation for multivariate accelerated failure time model via the quadratic inference functions method." Random Matrices: Theory and Applications 08, no. 04 (October 2019): 1950013. http://dx.doi.org/10.1142/s2010326319500138.

Full text
Abstract:
A popular approach, generalized estimating equations (GEE), has been applied to the multivariate accelerated failure time (AFT) model for clustered and censored data. However, this method needs to estimate the correlation parameters and calculate the inverse of the correlation matrix. In addition, the efficiency of the parameter estimators is low when the correlation structure is misspecified and/or the marginal distribution is heavy-tailed. This paper proposes using the quadratic inference functions (QIF) with a mixture correlation structure to estimate the coefficients in the multivariate AFT model, which avoids estimating the correlation parameters and computing the inverse of the correlation matrix. Moreover, the estimator derived from the QIF is consistent and asymptotically normal. Simulation studies indicate that the proposed method outperforms the method based on GEE when the marginal distribution has a heavy tail. Finally, the proposed method is used to analyze a real dataset for illustration.
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Xu Xu, and Hui Jian Han. "Margin Refine of Candide-3 Model for Facial Deformation." Advanced Materials Research 926-930 (May 2014): 3073–78. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.3073.

Full text
Abstract:
Because the structure of the CANDIDE model is simple and the number of its marginal gridlines is small, an obvious marginal trace appears when the face is deformed. In view of this, this paper proposes a method that refines the facial outline gridlines to improve the effect of facial transformation. Firstly, a feature point at the pupil is set to enhance the positioning accuracy of the model; then 44 key feature points are chosen to represent the information of the basic facial features. Face matching is realized by the ASM algorithm, and the extracted contour line is then fitted and refined to enhance the expressive power of the CANDIDE-3 model. Experimental results show that the facial transformation is improved while computation speed is maintained, and the result looks more natural.
APA, Harvard, Vancouver, ISO, and other styles
26

Fiorentin, Luan Demarco, Wagner Hugo Bonat, Allan Libanio Pelissari, Sebastião do Amaral Machado, and Saulo Jorge Téo. "Covariance Generalized Linear Models: An Approach for Quantifying Uncertainty in Tree Stem Taper Modeling." Forest Science 67, no. 6 (November 5, 2021): 642–58. http://dx.doi.org/10.1093/forsci/fxab037.

Full text
Abstract:
A natural dependence among diameters measured within-tree is expected in taper data due to the hierarchical structure. The aim of this paper was to introduce the covariance generalized linear model (CGLM) framework in the context of forest biometrics for Pinus taeda stem form modeling. The CGLMs are based on a marginal specification, which requires a definition of the mean and covariance components. The tree stem mean profiles were modeled by a nonlinear segmented model. The covariance matrix was built considering four strategies of linear combinations of known matrices, which expressed the variance or correlations among observations. The first strategy modeled only the variance of the diameters over the stem as a function of covariates, the second modeled correlation among observations, the third was defined based on a random walk model, and the fourth was based on a structure similar to a mixed-effect model with a marginal specification; a traditional mixed-effect model was also fitted for comparison. Mean squared error and bias showed that the approaches were similar in describing the mean profile for the fitting and validation datasets. However, uncertainties expressed by confidence intervals of the relative diameters were significant and related to the covariance structures of the CGLMs.
APA, Harvard, Vancouver, ISO, and other styles
27

He, Chun Hua, Xiao Mei Zhang, and Bing Jun Li. "Empirical Analysis of Change of Consumption Structure between Henan Urban and Rural Residents." Advanced Materials Research 433-440 (January 2012): 5092–96. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5092.

Full text
Abstract:
Based on cross-sectional data on the income and expenditure of Henan urban and rural residents from 2006 to 2009, this paper applies the extended linear expenditure system (ELES) model to quantitatively analyze the consumption expenditure structure of Henan urban and rural residents in three respects: marginal propensity to consume, income elasticity of demand, and price elasticity. The results show that the marginal propensity to consume of rural residents is relatively low and the proportion of their basic consumption expenditure on food and housing is large, so their consumption is dominated by subsistence needs, while some development potential for urban residents exists in the consumption of transportation and communication, culture and education, recreation, and medical care.
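In the ELES framework the marginal propensity to consume (MPC) for a category is the slope from regressing category expenditure on income, and the income elasticity follows as MPC times income over expenditure. A bare-bones sketch on made-up figures (not Henan survey data):

```python
def ols_slope_intercept(income, spend):
    """Least-squares fit spend = a + b*income; b is the category MPC in ELES."""
    n = len(income)
    mx = sum(income) / n
    my = sum(spend) / n
    b = sum((x - mx) * (y - my) for x, y in zip(income, spend)) / \
        sum((x - mx) ** 2 for x in income)
    return my - b * mx, b

# hypothetical per-capita income and food expenditure
income = [8000.0, 10000.0, 12000.0, 14000.0]
food = [3000.0, 3600.0, 4200.0, 4800.0]
a, mpc = ols_slope_intercept(income, food)
elasticity = mpc * income[0] / food[0]   # income elasticity at a point
print(mpc, elasticity)
```

The intercept `a` corresponds to the ELES basic (subsistence) expenditure component for the category.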
APA, Harvard, Vancouver, ISO, and other styles
28

Shi, Wei, and Jun Xia. "Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula." Water Science and Technology 75, no. 3 (November 30, 2016): 693–704. http://dx.doi.org/10.2166/wst.2016.553.

Full text
Abstract:
Water quality risk management is a global research hotspot closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model with either time or a land cover index as the explanatory variable is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to perform a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The large first-order Markov joint transition probabilities indicate that water quality states Class Vw, Class IV and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the changes in dependence structure, the time-varying copula has a better fitting performance than copulas with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.
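The first-order Markov transition probabilities referred to above are row-normalized transition counts; a generic sketch (the class sequence is invented, not Bengbu Sluice data):

```python
from collections import Counter

def transition_matrix(seq, states):
    """Row-normalized first-order transition counts P[s][t] = P(next=t | current=s)."""
    counts = {s: Counter() for s in states}
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    P = {}
    for s in states:
        total = sum(counts[s].values())
        P[s] = {t: counts[s][t] / total if total else 0.0 for t in states}
    return P

seq = ["III", "IV", "IV", "V", "V", "V", "IV", "III", "IV"]
P = transition_matrix(seq, ["III", "IV", "V"])
print(P["V"])   # probabilities of leaving or staying in Class V
```

The joint transition laws in the paper extend this to pairs of indicator states, but the estimation step is the same counting idea.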
APA, Harvard, Vancouver, ISO, and other styles
29

Luo, Jun, Jiepeng Wang, Yongle Zhao, and Tingqiang Chen. "Scare Behavior Diffusion Model of Health Food Safety Based on Complex Network." Complexity 2018 (November 1, 2018): 1–14. http://dx.doi.org/10.1155/2018/5902105.

Full text
Abstract:
This study constructs a heterogeneous model of health food safety scare behavior diffusion through a complex network model by considering health food safety information transparency and health food consumers’ ability to process information. This study first analyzes the effects of network structure and heterogeneity of health food consumers on the health food safety scare behavior diffusion using network stochastic dominance theory. Subsequently, a computer mathematical simulation is performed to explore the characteristics and laws of the evolution of health food safety scare behavior diffusion. The following three major conclusions can be drawn from the results. First, increases in the health food safety information transparency, the health food consumers’ ability to process information, and the recovery rate of health food consumers can increase the threshold of the rate of health food safety scare behavior diffusion. The health food safety information transparency and the recovery rate of health food consumers show marginal incremental rising characteristics in relation to the rate of health food safety scare behavior diffusion, whereas the health food consumers’ ability to process information shows a marginal diminishing rising characteristic in relation to the rate of health food safety scare behavior diffusion. Second, increases in the health food safety information transparency, the health food consumers’ ability to process information, and the recovery rate of health food consumers can decrease the scale of the health food safety scare behavior diffusion. 
The health food safety information transparency shows a marginal diminishing decreasing characteristic in relation to the scale of the health food safety scare behavior diffusion, whereas the health food consumers’ ability to process information and the recovery rate of the health food consumers show marginal incremental decreasing characteristics in relation to the scale of the health food safety scare behavior diffusion. Finally, the network structure of health food consumers significantly affects the health food safety scare behavior diffusion. A high heterogeneity of the health food consumer network indicates a high threshold of the rate of health food safety scare behavior diffusion and low diffusion scale.
APA, Harvard, Vancouver, ISO, and other styles
30

MARFÈ, ROBERTO. "A MULTIVARIATE PURE-JUMP MODEL WITH MULTI-FACTORIAL DEPENDENCE STRUCTURE." International Journal of Theoretical and Applied Finance 15, no. 04 (June 2012): 1250028. http://dx.doi.org/10.1142/s0219024912500288.

Full text
Abstract:
In this work we propose a new approach to build multivariate pure jump processes. We introduce linear and nonlinear dependence, without restrictions on marginal properties, by imposing a multi-factorial structure separately on both positive and negative jumps. Such a new approach provides higher flexibility in calibrating nonlinear dependence than in other comparable Lévy models in the literature. Using the notion of multivariate subordinator, this modeling approach can be applied to the class of univariate Lévy processes which can be written as the difference of two subordinators. A common example in the financial literature is the variance gamma process, which we extend to the multivariate (multi-factorial) case. The model is tractable and a straightforward multivariate simulation procedure is available. An empirical analysis documents an accurate multivariate fit of stock index returns in terms of both linear and nonlinear dependence. An example of multi-asset option pricing emphasizes the importance of the proposed multivariate approach.
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, C. S., and Thomas H. Savits. "Optimal Age and Block Replacement for a General Maintenance Model." Probability in the Engineering and Informational Sciences 6, no. 1 (January 1992): 81–98. http://dx.doi.org/10.1017/s0269964800002333.

Full text
Abstract:
We continue the study of our general cost structure for a maintained system. Here we focus on the optimization questions for an age or block policy. The notion of a marginal cost function is rigorously formulated and its utility investigated. Various applications are considered, including a new model in which minimal repairs are performed as long as the total accumulated repair costs do not exceed a fixed amount.
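For a concrete instance of the optimization, the classical age-replacement objective is the long-run cost rate C(T) = (cp*S(T) + cf*(1 - S(T))) / integral_0^T S(t) dt, with survival function S, planned-replacement cost cp and failure cost cf. A grid-search sketch under an assumed Weibull(shape 2) lifetime, illustrative only and much simpler than the paper's general cost structure:

```python
import math

def cost_rate(T, cp, cf, n=2000):
    """Long-run cost per unit time of age replacement at age T.
    Lifetime ~ Weibull(shape=2, scale=1): survival S(t) = exp(-t**2)."""
    S = lambda t: math.exp(-t * t)
    # trapezoidal integral of S over [0, T]
    h = T / n
    integral = h * (0.5 * S(0) + sum(S(i * h) for i in range(1, n)) + 0.5 * S(T))
    return (cp * S(T) + cf * (1.0 - S(T))) / integral

cp, cf = 1.0, 10.0                      # planned replacement cheaper than failure
grid = [0.05 * k for k in range(1, 61)]
T_opt = min(grid, key=lambda T: cost_rate(T, cp, cf))
print(T_opt, cost_rate(T_opt, cp, cf))
```

With an increasing failure rate and cf > cp, a finite optimal replacement age exists; replacing either very early or very late costs more per unit time.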
APA, Harvard, Vancouver, ISO, and other styles
32

Jordaan, Yolande, and Nicholaas J. Schoeman. "Measuring the impact of marginal tax rate reform on the revenue base of South Africa using a microsimulation tax model." South African Journal of Economic and Management Sciences 18, no. 3 (August 25, 2015): 380–94. http://dx.doi.org/10.4102/sajems.v18i3.795.

Full text
Abstract:
This paper is primarily concerned with the revenue and tax efficiency effects of adjustments to marginal tax rates on individual income as an instrument of possible tax reform. The hypothesis is that changes to marginal rates affect not only the revenue base, but also tax efficiency and the optimum level of taxes that supports economic growth. Using an optimal revenue-maximising rate (based on Laffer analysis), the elasticity of taxable income is derived with respect to marginal tax rates for each taxable-income category. These elasticities are then used to quantify the impact of changes in marginal rates on the revenue base and tax efficiency using a microsimulation (MS) tax model. In this first paper on the research results, much attention is paid to the structure of the model and the way in which the database has been compiled. The model allows for the dissemination of individual taxpayers by income groups, gender, educational level, age group, etc. Simulations include a scenario with higher marginal rates which is also more progressive (as in the 1998/1999 fiscal year), in which case tax revenue increases but the increase is overshadowed by a more than proportional decrease in tax efficiency as measured by its deadweight loss. On the other hand, a lowering of marginal rates (to bring South Africa’s marginal rates more in line with those of its peers) improves tax efficiency but also results in a substantial revenue loss. The estimated optimal individual tax to gross domestic product (GDP) ratio in order to maximise economic growth (6.7 per cent) shows a strong response to changes in marginal rates, and the results from this research indicate that a lowering of marginal rates would also move the actual ratio closer to its optimum level. Thus, the trade-off between revenue collected and tax efficiency should be carefully monitored when personal income tax reform is being considered.
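A textbook constant-elasticity illustration of the revenue-maximizing rate the authors search for (hypothetical base and elasticity, not the South African microsimulation): if taxable income responds as B(t) = B0*(1 - t)**e, then revenue R(t) = t*B(t) peaks at t* = 1/(1 + e).

```python
def revenue(t, base0, e):
    """Tax revenue under a constant elasticity of taxable income e."""
    return t * base0 * (1.0 - t) ** e

def laffer_peak(e):
    """Analytic revenue-maximizing rate for R(t) = t * (1 - t)**e."""
    return 1.0 / (1.0 + e)

t_star = laffer_peak(1.5)             # hypothetical elasticity
print(t_star, revenue(t_star, 100.0, 1.5))
```

Larger elasticities push the peak rate down, which is the mechanism behind the paper's trade-off between revenue collected and tax efficiency.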
APA, Harvard, Vancouver, ISO, and other styles
33

Sugahara, Shouta, Itsuki Aomi, and Maomi Ueno. "Bayesian Network Model Averaging Classifiers by Subbagging." Entropy 24, no. 5 (May 23, 2022): 743. http://dx.doi.org/10.3390/e24050743.

Full text
Abstract:
When applied to classification problems, Bayesian networks are often used to infer a class variable when given feature variables. Earlier reports have described that the classification accuracy of Bayesian network structures achieved by maximizing the marginal likelihood (ML) is lower than that achieved by maximizing the conditional log likelihood (CLL) of a class variable given the feature variables. Nevertheless, because ML has asymptotic consistency, the performance of Bayesian network structures achieved by maximizing ML is not necessarily worse than that achieved by maximizing CLL for large data. However, the error of learning structures by maximizing the ML becomes much larger for small sample sizes. That large error degrades the classification accuracy. As a method to resolve this shortcoming, model averaging has been proposed to marginalize the class variable posterior over all structures. However, the posterior standard error of each structure in the model averaging becomes large as the sample size becomes small; it subsequently degrades the classification accuracy. The main idea of this study is to improve the classification accuracy using subbagging, which is modified bagging using random sampling without replacement, to reduce the posterior standard error of each structure in model averaging. Moreover, to guarantee asymptotic consistency, we use the K-best method with the ML score. The experimentally obtained results demonstrate that our proposed method provides more accurate classification than earlier BNC methods and the other state-of-the-art ensemble methods do.
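The subbagging step described above draws subsamples without replacement, in contrast to the with-replacement resampling of ordinary bagging; a minimal sketch of that sampling step (sizes and seed are arbitrary):

```python
import random

def subbags(n, frac, n_bags, seed=0):
    """Index sets drawn WITHOUT replacement (subbagging), one per bag."""
    rng = random.Random(seed)
    m = int(n * frac)
    return [rng.sample(range(n), m) for _ in range(n_bags)]

bags = subbags(n=10, frac=0.5, n_bags=3)
print(bags)   # three half-size subsamples with no duplicate indices
```

Each bag would then feed a structure-learning run, and the class-posterior averages over bags reduce the posterior standard error that the paper targets.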
APA, Harvard, Vancouver, ISO, and other styles
34

Xu, Jun. "A Two-Sided Market Model of Optimal Price Structure for Instant Messenger." Journal of Applied Mathematics 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/768168.

Full text
Abstract:
Instant messenger (IM) is one of the most popular Internet applications all over the world. This paper examines the pricing problem of IM based on a two-sided market model. IM serves as a two-sided platform, which gets both Internet users and advertisers on board. This paper concludes that the IM operator adopts a heavily skewed price structure that favors IM users, both in the monopolistic case and in the horizontally differentiated duopolistic case. When advertising revenue is large enough relative to the marginal cost of serving IM users, IM users can enjoy free service provided by IM operators. The competitive equilibrium of the duopolistic case is not necessarily symmetric when advertisers single-home. Even in the symmetric equilibrium, the platform would rather deter all advertisers.
APA, Harvard, Vancouver, ISO, and other styles
35

Ruiz-Granados, Beatriz, J. A. Rubiño-Martín, and E. Battaner. "A study of the regular structure of the galactic magnetic field using WMAP5 polarization data at 22 GHz." Proceedings of the International Astronomical Union 4, S259 (November 2008): 573–76. http://dx.doi.org/10.1017/s1743921309031391.

Full text
Abstract:
We study the spatial structure of the 3-dimensional large-scale pattern of the Galactic Magnetic Field using the polarization maps obtained by the WMAP satellite at 22 GHz. Using five different models of the large-scale magnetic field of the Milky Way and a model for the cosmic-ray distribution, we predict the expected polarized synchrotron emission. Those maps are compared to the observed 22 GHz polarization data using a Maximum Likelihood method. For each model, we obtain the parameter values which best reproduce the data and obtain their marginal probability distribution functions. We find that the model that best reproduces the observed polarization maps is an “axisymmetric” model.
APA, Harvard, Vancouver, ISO, and other styles
36

BHAGWAT, A., and Y. K. GAMBHIR. "EVOLUTION OF SHELL STRUCTURE IN NUCLEI." International Journal of Modern Physics E 20, no. 08 (August 2011): 1663–75. http://dx.doi.org/10.1142/s0218301311019581.

Full text
Abstract:
Systematic investigations of the pairing and two-neutron separation energies, which play a crucial role in the evolution of shell structure in nuclei, are carried out within the framework of the relativistic mean-field model. The shell closures are found to be robust, as expected, up to the lead region. New shell closures appear in the low-mass region. In the superheavy region, on the other hand, it is found that the shell closures are not as robust, and they depend on the particular combinations of neutron and proton numbers. The effect of deformation on the shell structure is found to be marginal.
APA, Harvard, Vancouver, ISO, and other styles
37

Han, Dongying, Shimin Wei, Peiming Shi, Ying Zhang, Kai Gao, and Nengyuan Tian. "Damage Identification of a Derrick Steel Structure Based on the HHT Marginal Spectrum Amplitude Curvature Difference." Shock and Vibration 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1062949.

Full text
Abstract:
For the damage identification of derrick steel structures, traditional methods often require high-order vibration information to identify damage accurately. However, high-order vibration information is difficult to acquire. Based on signal feature extraction and using only low-order vibration information, we took the right front leg as an example, analyzed the selection of the HHT marginal spectrum amplitude and the calculation of its curvature in practical application, designed damage conditions for a derrick steel structure, used the proposed index and the intrinsic mode function (IMF) instantaneous energy curvature method to perform damage simulation calculations and comparisons, and verified the ability to identify the damage location in a noisy environment. The results show that the index can accurately determine the location of damaged and weakly damaged elements and can be used to qualitatively analyze the damage degree of an element; under impact load, noise hardly affects the identification of the damage location. Finally, this method was applied to the ZJ70 derrick steel structure laboratory model and compared with the IMF instantaneous energy curvature method, verifying its feasibility in the damage location experiment.
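The curvature-difference idea behind the index can be sketched generically: take second differences of the marginal-spectrum amplitudes along the elements and compare the damaged and undamaged states; the peak localizes the damage (toy numbers, not measured spectra):

```python
def curvature(amps):
    """Discrete curvature (second difference) of a 1-D amplitude sequence."""
    return [amps[i - 1] - 2.0 * amps[i] + amps[i + 1]
            for i in range(1, len(amps) - 1)]

def curvature_difference(undamaged, damaged):
    """Absolute difference of curvatures; peaks near the damaged element."""
    return [abs(a - b) for a, b in
            zip(curvature(undamaged), curvature(damaged))]

# toy amplitudes along consecutive elements; damage perturbs element 2
undamaged = [0.0, 1.0, 2.0, 3.0, 4.0]
damaged = [0.0, 1.0, 2.5, 3.0, 4.0]
d = curvature_difference(undamaged, damaged)
print(d)   # the largest entry flags the neighborhood of the damaged element
```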
APA, Harvard, Vancouver, ISO, and other styles
38

Scheuerer, Michael, and Thomas M. Hamill. "Generating Calibrated Ensembles of Physically Realistic, High-Resolution Precipitation Forecast Fields Based on GEFS Model Output." Journal of Hydrometeorology 19, no. 10 (October 1, 2018): 1651–70. http://dx.doi.org/10.1175/jhm-d-18-0067.1.

Full text
Abstract:
Enhancements of multivariate postprocessing approaches are presented that generate statistically calibrated ensembles of high-resolution precipitation forecast fields with physically realistic spatial and temporal structures based on precipitation forecasts from the Global Ensemble Forecast System (GEFS). Calibrated marginal distributions are obtained with a heteroscedastic regression approach using censored, shifted gamma distributions. To generate spatiotemporal forecast fields, a new variant of the recently proposed minimum divergence Schaake shuffle technique, which selects a set of historic dates in such a way that the associated analysis fields have marginal distributions that resemble the calibrated forecast distributions, is proposed. This variant performs univariate postprocessing at the forecast grid scale and disaggregates these coarse-scale precipitation amounts to the analysis grid by deriving a multiplicative adjustment function and using it to modify the historic analysis fields such that they match the calibrated coarse-scale precipitation forecasts. In addition, an extension of the ensemble copula coupling (ECC) technique is proposed. A mapping function is constructed that maps each raw ensemble forecast field to a high-resolution forecast field such that the resulting downscaled ensemble has the prescribed marginal distributions. A case study over an area that covers the Russian River watershed in California is presented, which shows that the forecast fields generated by the two new techniques have a physically realistic spatial structure. Quantitative verification shows that they also represent the distribution of subgrid-scale precipitation amounts better than the forecast fields generated by the standard Schaake shuffle or the ECC-Q reordering approaches.
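The core Schaake-shuffle step that the minimum-divergence variant builds on is a rank reordering: sorted forecast values are rearranged to follow the rank order of a historical trajectory, which transfers the observed space-time rank structure onto the calibrated margins. A one-variable sketch with arbitrary numbers:

```python
def schaake_shuffle(forecast, historical):
    """Reorder sorted forecast values to follow the rank order of a
    historical trajectory (one variable, one lead time)."""
    n = len(forecast)
    sorted_f = sorted(forecast)
    # indices of historical values from smallest to largest
    order = sorted(range(n), key=lambda i: historical[i])
    out = [None] * n
    for rank, i in enumerate(order):
        out[i] = sorted_f[rank]   # member i gets the value of its historical rank
    return out

print(schaake_shuffle([3.0, 1.0, 2.0], [10.0, 30.0, 20.0]))   # -> [1.0, 3.0, 2.0]
```

Applied independently at every grid point and lead time with a common set of historic dates, this reproduces the spatial and temporal rank correlations of the historical fields.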
APA, Harvard, Vancouver, ISO, and other styles
39

Sindi, Alaa Rashad, Said M. Easa, and Abd El Halim Omar Abd El Halim. "Improved multiple discrete-continuous extreme value model." Canadian Journal of Civil Engineering 45, no. 12 (December 2018): 1040–52. http://dx.doi.org/10.1139/cjce-2017-0186.

Full text
Abstract:
One of the advanced methods of modeling activity duration is the multiple discrete-continuous extreme value (MDCEV) model. The translating satiation parameter of the model is intended to capture the constant marginal utility effect, but model testing shows that it does not do so. In this paper, the structure of the translating satiation parameter was modified to incorporate an additional parameter that enables the model to capture this effect. Two datasets from cities in Saudi Arabia and Germany were used to test the applicability of the modified model for any dataset. The results showed that the proposed model structure increased the accuracy of activity duration prediction by up to 74%. The modified model represents an improvement to the conventional MDCEV model in terms of accuracy and flexibility, and as such should be a valuable tool in transportation planning and management.
APA, Harvard, Vancouver, ISO, and other styles
40

Lee, Woo Geun, Jung Seok Kim, and Jae-Yong Lim. "Equivalent core concept for large-scale structural-level applications." Journal of Sandwich Structures & Materials 21, no. 4 (July 18, 2017): 1595–616. http://dx.doi.org/10.1177/1099636217720252.

Full text
Abstract:
This study examined the applicability of the equivalent core concept, which replaces a discrete core with a homogenized solid core representing its elastic properties, on a large-scale structure. To this end, numerical verifications were performed for corrugated core structures at two levels, the specimen level and structural level. Before the verifications, analytical equations were gathered from previous reports to obtain the homogenized elastic properties of corrugated cores. At the specimen-level verifications, the maximum deflections of the corrugated core panel specimens subject to three-point bending were calculated with sandwich beam theory, finite element models with discretely modeled cores and equivalent cores. For the structural-level verifications, the maximum deflection and natural frequency were computed from a discrete finite element model and an equivalent model of a railway car body structure. The results revealed that the equivalent models gave excellent agreement with the theoretical values if the same underlying boundary conditions were used; however, greater discrepancies were observed with the discrete models. In addition, for the structural-level verifications the equivalent core model reasonably approximated the discrete model with marginal accuracy. Therefore, employing the equivalent core concept can be expected to save computational costs in the initial design stage of large-scale structures.
APA, Harvard, Vancouver, ISO, and other styles
41

Latif, Shahid, and Slobodan P. Simonovic. "Trivariate Joint Distribution Modelling of Compound Events Using the Nonparametric D-Vine Copula Developed Based on a Bernstein and Beta Kernel Copula Density Framework." Hydrology 9, no. 12 (December 7, 2022): 221. http://dx.doi.org/10.3390/hydrology9120221.

Full text
Abstract:
Low-lying coastal communities are often threatened by compound flooding (CF), which can be determined through the joint occurrence of storm surges, rainfall and river discharge, either simultaneously or in close succession. A trivariate distribution can represent the risk of the compound phenomenon more realistically than considering each contributing factor independently or through pairwise dependence relations. Recently, the vine copula has been recognized as a highly flexible approach to constructing a higher-dimensional joint density framework. Such constructions usually involve parametric copula classes with parametric univariate marginals, which can lack flexibility because of the prior distributional assumptions imposed on the univariate marginals and/or the copula density. This study introduces the vine copula approach in a nonparametric setting, using Bernstein and beta kernel copula densities to establish trivariate flood dependence. The proposed model was applied to 46 years of flood characteristics collected on the west coast of Canada. The univariate flood marginal distributions were modelled using nonparametric kernel density estimation (KDE). The 2D Bernstein estimator and the beta kernel copula estimator were tested independently in capturing pairwise dependencies to establish the D-vine structure in a stage-wise nesting approach in three alternative ways, each permutating the location of the conditioning variable. The best-fitted vine structure was selected using goodness-of-fit (GOF) test statistics. The performance of the nonparametric vine approach was also compared with that of vines constructed with parametric and semiparametric fitting procedures. The investigation revealed that the D-vine copula constructed using a Bernstein copula with normal KDE marginals performed well nonparametrically in capturing the dependence of the compound events.
Finally, the derived nonparametric model was used in the estimation of trivariate joint return periods, and further employed in estimating failure probability statistics.
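A minimal sketch of the Bernstein smoothing idea used here: evaluate the empirical copula on an (m+1) x (m+1) grid and average it with Bernstein polynomial weights (pure Python and bivariate only; the paper works with the copula density and its D-vine nesting, which this omits):

```python
from math import comb

def _ranks(xs):
    """1-based ranks of xs (ties broken by order; fine for continuous data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def empirical_copula(pairs):
    """Empirical copula C_n(u, v) built from the rank transforms of the pairs."""
    n = len(pairs)
    rx = _ranks([p[0] for p in pairs])
    ry = _ranks([p[1] for p in pairs])
    def C(u, v):
        return sum(1 for i in range(n)
                   if rx[i] / n <= u and ry[i] / n <= v) / n
    return C

def bernstein_copula(pairs, m):
    """Bernstein-smoothed copula: the empirical copula evaluated on an
    (m+1) x (m+1) grid, averaged with Bernstein polynomial weights."""
    C_n = empirical_copula(pairs)
    grid = [[C_n(i / m, j / m) for j in range(m + 1)] for i in range(m + 1)]
    def C(u, v):
        return sum(grid[i][j]
                   * comb(m, i) * u ** i * (1 - u) ** (m - i)
                   * comb(m, j) * v ** j * (1 - v) ** (m - j)
                   for i in range(m + 1) for j in range(m + 1))
    return C

pairs = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]  # comonotone toy data
C = bernstein_copula(pairs, m=4)
print(C(0.5, 0.5))
```

The smoothed copula keeps the boundary behavior of a copula (0 at the lower edge, 1 at (1, 1)) while removing the staircase discontinuities of the empirical estimator.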
APA, Harvard, Vancouver, ISO, and other styles
42

Luong, Ngoc Hoang, Han La Poutré, and Peter A. N. Bosman. "Exploiting Linkage Information and Problem-Specific Knowledge in Evolutionary Distribution Network Expansion Planning." Evolutionary Computation 26, no. 3 (September 2018): 471–505. http://dx.doi.org/10.1162/evco_a_00209.

Full text
Abstract:
This article tackles the Distribution Network Expansion Planning (DNEP) problem that has to be solved by distribution network operators to decide which, where, and/or when enhancements to electricity networks should be introduced to satisfy the future power demands. Because of many real-world details involved, the structure of the problem is not exploited easily using mathematical programming techniques, for which reason we consider solving this problem with evolutionary algorithms (EAs). We compare three types of EAs for optimizing expansion plans: the classic genetic algorithm (GA), the estimation-of-distribution algorithm (EDA), and the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA). Not fully knowing the structure of the problem, we study the effect of linkage learning through the use of three linkage models: univariate, marginal product, and linkage tree. We furthermore experiment with the impact of incorporating different levels of problem-specific knowledge in the variation operators. Experiments show that the use of problem-specific variation operators is far more important for the classic GA to find high-quality solutions. In all EAs, the marginal product model and its linkage learning procedure have difficulty in capturing and exploiting the DNEP problem structure. GOMEA, especially when combined with the linkage tree structure, is found to have the most robust performance by far, even when an out-of-the-box variant is used that does not exploit problem-specific knowledge. Based on experiments, we suggest that when selecting optimization algorithms for power system expansion planning problems, EAs that have the ability to effectively model and efficiently exploit problem structures, such as GOMEA, should be given priority, especially in the case of black-box or grey-box optimization.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Hong-Xiang, Qian Wang, and Zhi-Bin Wen. "Spatial Genetic Structure of Prunus mongolica in Arid Northwestern China Based on RAD Sequencing Data." Diversity 13, no. 8 (August 23, 2021): 397. http://dx.doi.org/10.3390/d13080397.

Full text
Abstract:
The extensive range of sand deserts, gravel deserts, and recent human activities have shaped habitat fragmentation of relict and endangered plants in arid northwestern China. Prunus mongolica is a relict and endangered shrub that is mainly distributed in the study area. In the present study, population genomics was integrated with a species distribution model (SDM) to investigate the spatial genetic diversity and structure of P. mongolica populations in response to habitat fragmentation and to propose conservation measures for this endangered species. The results showed that the northern marginal populations were the first to become isolated from the other populations. The SDM suggested that these marginal populations had low levels of habitat suitability during the glacial period. Lacking migration corridors, they maintained low levels of gene-flow connection with other populations. Additionally, several populations underwent secondary geographical isolation from the central populations, which preserved particular genetic lineages. Genetic diversity was higher in southern populations than in northern ones. It was concluded that long-term geographical isolation following historical habitat fragmentation promoted the divergence of marginal populations, and of refugial populations along mountains, from other populations. The southern populations could have persisted in their distribution ranges and harbored higher levels of genetic diversity than the northern populations, whose distribution ranges fluctuated in response to paleoclimatic changes. We propose that the marginal populations of P. mongolica should be given full consideration in conservation management.
44

Dimitriadis, Panayiotis, Aristoteles Tegos, and Demetris Koutsoyiannis. "Stochastic Analysis of Hourly to Monthly Potential Evapotranspiration with a Focus on the Long-Range Dependence and Application with Reanalysis and Ground-Station Data." Hydrology 8, no. 4 (December 1, 2021): 177. http://dx.doi.org/10.3390/hydrology8040177.

Full text
Abstract:
The stochastic structures of potential evaporation and evapotranspiration (PEV and PET or ETo) are analyzed using the ERA5 hourly reanalysis data and the Penman–Monteith model applied to the well-known CIMIS network. The latter includes long, high-quality ground meteorological records with simultaneous measurements of monthly incoming shortwave radiation, temperature, relative humidity, and wind speed. It is found that the PEV and PET processes exhibit a moderate long-range dependence structure with Hurst parameters of 0.64 and 0.69, respectively. Additionally, their marginal structures are found to be light-tailed when estimated through the Pareto–Burr–Feller distribution function. Both results are consistent with the global-scale hydrological-cycle path, determined by all the above variables and rainfall, in terms of the marginal and dependence structures. Finally, it is discussed how the existence of even moderate long-range dependence can increase the variability and uncertainty of both processes and, thus, limit their predictability.
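To make the Hurst parameter in this abstract concrete: one standard estimator (not necessarily the authors' exact methodology) is the aggregated-variance method, which exploits the fact that for a long-range dependent process the variance of block means at scale k decays like k^(2H-2). The sketch below, assuming NumPy, checks the estimator on white noise, for which H ≈ 0.5; all numbers are illustrative.

```python
import numpy as np

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance Hurst estimator.

    For each scale k, compute the variance of the k-block means, then
    read H off the slope of log-variance vs. log-scale: slope = 2H - 2.
    """
    x = np.asarray(x, float)
    log_k, log_v = [], []
    for k in scales:
        m = len(x) // k
        agg = x[:m * k].reshape(m, k).mean(axis=1)   # block means at scale k
        log_k.append(np.log(k))
        log_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(log_k, log_v, 1)
    return 1.0 + slope / 2.0

# sanity check on white noise: variance halves with each doubling of k,
# so the slope is -1 and the estimate should be near H = 0.5
rng = np.random.default_rng(0)
h = hurst_aggvar(rng.standard_normal(20000))
print(round(h, 2))
```

A Hurst parameter of 0.64–0.69, as reported for PEV and PET, would show up here as a shallower slope, i.e., block variances decaying more slowly than 1/k.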
45

Kossieris, Panagiotis, Ioannis Tsoukalas, Christos Makropoulos, and Dragan Savic. "Simulating Marginal and Dependence Behaviour of Water Demand Processes at Any Fine Time Scale." Water 11, no. 5 (April 27, 2019): 885. http://dx.doi.org/10.3390/w11050885.

Full text
Abstract:
Uncertainty-aware design and management of urban water systems relies on the generation of synthetic series that precisely reproduce the distributional and dependence properties of the residential water demand process (i.e., significant deviation from Gaussianity, intermittent behaviour, high spatial and temporal variability, and a variety of dependence structures) at the various temporal and spatial scales of operational interest. This is of high importance since these properties govern the dynamics of the overall system, while prominent simulation methods, such as pulse-based schemes, address this issue only partially, preserving part of the marginal behaviour of the process (e.g., low-order statistics) or neglecting the significant aspect of temporal dependence. In this work, we present a single stochastic modelling strategy, applicable at any fine time scale, that explicitly preserves both the distributional and dependence properties of the process. The strategy builds upon Nataf's joint distribution model and particularly on the quantile mapping of an auxiliary Gaussian process, generated by a suitable linear stochastic model, to establish processes with the target marginal distribution and correlation structure. The three real-world case studies examined reveal the efficiency of the simulation strategy in reproducing the variety of marginal and dependence properties encountered in water demand records from 1 min up to 1 h.
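The Nataf/quantile-mapping construction described in this abstract can be sketched in a few lines: generate an auxiliary Gaussian series from a linear stochastic model, map it to uniforms through the normal CDF, then through the inverse CDF of the target marginal. The sketch below uses a Gaussian AR(1) and a unit-exponential target purely as illustrative assumptions; the paper's actual marginals, correlation-matching step, and parameter values differ.

```python
import math
import numpy as np

def gaussian_ar1(n, rho, seed=42):
    """Auxiliary Gaussian AR(1) process with lag-1 autocorrelation rho."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    eps = rng.standard_normal(n) * math.sqrt(1.0 - rho**2)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + eps[t]
    return z

z = gaussian_ar1(20000, rho=0.7)
# quantile mapping: standard normal CDF -> U(0,1), then inverse CDF of the
# target marginal (here exponential with mean 1) -> non-Gaussian series
u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
x = -np.log(1.0 - u)

print(round(float(x.mean()), 2))
```

The monotone mapping preserves the rank dependence of the Gaussian series, so `x` inherits a positive lag-1 correlation from `z` while its marginal is exactly the target distribution; in a full Nataf scheme the auxiliary correlation is chosen so that the mapped series matches a prescribed target correlation.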
46

Yang, Jincheng, Xinqu Xia, and Mu Zhang. "A Study on Economic Spatial Structure of Urban Agglomerations in Guangdong-Hong Kong-Macao Greater Bay Area." International Journal of Business and Management 13, no. 10 (September 6, 2018): 63. http://dx.doi.org/10.5539/ijbm.v13n10p63.

Full text
Abstract:
Based on multi-index data for 11 cities in the Guangdong-Hong Kong-Macao Greater Bay Area in 2016, urban economic quality was calculated using the TOPSIS method. Applying a modified gravitational model, the spatial economic linkage characteristics between core and peripheral cities, and among peripheral cities, were analyzed. In addition, network density analysis, centrality measures, and core-periphery structure analysis were used to further verify the findings of the spatial connection analysis. This study shows that the Guangdong-Hong Kong-Macao Greater Bay Area has an obvious core-periphery structure, and the overall economic network connection of the Greater Bay Area is not strong. Guangzhou-Shenzhen-Hong Kong is the core urban agglomeration in the Greater Bay Area. Dongguan and Foshan are transforming from marginal cities into semi-marginal cities. The marginal cities are limited by geographical distance or the economic environment, which leaves their development far behind the overall development of the Greater Bay Area. Finally, combined with the new wooden barrel theory and location advantage analysis, suggestions were offered for building a higher-level Greater Bay Area in the future by dividing it into three major urban agglomerations. These urban agglomerations are intended to meet the resource and industrial demands that the core cities cannot fully satisfy, while driving the economic development of the marginal cities.
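For readers unfamiliar with the gravity-model formulation used here: a common form of the modified gravitational model scores the economic linkage between cities i and j as F_ij = M_i·M_j / D_ij², with M a composite quality score (e.g., from TOPSIS) and D the inter-city distance. The sketch below uses made-up scores and distances, not the study's data, and omits the empirical correction coefficients such models often carry.

```python
def economic_linkage(quality, distance):
    """Pairwise gravity-style linkage matrix: F_ij = M_i * M_j / D_ij^2.

    quality  -- per-city composite quality scores (e.g., TOPSIS closeness)
    distance -- symmetric matrix of inter-city distances (zero diagonal)
    """
    n = len(quality)
    F = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                F[i][j] = quality[i] * quality[j] / distance[i][j] ** 2
    return F

quality = [0.9, 0.7, 0.3]          # illustrative TOPSIS scores, 3 toy cities
distance = [[0, 100, 200],
            [100, 0, 150],
            [200, 150, 0]]         # illustrative distances (km)
F = economic_linkage(quality, distance)
```

Under this formulation a high-quality city pair at short distance (cities 0 and 1 above) dominates the linkage matrix, which is exactly the mechanism behind the core-periphery pattern the abstract reports: distant or low-scoring cities end up with weak linkages to everyone.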
47

Wu, Quran, James O’Malley, Susmita Datta, Raad Z. Gharaibeh, Christian Jobin, Margaret R. Karagas, Modupe O. Coker, et al. "MarZIC: A Marginal Mediation Model for Zero-Inflated Compositional Mediators with Applications to Microbiome Data." Genes 13, no. 6 (June 11, 2022): 1049. http://dx.doi.org/10.3390/genes13061049.

Full text
Abstract:
Background: The human microbiome can contribute to the pathogenesis of many complex diseases by mediating disease-leading causal pathways. However, standard mediation analysis methods are not adequate for analyzing the microbiome as a mediator due to the excessive number of zero-valued sequencing reads in the data and the constraint that relative abundances must sum to one. The two main challenges raised by the zero-inflated data structure are: (a) disentangling the mediation effect induced by the point mass at zero; and (b) identifying the observed zero-valued data points that are not truly zero (i.e., false zeros). Methods: We develop a novel marginal mediation analysis method under the potential-outcomes framework to address these issues. We also show that the marginal model can account for the compositional structure of microbiome data. Results: The mediation effect can be decomposed into two components that are inherent to the two-part nature of zero-inflated distributions. With probabilistic models to account for observing zeros, we also address the challenge of false zeros. A comprehensive simulation study and an application in a real microbiome study showcase our approach in comparison with existing approaches. Conclusions: When analyzing zero-inflated microbiome compositions as mediators, the MarZIC approach performs better than standard causal mediation analysis approaches and an existing competing approach.
48

PAPANIKOLAOU, D., C. METAXAS, and G. CHRONIS. "Neotectonic structure of the Lakonikos gulf." Bulletin of the Geological Society of Greece 34, no. 1 (January 1, 2001): 297. http://dx.doi.org/10.12681/bgsg.17026.

Full text
Abstract:
A detailed single-channel seismic reflection survey was carried out in the Lakonikos Gulf, Southern Péloponnèse, aiming at a better understanding of the neotectonic structure of the Lakonikos Basin. Our survey showed that, contrary to the previously considered model of a simple N-S asymmetric graben, a tectonic horst occurs within the tectonic graben of Lakonikos. A subsidence of more than 1000 m is produced by the N-S marginal faults, whereas the N-S faults creating the central horst structure within the Lakonikos graben are high-angle reverse faults, which have uplifted the sea bottom together with the Pleistocene and Holocene sediments by about 100 m. Several E-W vertical transcurrent faults with strike-slip motion deform the N-S structures. The central tectonic horst structure is very recent, as indicated by the observed deformation of the Upper Pleistocene and Holocene sediments, and a transtensional geodynamic regime is suggested.
49

Shahbaba, Babak, Bo Zhou, Shiwei Lan, Hernando Ombao, David Moorman, and Sam Behseta. "A Semiparametric Bayesian Model for Detecting Synchrony Among Multiple Neurons." Neural Computation 26, no. 9 (September 2014): 2025–51. http://dx.doi.org/10.1162/neco_a_00631.

Full text
Abstract:
We propose a scalable semiparametric Bayesian model to capture dependencies among multiple neurons by detecting their cofiring (possibly with some lag time) patterns over time. After discretizing time so there is at most one spike at each interval, the resulting sequence of 1s (spike) and 0s (silence) for each neuron is modeled using the logistic function of a continuous latent variable with a gaussian process prior. For multiple neurons, the corresponding marginal distributions are coupled to their joint probability distribution using a parametric copula model. The advantages of our approach are as follows. The nonparametric component (i.e., the gaussian process model) provides a flexible framework for modeling the underlying firing rates, and the parametric component (i.e., the copula model) allows us to make inferences regarding both contemporaneous and lagged relationships among neurons. Using the copula model, we construct multivariate probabilistic models by separating the modeling of univariate marginal distributions from the modeling of a dependence structure among variables. Our method is easy to implement using a computationally efficient sampling algorithm that can be easily extended to high-dimensional problems. Using simulated data, we show that our approach could correctly capture temporal dependencies in firing rates and identify synchronous neurons. We also apply our model to spike train data obtained from prefrontal cortical areas.
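The core copula idea in this abstract, modelling each margin separately and coupling them through a parametric copula, can be sketched with a Gaussian copula joining two deliberately different marginals. The marginals (exponential and uniform), the correlation, and the sample size below are illustrative assumptions; the paper pairs its copula with gaussian-process firing-rate models rather than these toy marginals. NumPy is assumed.

```python
import math
import numpy as np

def gaussian_copula_pairs(n, rho, seed=0):
    """Sample (x, y) pairs with arbitrary marginals coupled by a Gaussian copula.

    Draw correlated bivariate normals, push each coordinate through the
    standard normal CDF to get dependent uniforms, then apply each margin's
    inverse CDF. Dependence lives entirely in the copula step.
    """
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    x = -np.log(1.0 - u[:, 0])     # first margin: exponential(1)
    y = u[:, 1]                    # second margin: uniform(0, 1)
    return x, y

x, y = gaussian_copula_pairs(20000, rho=0.8)
```

Because the copula is specified independently of the margins, either margin can be swapped (here for a latent-gaussian-process spiking model) without touching the dependence structure, which is exactly the modularity the abstract highlights.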
50

Wen, Tianfu, Cong Jiang, and Xinfa Xu. "Nonstationary Analysis for Bivariate Distribution of Flood Variables in the Ganjiang River Using Time-Varying Copula." Water 11, no. 4 (April 10, 2019): 746. http://dx.doi.org/10.3390/w11040746.

Full text
Abstract:
The nonstationarity of univariate flood series has been widely studied, while the nonstationarity of multivariate flood series, such as discharge, water stage, and suspended sediment concentrations, has rarely been studied. This paper presents a procedure for using a time-varying copula model to describe the nonstationary dependence structures of two correlated flood variables from the same flood event. In this study, we focus on a multivariate flood event consisting of peak discharge (Q), peak water stage (Z), and suspended sediment load (S) during the period 1964–2013 observed at the Waizhou station in the Ganjiang River, China. The time-varying copula model is employed to analyze bivariate distributions of the two flood pairs (Z-Q) and (Z-S). The main channel elevation (MCE) and the forest coverage rate (FCR) of the basin are introduced as candidate explanatory variables for modelling the nonstationarities of both the marginal distributions and the dependence structure of the copula. It is found that the marginal distributions of both Z and S are nonstationary, whereas the marginal distribution of Q is stationary. In particular, the mean of Z is related to MCE, and the mean and variance of S are related to FCR. The time-varying Frank copula with MCE as the covariate has the best performance in fitting the dependence structures of both Z-Q and Z-S, indicating that the dependence relationships strengthened over time in association with riverbed down-cutting. Finally, the joint and conditional probabilities of both Z-Q and Z-S obtained from the best-fitted bivariate copula indicate obvious nonstationarity in their bivariate distributions. This work helps in understanding how human activities affect bivariate flood distributions and therefore provides supporting information for hydraulic structure design under changing environments.
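A time-varying copula of the kind described can be sketched by letting the Frank copula parameter θ depend on a covariate and sampling pairs by conditional inversion (solve C(v|u) = p for v). The linearly growing θ(t), a stand-in for the fitted MCE-driven dependence, and all numbers below are illustrative placeholders, not the paper's model.

```python
import math
import random

def frank_conditional_sample(u, theta, p):
    """Invert the Frank-copula conditional CDF C(v|u) at probability p.

    For C(u,v) = -(1/theta) * ln(1 + (e^{-theta u}-1)(e^{-theta v}-1)/(e^{-theta}-1)),
    setting dC/du = p and solving for v gives the closed form below.
    """
    a = p * (math.exp(-theta) - 1.0)
    b = math.exp(-theta * u) * (1.0 - p) + p
    return -math.log(1.0 + a / b) / theta

# time-varying dependence: theta increases with "time" (strengthening
# dependence, as the abstract reports for riverbed down-cutting)
rng = random.Random(3)
pairs = []
for t in range(5000):
    theta = 2.0 + 4.0 * (t / 5000)   # illustrative theta(t) in [2, 6)
    u = rng.random()
    v = frank_conditional_sample(u, theta, rng.random())
    pairs.append((u, v))
```

Fitting such a model in practice means maximizing the copula likelihood with θ expressed as a function of the covariate (here MCE), so that the estimated dependence strength is allowed to drift over the record instead of being held constant.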