
Journal articles on the topic 'Estimation non-Asymptotique et robuste'



Consult the top 45 journal articles for your research on the topic 'Estimation non-Asymptotique et robuste.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Arnaud, P., and J. Lavabre. "La modélisation stochastique des pluies horaires et leur transformation en débits pour la prédétermination des crues." Revue des sciences de l'eau 13, no. 4 (April 12, 2005): 441–62. http://dx.doi.org/10.7202/705402ar.

Abstract:
To study the frequency distributions of hydrological variables (rainfall and discharge) at the hourly time step, a methodology combining a generator of hourly rainfall time series with a lumped conceptual rainfall-runoff model was developed. Over a given simulation period, the method generates a collection of plausible flood scenarios used for the predetermination of hydrological risks. The frequency distributions of the hydrological variables are built empirically from the generated rainfall and flood events. Extrapolation of these frequency distributions towards rare frequencies is done empirically by lengthening the simulation period, rather than by directly fitting distributions to observations. The principle of this method (called SHYPRE: Simulation d'HYdrogrammes pour la PREdétermination) is thus to use observations to describe the phenomenon in order to reproduce it statistically, thereby overcoming the lack of observations. It yields an original estimation of flood quantiles from common to rare frequencies, with the added benefit of providing complete temporal information on these floods. Moreover, the approach is shown to provide flood quantile estimates that are far more robust than statistical fits to the observed distributions, even for events of common frequency. This robustness stems from a better use of the rainfall information and from the stability of the rainfall-runoff model's parametrization.
2

Dahmen, G., and A. Ziegler. "Independence Estimating Equations for Controlled Clinical Trials with Small Sample Sizes." Methods of Information in Medicine 45, no. 04 (2006): 430–34. http://dx.doi.org/10.1055/s-0038-1634100.

Abstract:
Objectives: The application of independence estimating equations (IEE) for controlled clinical trials (CCTs) has recently been discussed, and recommendations for its use in hypothesis testing have been derived. The robust variance estimator has been shown to be liberal for small sample sizes, and a series of modifications has therefore been proposed. In this paper we systematically compare confidence intervals (CIs) proposed in the literature for situations that are common in CCTs. Methods: Using Monte Carlo simulation studies, we compared the coverage probabilities of CIs and the non-convergence probabilities for the parameters of the mean structure in small samples, using the modifications of the variance estimator proposed by Mancl and DeRouen [7], Morel et al. [8] and Pan [3]. Results: None of the proposed modifications behaves well in every investigated situation. For parallel-group designs with repeated measurements and binary response, the method proposed by Pan maintains the nominal level. We observed non-convergence of the IEE algorithm in up to 10% of the replicates, depending on the response probabilities in the treatment groups. For comparing slopes with continuous responses, the approach of Morel et al. can be recommended. Conclusions: The non-convergence results show that IEE should not be used in parallel-group designs with binary endpoints and response probabilities close to 0 or 1. Modifications of the robust variance estimator should be used for CI estimation with sample sizes of up to 100 clusters.
3

Carrara, Nicholas, and Jesse Ernst. "On the Estimation of Mutual Information." Proceedings 33, no. 1 (January 15, 2020): 31. http://dx.doi.org/10.3390/proceedings2019033031.

Abstract:
In this paper we focus on the estimation of mutual information from finite samples (X × Y). The main concern with estimates of mutual information (MI) is their robustness under the class of transformations for which it remains invariant: i.e., type I (coordinate transformations), type III (marginalizations) and special cases of type IV (embeddings, products). Estimators which fail to meet these standards are not robust in their general applicability. Since most machine learning tasks employ transformations belonging to the classes referenced in part I, the mutual information can tell us which transformations are optimal. There are several classes of estimation methods in the literature, such as the non-parametric estimator developed by Kraskov et al. and its improved versions. These estimators are extremely useful, since they rely only on the geometry of the underlying sample and circumvent estimating the probability distribution itself. We explore the robustness of this family of estimators in the context of our design criteria.
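The invariance property this abstract emphasizes can be illustrated with a toy sketch. The following is not the Kraskov kNN estimator the paper discusses, but a minimal plug-in estimate of I(X;Y) for discrete samples, checked against the simplest type-I transformation (an invertible relabeling of one coordinate); all data below are made up for illustration.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(xs)
    assert n == len(ys) and n > 0
    px = Counter(xs)                 # marginal counts of X
    py = Counter(ys)                 # marginal counts of Y
    pxy = Counter(zip(xs, ys))       # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) == c * n / (px[x] * py[y])
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# Correlated toy samples.
xs = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
ys = [0, 0, 1, 1, 2, 2, 1, 2, 0, 0]

# Invertible relabeling of the X coordinate: MI must not change.
relabel = {0: "a", 1: "b", 2: "c"}
mi_raw = mutual_information(xs, ys)
mi_rel = mutual_information([relabel[x] for x in xs], ys)
assert mi_raw > 0
assert abs(mi_raw - mi_rel) < 1e-12
```

A kNN estimator such as Kraskov's replaces the explicit counts with distances between sample points, which is what makes it usable for continuous data.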
4

Kunitomo, Naoto, Naoki Awaya, and Daisuke Kurisu. "Comparing estimation methods of non-stationary errors-in-variables models." Japanese Journal of Statistics and Data Science 3, no. 1 (June 15, 2019): 73–101. http://dx.doi.org/10.1007/s42081-019-00051-1.

Abstract:
We investigate estimation methods for multivariate non-stationary errors-in-variables models in which both non-stationary trend components and measurement-error (noise) components are present. We compare the maximum likelihood (ML) estimation and the separating information maximum likelihood (SIML) estimation. The latter was proposed by Kunitomo and Sato (Trend, seasonality and economic time series: the nonstationary errors-in-variables models. MIMS-RBP-SDS-3, MIMS, Meiji University. http://www.mims.meiji.ac.jp/, 2017) and Kunitomo et al. (Separating information maximum likelihood method for high-frequency financial data. Springer, Berlin, 2018). We find that the Gaussian likelihood function can have a non-concave shape in some cases, and that the ML method works only when Gaussianity of the non-stationary and stationary components holds, with some restrictions, such as on the signal–noise variance ratio, in the parameter space. The SIML estimation has asymptotic robustness properties in more general situations. We explore the finite-sample and asymptotic properties of the ML and SIML methods for non-stationary errors-in-variables models.
5

Hermans, Lisa, Anna Ivanova, Cristina Sotto, Geert Molenberghs, Geert Verbeke, and Michael G. Kenward. "Doubly robust pseudo-likelihood for incomplete hierarchical binary data." Statistical Modelling 20, no. 1 (November 30, 2018): 42–57. http://dx.doi.org/10.1177/1471082x18808611.

Abstract:
Missing data are almost inevitable in correlated-data studies. For non-Gaussian outcomes with moderate to large sequences, direct-likelihood methods can involve complex, hard-to-manipulate likelihoods. Popular alternative approaches, like generalized estimating equations, that are frequently used to circumvent the computational complexity of full likelihood, are less suitable when scientific interest, at least in part, is placed on the association structure; pseudo-likelihood (PL) methods are then a viable alternative. When the missing data are missing at random, Molenberghs et al. (2011, Statistica Sinica, 21, 187–206) proposed a suite of corrections to the standard form of PL, taking the form of singly and doubly robust estimators. They provided the basis and exemplified it in insightful yet primarily illustrative examples. We here consider the important case of marginal models for hierarchical binary data, provide an effective implementation and illustrate it using data from an analgesic trial. Our doubly robust estimator is more convenient than the classical doubly robust estimators. The ideas are illustrated using a marginal model for a binary response, more specifically a Bahadur model.
6

Cho, Hong-Yeon, Weon Mu Jeong, Ju Whan Kang, and Gi-Seop Lee. "Design Wave Period Estimation Using the Wave Height Information." Journal of Korean Society of Coastal and Ocean Engineers 35, no. 4 (August 31, 2023): 84–94. http://dx.doi.org/10.9765/kscoe.2023.35.4.84.

Abstract:
The wave height and period regression curve is widely used to estimate the design wave period. In this study, the parameters of the curves are estimated, compared, and evaluated using linear, robust linear, and nonlinear regression methods, respectively. The data used in the design wave height estimation are the annual maxima (AM) wave height and period data sets, divided into typhoon and non-typhoon conditions, provided by the Ministry of Oceans and Fisheries (2019). The estimated parameters show significant differences across local coastal waters and across estimation methods. The parameters estimated with the Suh et al. (2008, 2010) method show an apparent bias: under-estimation of the intercept (scale) parameter and over-estimation of the slope (exponent) parameter.
7

Cervellati, Matteo, Florian Jung, Uwe Sunde, and Thomas Vischer. "Income and Democracy: Comment." American Economic Review 104, no. 2 (February 1, 2014): 707–19. http://dx.doi.org/10.1257/aer.104.2.707.

Abstract:
Acemoglu et al. (2008) document that the correlation between income per capita and democracy disappears when including time and country fixed effects. While their results are robust for the full sample, we find evidence for significant but heterogeneous effects of income on democracy: negative for former colonies, but positive for non-colonies. Within the sample of colonies we detect heterogeneous effects related to colonial history and early institutions. The zero mean effect estimated by Acemoglu et al. (2008) is consistent with effects of opposite signs in the different subsamples. Our findings are robust to the use of alternative data and estimation techniques. (JEL D72, O17, O47)
8

Chen, Shin-Liang, Huan-Yu Ku, Wenbin Zhou, Jordi Tura, and Yueh-Nan Chen. "Robust self-testing of steerable quantum assemblages and its applications on device-independent quantum certification." Quantum 5 (September 28, 2021): 552. http://dx.doi.org/10.22331/q-2021-09-28-552.

Abstract:
Given a Bell inequality, if its maximal quantum violation can be achieved only by a single set of measurements for each party or a single quantum state, up to local unitaries, one refers to such a phenomenon as self-testing. For instance, the maximal quantum violation of the Clauser-Horne-Shimony-Holt inequality certifies that the underlying state contains the two-qubit maximally entangled state and that the measurements of one party contain a pair of anti-commuting qubit observables. As a consequence, the other party automatically verifies that the set of remotely steered states, namely the "assemblage", is in the eigenstates of a pair of anti-commuting observables. It is natural to ask whether, if the quantum violation of the Bell inequality is not maximal, or if one does not care about self-testing the state or measurements, we can still estimate how close the underlying assemblage is to the reference one. In this work, we provide a systematic device-independent estimation by proposing a framework called "robust self-testing of steerable quantum assemblages". In particular, we consider assemblages violating several paradigmatic Bell inequalities and obtain the robust self-testing statement for each scenario. Our result is device-independent (DI), i.e., no assumption is made on the shared state or the measurement devices involved. Our work thus not only paves the way for exploring the connection between the boundary of the quantum set of correlations and steerable assemblages, but also provides a useful tool in the area of DI quantum certification. As two explicit applications, we show 1) that it can be used for an alternative proof of the protocol for DI certification of all entangled two-qubit states proposed by Bowles et al., and 2) that it can be used to verify all non-entanglement-breaking qubit channels with fewer assumptions compared with the work of Rosset et al.
9

Chen, Bin, Xuehe Lu, Shaoqiang Wang, Jing M. Chen, Yang Liu, Hongliang Fang, Zhenhai Liu, et al. "Evaluation of Clumping Effects on the Estimation of Global Terrestrial Evapotranspiration." Remote Sensing 13, no. 20 (October 12, 2021): 4075. http://dx.doi.org/10.3390/rs13204075.

Abstract:
In terrestrial ecosystems, leaves are aggregated into different spatial structures and their spatial distribution is non-random. The clumping index (CI) is a key canopy structural parameter characterizing the extent to which the leaf distribution deviates from random. To assess leaf clumping effects on global terrestrial ET, we used a global leaf area index (LAI) map, the latest version of the global CI product derived from MODIS BRDF data, and the Boreal Ecosystem Productivity Simulator (BEPS) to estimate global terrestrial ET. The results show that global terrestrial ET in 2015 was 511.9 ± 70.1 mm yr−1 for Case I, where the true LAI and CI are used. Compared to this baseline case, (1) global terrestrial ET is overestimated by 4.7% for Case II, where true LAI is used and clumping is ignored; and (2) global terrestrial ET is underestimated by 13.0% for Case III, where effective LAI is used and clumping is ignored. Among all plant functional types (PFTs), evergreen needleleaf forests were most affected by foliage clumping in the ET estimation in Case II, because they are the most clumped, with the lowest CI. Deciduous broadleaf forests are most affected by leaf clumping in Case III, because they have both high LAI and low CI compared to other PFTs. The leaf clumping effects on ET estimation in both Case II and Case III are robust to errors in the major input parameters. Thus, it is necessary to consider clumping effects in the simulation of global terrestrial ET, which has considerable implications for global water cycle research.
10

Nourou, Mohammadou, and Bybert Moudjare Helgath. "Vulnérabilité aux changements climatiques et croissance économique dans les pays du Golfe de Guinée : Preuve à l’aide du modèle de variables instrumentales à longue période." International Journal of Financial Studies, Economics and Management 1, no. 3 (November 25, 2022): 29–49. http://dx.doi.org/10.61549/ijfsem.v1i3.58.

Abstract:
The aim of this article is to study the impact of vulnerability to climate change on economic growth. We address this question using the long-run instrumental-variables model of Kripfganz and Sarafidis (2021). We then test the environmental Kuznets curve (EKC) hypothesis using the threshold model of Hansen (1999). Our study covers 10 countries over the period 1995-2020. The Kripfganz and Sarafidis (2021) estimates using this model show that vulnerability to climate change significantly increases economic growth, at the 1% level, by about 43.84%. Moreover, the nonlinear results using the Hansen (1999) model show that: (i) in the lower regime, a 1% increase in vulnerability significantly raises economic growth by 5.242% at the 5% level; (ii) in the upper regime, a 1% increase in vulnerability significantly raises growth by 5.856% at the 1% level. The results of the Hansen (1999) model are therefore consistent with those of the Kripfganz and Sarafidis (2021) model. These results are robust to sensitivity assessments and fixed-effects estimations.
11

Atallah, Gamal. "Les impôts sur le revenu et l’offre de travail des femmes mariées : une revue de la littérature." L'Actualité économique 74, no. 1 (February 9, 2009): 95–128. http://dx.doi.org/10.7202/602253ar.

Abstract:
This article surveys the literature analysing the effects of income taxes on the labour supply of married women. Taxes introduce non-linearities and non-convexities into the budget set. The basic model has benefited from several extensions, such as hours constraints, intertemporal labour supply, and the simultaneous decisions of spouses. Econometric approaches based on linearizing the budget constraint are less popular today, whereas methods based on the complete budget constraint and on discrete choices are in vogue. The most recent estimates show that taxes have negative, but limited, effects on the participation and working hours of married women. Accounting for constraints on working hours, together with more sophisticated estimation methods, plays an important role in this downward reassessment of tax disincentives. Nevertheless, the empirical results remain relatively fragile. Some implications of the results for economic policy are discussed.
12

Helman, David, Itamar M. Lensky, Yagil Osem, Shani Rohatyn, Eyal Rotenberg, and Dan Yakir. "A biophysical approach using water deficit factor for daily estimations of evapotranspiration and CO<sub>2</sub> uptake in Mediterranean environments." Biogeosciences 14, no. 17 (September 7, 2017): 3909–26. http://dx.doi.org/10.5194/bg-14-3909-2017.

Abstract:
Abstract. Estimations of ecosystem-level evapotranspiration (ET) and CO2 uptake in water-limited environments are scarce, and scaling up ground-level measurements is not straightforward. A biophysical approach using remote sensing (RS) and meteorological data (RS–Met) is adjusted to extreme high-energy water-limited Mediterranean ecosystems that suffer from continuous stress conditions, to provide daily estimations of ET and CO2 uptake (measured as gross primary production, GPP) at a spatial resolution of 250 m. The RS–Met was adjusted using a seasonal water deficit factor (fWD) based on daily rainfall, temperature and radiation data. We validated our adjusted RS–Met against eddy covariance flux measurements, using a newly developed mobile lab system and the single active FLUXNET station operating in this region (the Yatir pine forest station), at a total of seven forest and non-forest sites across a climatic transect in Israel (280–770 mm yr−1). RS–Met was also compared to the satellite-borne MODIS-based ET and GPP products (MOD16 and MOD17, respectively) at these sites. Results show that the inclusion of the fWD significantly improved the model, with R = 0.64–0.91 for the adjusted ET model (compared to 0.05–0.80 for the unadjusted model) and R = 0.72–0.92 for the adjusted GPP model (compared to R = 0.56–0.90 for the non-adjusted model). The RS–Met (with the fWD) successfully tracked observed changes in ET and GPP between dry and wet seasons across the sites. ET and GPP estimates from the adjusted RS–Met also agreed well with eddy covariance estimates on an annual timescale at the Yatir FLUXNET station (266 ± 61 vs. 257 ± 58 mm yr−1 and 765 ± 112 vs. 748 ± 124 gC m−2 yr−1 for ET and GPP, respectively). Comparison with MODIS products showed consistently lower estimates from the MODIS-based models, particularly at the forest sites.
Using the adjusted RS–Met, we show that afforestation significantly increased the water use efficiency (the ratio of carbon uptake to ET) in this region, with the positive effect decreasing when moving from dry to more humid environments, strengthening the importance of drylands afforestation. This simple yet robust biophysical approach shows promise for reliable ecosystem-level estimations of ET and CO2 uptake in extreme high-energy water-limited environments.
13

Gheit, Salem. "A Stochastic Frontier Analysis of the Human Capital Effects on the Manufacturing Industries’ Technical Efficiency in the United States." Athens Journal of Business & Economics 8, no. 3 (May 26, 2022): 215–38. http://dx.doi.org/10.30958/ajbe.8-3-2.

Abstract:
This study seeks to establish substantive empirical evidence on the role of college and non-college labour in productivity through technical efficiency in the manufacturing sector of the U.S. economy. The investigation fits a Cobb-Douglas stochastic frontier function with inefficiency effects to a set of panel data for 15 manufacturing industries over the period from 1998 to 2019. The contribution of this paper lies in the application of stochastic frontier analysis following the approach of Caudill et al. (1995), estimating and testing stochastic frontier production functions under the assumption of heteroscedasticity in the one-sided error term (inefficiency), which provides robust estimates of the technical efficiency measures. The paper also contributes to the literature in that it follows the Hadri (1999) approach and its extension for panel data, Hadri et al. (2003), assuming heteroscedasticity in both error terms (the one-sided inefficiency term and the two-sided symmetric random noise). The rationale for the double-heteroscedasticity estimation is that it yields more accurate measures of the effects of the technical efficiency determinants. It therefore adds another layer of confidence to the economic analysis of the impact of human capital components on manufacturing-sector efficiency and, by extension, its productivity. The stochastic frontier results show the effects of highly educated and low-educated workers – proxied by college and non-college labour – on technical inefficiency. The maximum likelihood estimates suggest that an increase in the percentage of hours worked by college workers tends to contribute positively to technical efficiency in the U.S. manufacturing industries, while the rise in the share of hours worked by non-college workers appears to have a negative impact on efficiency in these industries.
JEL Codes: J24, D24, C23, C24, Q12. Keywords: human capital, technical efficiency, stochastic frontier production, double heteroscedasticity, panel data
14

Ngomeni, Arlende Flore, Emmanuel Lucien Nomo Bidzanga, Marie Louise Avana, Martin Ngankam Tchamba, Cédric Djomo Chimi, and Cédric Djomo Chimi. "Potentiel de séquestration du carbone des agroforêts à base de caféier robusta (Coffea canephora var. robusta) dans les bassins de production du Cameroun." International Journal of Biological and Chemical Sciences 15, no. 6 (February 23, 2022): 2652–64. http://dx.doi.org/10.4314/ijbcs.v15i6.31.

Abstract:
English title: Carbon sequestration potential of Robusta coffee agroforests (Coffea canephora var. robusta) in production basins of Cameroon. A study to assess the carbon sequestration potential of Robusta coffee agroforests (RCAs) was carried out in four sites, contrasting in their biophysical and socio-economic characteristics and belonging to three coffee production basins in Cameroon. The methodological approach involved collecting inventory data from 120 RCAs of different ages (30 per site), which allowed a non-destructive estimation of the carbon sequestration potential using allometric equations. The assessed carbon sequestration potentials varied significantly between sites, from 67.84 ± 45.41 Mg C ha-1 in Ayos to 41.94 ± 15.50 Mg C ha-1 in Nkongsamba. Overall, these potentials decreased along the anthropization gradient of the study sites. RCAs in the [30-45[ and [45-60[ age classes (years) had the highest carbon sequestration potentials for associated species and coffee trees, respectively. The carbon sequestration potential of endogenous associated species (19.46 Mg C ha-1) was higher than that of introduced ones (6.32 Mg C ha-1). These results show that the contribution of RCAs to climate change mitigation mechanisms is clear and should be capitalized upon.
15

Aziz, Sameen, Saleem Ullah, Bushra Mughal, Faheem Mushtaq, and Sabih Zahra. "Roman Urdu sentiment analysis using Machine Learning with best parameters and comparative study of Machine Learning algorithms." Pakistan Journal of Engineering and Technology 3, no. 2 (October 22, 2020): 172–77. http://dx.doi.org/10.51846/vol3iss2pp172-177.

Abstract:
People talk on social media because they find it a good and easy way to express their feelings about a topic, a post, or a product on e-commerce websites. In Asia, people mostly use the Roman Urdu script to express their opinions. Sentiment analysis of Roman Urdu (Bilal et al. 2016) is a big challenge for researchers because of the lack of resources and the language's non-structured, non-standard syntax/script. We collected a dataset from Kaggle containing 21,000 manually annotated values and prepared the data for machine learning. We then applied different machine learning algorithms (SVM, logistic regression, random forest, Naïve Bayes, AdaBoost, KNN) (Bowers et al. 2018) with different parameters and kernels, and with TF-IDF features (unigram, bigram, uni-bigram) (Pereira et al. 2018), to find the best-fitting algorithm. From these we chose four algorithms and combined them on the dataset; after hyperparameter tuning, the best model was built by a Support Vector Machine with a linear kernel, achieving 80% accuracy, F1 score 0.79, precision 0.79 and recall 0.78, using grid-search cross-validation with 5 folds (Ezpeleta et al. 2018). We then performed experiments on robust linear regression model estimation using RANSAC (random sample consensus) (Huang, Gao, and Zhou 2018; Chum and Matas 2008), which gives the best estimators with 82.19%.
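For readers unfamiliar with RANSAC, the consensus idea this entry applies to robust regression can be sketched in a few lines. This is a generic, minimal pure-Python sketch of RANSAC line fitting on synthetic data, not the authors' pipeline; the tolerance, iteration count and data are illustrative assumptions.

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit a minimal 2-point sample,
    keep the model with the largest inlier consensus set, then refit on it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Ordinary least-squares refit on the consensus set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# 20 points exactly on y = 2x + 1, plus 4 gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -25), (12, 90), (15, -60)]
a, b = ransac_line(pts)
assert abs(a - 2) < 0.05 and abs(b - 1) < 0.5  # outliers are rejected
```

An ordinary least-squares fit on the same data would be pulled far off the true line by the four outliers; RANSAC ignores them because they never join the consensus set.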
16

Ahmad, Mahyudin, and Stephen G. Hall. "Trust-based social capital, economic growth and property rights: explaining the relationship." International Journal of Social Economics 44, no. 1 (January 9, 2017): 21–52. http://dx.doi.org/10.1108/ijse-11-2014-0223.

Abstract:
Purpose The purpose of this paper is to test whether the generalized trust variable is the best proxy for social capital in explaining the latter's effect on economic growth in a panel setting. Via a specially formulated theoretical framework, the authors also test whether the growth effect of social capital is direct or indirect and, if it is indirect, whether property rights can be the link between social capital and growth. Design/methodology/approach The authors begin by testing the robustness of the generalized trust variable in explaining the effect of social capital on growth and property rights. The authors then propose a number of trust-alternative variables, shown on theoretical arguments drawn from previous studies to contain an element of trust, to proxy for social capital, and re-estimate its effect on growth and property rights. In this study, the authors use panel estimation techniques, hitherto little used in social capital studies, which are capable of reducing omitted-variable bias and time-invariant heterogeneity compared with the commonly used cross-sectional estimation. Findings First, the authors find that the generalized trust data obtained from the World Values Survey (WVS) are unable to yield sufficiently robust results in panel estimation owing to a missing-observations problem. Using the proposed trust-alternative variables, the estimation results improve significantly and the authors are able to show that social capital is a deep determinant of growth and affects growth via the property rights channel. The findings also give supporting evidence for the primacy of informal rules and constraints, as proposed by North (2005), over the political prominence theory of Acemoglu et al. (2005). Research limitations/implications Generalized trust data obtained from the WVS, frequently used in the majority of social capital studies to measure social capital, yield highly non-robust results in panel estimation owing to the missing-observations problem.
Future studies in social capital intending to use panel estimation therefore need to find trust-alternative variables to proxy for social capital, and this paper has proposed four such variables. Originality/value The use of panel estimation techniques extends the evidence of the significance of social capital for economic growth and property rights, since previous social capital studies rely heavily on cross-sectional estimation. Owing to the availability of annual observations of the trust-alternative variables, this paper is able to find better results than estimation using generalized trust data.
17

Hernández Mesa, María, Nicolas Pilia, Olaf Dössel, and Axel Loewe. "Influence of ECG Lead Reduction Techniques for Extracellular Potassium and Calcium Concentration Estimation." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 69–72. http://dx.doi.org/10.1515/cdbme-2019-0018.

Abstract:
AbstractChronic kidney disease (CKD) affects 13% of the worldwide population and end stage patients often receive haemodialysis treatment to control the electrolyte concentrations. The cardiovascular death rate increases by 10% - 30% in dialysis patients than in general population. To analyse possible links between electrolyte concentration variation and cardiovascular diseases, a continuous non-invasive monitoring tool enabling the estimation of potassium and calcium concentration from features of the ECG is desired. Although the ECG was shown capable of being used for this purpose, the method still needs improvement. In this study, we examine the influence of lead reduction techniques on the estimation results of serum calcium and potassium concentrations.We used simulated 12 lead ECG signals obtained using an adapted Himeno et al. model. Aiming at a precise estimation of the electrolyte concentrations, we compared the estimation based on standard ECG leads with the estimation using linearly transformed fusion signals. The transformed signals were extracted from two lead reduction techniques: principle component analysis (PCA) and maximum amplitude transformation (Max- Amp). Five features describing the electrolyte changes were calculated from the signals. To reconstruct the ionic concentrations, we applied a first and a third order polynomial regression connecting the calculated features and concentration values. Furthermore, we added 30 dB white Gaussian noise to the ECGs to imitate clinically measured signals. For the noisefree case, the smallest estimation error was achieved with a specific single lead from the standard 12 lead ECG. For example, for a first order polynomial regression, the error was 0.0003±0.0767 mmol/l (mean±standard deviation) for potassium and -0.0036±0.1710 mmol/l for calcium (Wilson lead V1). 
For the noisy case, the PCA signal showed the best estimation performance, with an error of -0.003±0.2005 mmol/l for potassium and -0.0002±0.2040 mmol/l for calcium (both first-order fit). Our results show that PCA as an ECG lead reduction technique is more robust against noise than MaxAmp and standard ECG leads for ionic concentration reconstruction.
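The lead-fusion and regression pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are invented, the fusion keeps only the first principal component, and `numpy.polyfit` stands in for their regression procedure.

```python
import numpy as np

def pca_fusion_lead(ecg_leads):
    """Project multi-lead ECG samples onto the first principal component.

    ecg_leads: array of shape (n_samples, n_leads).
    Returns a single fused signal of shape (n_samples,).
    """
    centered = ecg_leads - ecg_leads.mean(axis=0)
    # SVD gives principal directions; the first right-singular vector
    # defines the maximum-variance linear fusion of the leads.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

def fit_concentration_model(features, concentrations, order=1):
    """First- or third-order polynomial regression linking an ECG feature
    to an electrolyte concentration, as described in the abstract."""
    return np.polyfit(features, concentrations, order)

def estimate_concentration(coeffs, feature):
    """Evaluate the fitted polynomial at a new feature value."""
    return np.polyval(coeffs, feature)
```

A fused signal could then replace a single standard lead as the source of the five morphological features before regression.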
APA, Harvard, Vancouver, ISO, and other styles
18

Widanalage, Dhammika, Oliver Queisser, Sabine Paarmann, and Thomas Wetzel. "(Invited) Estimating OCP-Temperature Dynamics for Determining Lithium-Ion Battery Entropy Coefficients." ECS Meeting Abstracts MA2023-02, no. 7 (December 22, 2023): 950. http://dx.doi.org/10.1149/ma2023-027950mtgabs.

Full text
Abstract:
The open-circuit potential (OCP) of a li-ion cell is a function of both state-of-charge and temperature. Characterising its dependence on temperature is important for thermal simulations as the derivative of the OCP with temperature acts as a reversible heat source term in the model. Known as the entropy coefficient, this derivative is a thermodynamic quantity that varies over the cell’s state-of-charge (SoC). The potentiometric approach [1] is widely used to characterise the entropy coefficient for a given SoC. This involves applying step changes in the temperature and allowing the cell OCP to reach (or approach) a steady-state value, which can take several hours (>8 hours), for a given SoC of interest. Robust techniques are required that can significantly reduce the experimental time and quantify the entropy coefficient over the full SoC window, leading to higher fidelity battery thermal models. In this work, the dynamics between the OCP and cell temperature are exploited via system identification techniques to derive the entropy coefficient. This is achieved by estimating the underlying kernel function between the cell OCP and temperature dynamics. The kernel function is a non-parametric estimate of the dynamics from which the entropy coefficient can be determined. In this work temperature signals are designed with multiple frequency components and applied to a cell via Peltier elements. The corresponding OCP response is then analysed, in the frequency domain, to estimate the kernel function. Unlike the potentiometric method, which drives the battery to steady-state and necessitates long experimental durations, the kernel function can be estimated under transients, facilitating faster characterisation. The figure demonstrates comparable results of the entropy coefficient, over the full SoC interval, when estimated via the kernel function and the potentiometric approach. 
Compared to the potentiometric method, however, the approach brings an approximately two- to three-fold reduction in experimental time per SoC and gives insight into the OCP and temperature dynamics. The signal design, kernel estimation, and entropy coefficient estimation procedures are detailed in this talk. [1] Schmidt, J. P., Weber, A., et al., Electrochimica Acta 2014, 137, 311–319. DOI 10.1016/j.electacta.2014.05.153
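The frequency-domain estimation of a kernel between a multi-frequency temperature excitation and the OCP response can be sketched as below. This is a hedged illustration only: the H1-style spectral-ratio estimator, the function name, and the regularisation constant are assumptions of this sketch, not the authors' exact identification procedure.

```python
import numpy as np

def estimate_kernel(temperature, ocp, fs):
    """Non-parametric frequency-response estimate between a temperature
    excitation and the OCP response.

    Uses the ratio of cross-spectrum to input auto-spectrum (an H1-style
    estimator). Returns frequencies and the complex response H(f); the
    low-frequency limit of H relates the OCP change to the temperature
    change, i.e. the entropy coefficient dU/dT.
    """
    t = temperature - temperature.mean()
    v = ocp - ocp.mean()
    T = np.fft.rfft(t)
    V = np.fft.rfft(v)
    freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
    # H1 estimator: S_tv / S_tt, with a tiny constant to avoid 0/0
    H = (np.conj(T) * V) / (np.conj(T) * T + 1e-12)
    return freqs, H
```

In practice the response would be evaluated only at the designed excitation frequencies, where the input spectrum has power.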
APA, Harvard, Vancouver, ISO, and other styles
19

Pahwa, P., C. P. Karunanayake, J. McCrosky, and L. Thorpe. "Tendances longitudinales en matière de santé mentale parmi les groupes ethniques au Canada." Maladies chroniques et blessures au Canada 32, no. 3 (June 2012): 182–95. http://dx.doi.org/10.24095/hpcdp.32.3.07f.

Full text
Abstract:
Introduction: Immigration continues to transform the ethnic composition of the Canadian population. We conducted a study to determine whether longitudinal trends in psychological distress varied across seven cultural and ethnic groups, and whether psychological distress within an ethnic group varied with demographic factors (immigrant status, sex, age, marital status, place and duration of residence), socioeconomic factors (education, income), social support, and lifestyle. Methods: The study population consisted of 14,713 respondents aged 15 and over from the first six cycles of the National Population Health Survey (NPHS); 20% reported at cycle 1 (1994-1995) that they were immigrants. The logistic regression model was fitted using a modified multivariate quasi-likelihood method, and robust variance estimates were obtained using balanced repeated replication resampling methods. Results: Based on the multivariate model and self-reported data, we observed that women were more likely than men to report moderate/high psychological distress; the same was true of younger respondents compared with older respondents, single respondents compared with those in a couple, urban compared with rural residents, less-educated compared with more-educated respondents, former and current smokers compared with non-smokers, and people living in a smoking household compared with those in a non-smoking household. Immigrant status, sex, social participation score, and education affected the relationship between ethnicity and psychological distress.
Consistent with other studies, we found an inverted U-shaped relationship between length of stay and psychological distress: respondents who had lived in Canada for less than 2 years were less likely to report moderate/high psychological distress, while respondents who had lived in Canada for 2 to 20 years were much more likely to report moderate/high psychological distress than those who had resided there for more than 20 years. Conclusion: Mental health programs specific to ethnicity should be developed, targeting people with low education and low social participation. In addition, policies and programs should target women, the youngest group (ages 15 to 24), and people in the low income adequacy categories.
APA, Harvard, Vancouver, ISO, and other styles
20

Kazerooni, Anahita Fathi, Rachel Madhogarhia, Sherjeel Arif, Jeffrey B. Ware, Sina Bagheri, Debanjan Haldar, Hannah Anderson, et al. "NIMG-102. RAPNO-DEFINED SEGMENTATION AND VOLUMETRIC ASSESSMENT OF PEDIATRIC BRAIN TUMORS ON MULTI-PARAMETRIC MRI SCANS USING DEEP LEARNING; A ROBUST TOOL WITH POTENTIAL APPLICATION IN TUMOR RESPONSE ASSESSMENT." Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii188—vii189. http://dx.doi.org/10.1093/neuonc/noac209.720.

Full text
Abstract:
Volumetric measurements of the whole tumor and its components on MRI scans, facilitated by automatic segmentation tools, are essential to reduce inter-observer variability in monitoring tumor progression and response assessment for pediatric brain tumors. Here, we present a fully automatic segmentation model based on deep learning that reliably delineates the tumor components recommended by the Response Assessment in Pediatric Neuro-Oncology (RAPNO) working group for evaluation of treatment response. Multi-parametric MRI (mpMRI) scans (T1-pre, T1-post, T2, and T2-FLAIR), acquired on multiple MRI scanners with different field strengths and vendors, were collected for a cohort of 218 pediatric patients with a variety of histologically confirmed brain tumor subtypes. The mpMRI scans were co-registered and manually segmented by experienced neuroradiologists in consensus to identify the tumor subregions, including the enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED) regions. A convolutional neural network based on the DeepMedic architecture was trained using the mpMRI scans as inputs for segmentation of the whole tumor and its subregions. The trained model showed excellent performance in segmentation of the whole tumor, with median Dice scores of 0.90/0.85 for the validation (n = 44)/independent test (n = 22) sets. ET and non-enhancing components (the union of NET, CC, and ED) were segmented with median Dice scores of 0.78/0.84 and 0.76/0.74 for the validation/test sets, respectively. The automated and manual segmentations demonstrated strong agreement in estimating VASARI (Visually AcceSAble Rembrandt Images) MRI features, with Pearson's correlation coefficient R > 0.75 (p < 0.0001) for the ET, NET, CC, and ED components.
Our proposed automated segmentation method, developed on MRI scans acquired with different protocols and equipment and from a variety of brain tumor subtypes, shows potential for reliable and generalizable volumetric measurements that can be used for treatment response assessment in clinical trials.
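The Dice score used to report the segmentation performance above has a standard definition; a minimal sketch for binary masks (the function name is assumed, and the convention of returning 1.0 for two empty masks is a choice of this sketch):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks are treated as a perfect match by convention.
    return 2.0 * intersection / denom if denom else 1.0
```

Per-subregion scores (ET, NET, CC, ED) would be computed by applying this to each label's binary mask separately.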
APA, Harvard, Vancouver, ISO, and other styles
21

Derakhti, Morteza, and James T. Kirby. "Breaking-onset, energy and momentum flux in unsteady focused wave packets." Journal of Fluid Mechanics 790 (February 9, 2016): 553–81. http://dx.doi.org/10.1017/jfm.2016.17.

Full text
Abstract:
Breaking waves on the ocean surface transfer energy and momentum into currents and turbulence. What is less well understood, however, is the associated total loss of wave energy and momentum flux. Further, finding a robust and universal diagnostic parameter that determines the onset of breaking and its strength is still an open question. Derakhti & Kirby (J. Fluid Mech., vol. 761, 2014, pp. 464–506) have recently studied bubble entrainment and turbulence modulation by dispersed bubbles in isolated unsteady breaking waves using large-eddy simulation. In this paper, a new diagnostic parameter ${\it\xi}(t)$ is defined based on that originally proposed by Song & Banner (J. Phys. Oceanogr., vol. 32, 2002, pp. 2541–2558), and it is shown that using a threshold value of ${\it\xi}_{th}=0.05$, the new dynamic criterion is capable of detecting single and multiple breaking events in the considered packets. In addition, the spatial variation of the total energy and momentum flux in intermediate- and deep-water unsteady breaking waves generated by dispersive focusing is investigated. The accuracy of estimating these integral measures based on free surface measurements and using a characteristic wave group velocity is addressed. It is found that the new diagnostic parameter just before breaking, ${\it\xi}_{b}$, has a strong linear correlation with the commonly used breaking strength parameter $b$, suggesting that ${\it\xi}_{b}$ can be used to parameterize the averaged breaking-induced dissipation rate and its associated energy flux loss. It is found that the global wave packet time and length scales based on the spectrally weighted packet frequency proposed by Tian et al. (J. Fluid Mech., vol. 655, 2010, pp. 217–257) are reasonable estimates of the time and length scales of the carrier wave in the packet close to the focal/break point.
A global wave steepness, $S_{s}$, is defined based on these spectrally weighted scales, and its spatial variation across the breaking region is examined. It is shown that the corresponding values of $S_{s}$ far upstream of breaking, $S_{s0}$, have a strong linear correlation with respect to $b$ for the considered focused wave packets. The linear relation, however, cannot provide accurate estimations of $b$ in the range $b<5\times 10^{-3}$. A new scaling law given by $b=0.3(S_{s0}-0.07)^{5/2}$, which is consistent with inertial wave dissipation scaling of Drazen et al. (J. Fluid Mech., vol. 611, 2008, pp. 307–332), is shown to be capable of providing accurate estimates of $b$ in the full range of breaking intensities, where the scatter of data in the new formulation is significantly decreased compared with that proposed by Romero et al. (J. Phys. Oceanogr., vol. 42, 2012, pp. 1421–1444). Furthermore, we examine nonlinear interactions of different components in a focused wave packet, noting interactive effect on a characteristic wave group velocity in both non-breaking and breaking packets. Phase locking between spectral components is observed in the breaking region as well, and subsequently illustrated by calculating the wavelet bispectrum.
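The proposed scaling law for the breaking strength parameter translates directly into code; treating sub-threshold steepness as non-breaking (b = 0) is an assumption of this sketch, not a claim from the paper:

```python
def breaking_strength(S_s0):
    """Scaling law quoted in the abstract: b = 0.3 * (S_s0 - 0.07)^(5/2).

    S_s0 is the global wave steepness far upstream of breaking. Below the
    threshold 0.07 the expression is undefined over the reals; this sketch
    returns 0.0 there (interpreted as no breaking).
    """
    excess = S_s0 - 0.07
    return 0.3 * excess ** 2.5 if excess > 0 else 0.0
```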
APA, Harvard, Vancouver, ISO, and other styles
22

THEOBALD, C. M., A. M. I. ROBERTS, M. TALBOT, and J. H. SPINK. "Estimation of economically optimum seed rates for winter wheat from series of trials." Journal of Agricultural Science 144, no. 4 (July 31, 2006): 303–16. http://dx.doi.org/10.1017/s0021859606006289.

Full text
Abstract:
The results of recent trials for winter wheat (Triticum aestivum L.) have influenced farming practice in the UK by encouraging the use of lower seed rates. Spink et al. (2000) have demonstrated that, particularly if sown early, wheat can compensate for reduced plant populations by increased tiller production. Results from seed-rate trials are usually analysed separately for each environment or each combination of environment and variety, and not combined into a single model. They therefore address the question of what the best seed rate would have been for each combination, rather than answer the more relevant question of what rate to choose for a future site. The current paper presents a Bayesian method for combining data from seed-rate trials and choosing optimum seed rates: this method can incorporate information on seed and treatment costs, crop value and covariates. More importantly, for use as an advisory tool, it allows incorporation of expert knowledge of the crop and of the target site. The method is illustrated using two series of trials: the first, carried out at two sites in 1997–99, investigated the effects of sowing date and variety in addition to seed rate. The second was conducted at seven sites in 2001–03 and included latitude and certain management factors. Recommended seed rates based on these series vary substantially with sowing date and latitude. Two non-linear dose-response functions are fitted to the data, the widely used exponential-plus-linear function and the inverse-quadratic function (Nelder 1966). The inverse-quadratic function is found to provide a better fit to the data than the exponential-plus-linear, and the latter function gives estimated optimum rates which are as much as 40% lower.
The economic consequences of using one function rather than the other are not great in these circumstances. The method is found to be robust to changes in the prior distribution and to other changes in the model used for dependence of yield on sowing date, latitude, variety and management factors.
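Choosing an economically optimum seed rate from a fitted dose-response curve can be sketched as below. Everything here is illustrative: the saturating-plus-linear yield form, the parameter values, and the grid-search range are assumptions of this sketch, not the paper's fitted Bayesian model.

```python
import numpy as np

def yield_expolinear(rate, a, b, c):
    """Hypothetical saturating-plus-linear dose-response curve:
    a rising exponential-saturation term plus a linear term."""
    return a * (1.0 - np.exp(-b * rate)) + c * rate

def optimum_seed_rate(price, seed_cost, a, b, c, rates=None):
    """Grid-search the seed rate maximising margin = crop value - seed cost.

    price: value per unit yield; seed_cost: cost per unit seed rate.
    The rate grid (seeds per m^2) is an assumed illustrative range.
    """
    if rates is None:
        rates = np.linspace(0.0, 600.0, 6001)  # step of 0.1
    margin = price * yield_expolinear(rates, a, b, c) - seed_cost * rates
    return rates[np.argmax(margin)]
```

With c = 0 the optimum has a closed form (marginal value of extra seed equals its cost), which makes the sketch easy to check.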
APA, Harvard, Vancouver, ISO, and other styles
23

White, G. C., and J. E. Hines. "Computing and software." Animal Biodiversity and Conservation 27, no. 1 (June 1, 2004): 175–76. http://dx.doi.org/10.32800/abc.2004.27.0175.

Full text
Abstract:
The reality is that the statistical methods used for analysis of data depend upon the availability of software. Analysis of marked animal data is no different than the rest of the statistical field. The methods used for analysis are those that are available in reliable software packages. Thus, the critical importance of having reliable, up–to–date software available to biologists is obvious. Statisticians have continued to develop more robust models, ever expanding the suite of potential analysis methods available. But without software to implement these newer methods, they will languish in the abstract, and not be applied to the problems deserving them. In the Computers and Software Session, two new software packages are described, a comparison of implementation of methods for the estimation of nest survival is provided, and a more speculative paper about how the next generation of software might be structured is presented. Rotella et al. (2004) compare nest survival estimation with different software packages: SAS logistic regression, SAS non–linear mixed models, and Program MARK. Nests are assumed to be visited at various, possibly infrequent, intervals. All of the approaches described compute nest survival with the same likelihood, and require that the age of the nest is known to account for nests that eventually hatch. However, each approach offers advantages and disadvantages, explored by Rotella et al. (2004). Efford et al. (2004) present a new software package called DENSITY. The package computes population abundance and density from trapping arrays and other detection methods with a new and unique approach. DENSITY represents the first major addition to the analysis of trapping arrays in 20 years. Barker & White (2004) discuss how existing software such as Program MARK require that each new model’s likelihood must be programmed specifically for that model. 
They wishfully think that future software might allow the user to combine pieces of likelihood functions together to generate estimates. The idea is interesting, and maybe some bright young statistician can work out the specifics to implement the procedure. Choquet et al. (2004) describe MSURGE, a software package that implements the multistate capture–recapture models. The unique feature of MSURGE is that the design matrix is constructed with an interpreted language called GEMACO. Because MSURGE is limited to just multistate models, the special requirements of these likelihoods can be provided. The software and methods presented in these papers gives biologists and wildlife managers an expanding range of possibilities for data analysis. Although ease–of–use is generally getting better, it does not replace the need for understanding of the requirements and structure of the models being computed. The internet provides access to many free software packages as well as user–discussion groups to share knowledge and ideas. (A starting point for wildlife–related applications is http://www.phidot.org.)
APA, Harvard, Vancouver, ISO, and other styles
24

Desnoyers, Alexandra, Michelle Nadler, Ramy Saleh, and Eitan Amir. "Fragility index of trials supporting approval of anti-cancer drugs in common solid tumors." Journal of Clinical Oncology 38, no. 15_suppl (May 20, 2020): 2055. http://dx.doi.org/10.1200/jco.2020.38.15_suppl.2055.

Full text
Abstract:
2055 Background: The Fragility Index (FI) quantifies the reliability of positive trials by estimating the number of events which would change statistically significant results to non-significant results. Here, we calculate the FI of trials supporting approval of drugs for common solid tumors. Methods: We searched Drugs@FDA to identify randomized trials (RCT) supporting drug approvals by the US Food and Drug Administration between January 2009 and December 2019 in lung, breast, prostate, gastric and colon cancers. We adapted the FI framework (Walsh et al. J Clin Epidemiol 2014) to allow use of time-to-event data. First, we reconstructed survival tables from reported data using the Parmar Toolkit (Parmar et al. Stat Med 1998) and then calculated the number of events which would result in a non-significant effect for the primary endpoint of each trial. The FI was then compared quantitatively to the number of patients in each trial who withdrew consent or were lost to follow-up. Multivariable linear regression was used to explore associations between RCT characteristics and the FI. Results: We identified 69 RCT with a median of 669 patients (range 123-4804) and 358 primary outcome events (range 56-884). The median FI was 26 (range 1-322). The FI was ≤10 in 21 trials (30%) and ≤20 in 31 trials (45%). Among the 69 RCT, the median number of patients who withdrew consent or were lost to follow-up was 27 (range, 6-317). The number of patients who withdrew consent or were lost to follow-up was equal to or greater than the FI in 42 trials (61%). There was a statistically significant inverse association between the FI and the trial hazard ratio (p < 0.001) and a positive association with the number of patients who were lost to follow-up or withdrew consent (p < 0.001). There was no association between trial sample size, year of approval or reported p-value and the FI.
Conclusions: Statistical significance of trials supporting drug approval in common solid tumors often relies on a small number of events. In most trials the FI was lower than the number of patients lost to follow-up or withdrawing consent. Post-approval randomized trials or real-world data analyses should be performed to ensure that effects observed in registration trials are robust.
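For a simple binary-outcome trial, the original Walsh-style Fragility Index can be computed by converting non-events to events in the lower-event arm until a Fisher exact test loses significance. This sketch uses 2x2 counts rather than the time-to-event adaptation the authors describe, and all names are assumptions of the sketch.

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities no larger than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def prob(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Walsh-style FI: convert non-events to events in the lower-event arm
    until p >= alpha. Returns the number of conversions, or None if the
    result is not significant to begin with."""
    if fisher_two_sided(events_a, n_a - events_a,
                        events_b, n_b - events_b) >= alpha:
        return None
    fi, a, b = 0, events_a, events_b
    while fisher_two_sided(a, n_a - a, b, n_b - b) < alpha:
        if a / n_a < b / n_b:
            a += 1
        else:
            b += 1
        fi += 1
    return fi
```

The time-to-event version in the abstract instead perturbs reconstructed survival tables, but the counting idea is the same.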
APA, Harvard, Vancouver, ISO, and other styles
25

Arnason, N., and E. Cam. "Multi-state models: metapopulation and life history analyses." Animal Biodiversity and Conservation 27, no. 1 (June 1, 2004): 93–95. http://dx.doi.org/10.32800/abc.2004.27.0093.

Full text
Abstract:
Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004) but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre–and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. 
Such states can also occur for life history reasons, such as dormancy or the inability to capture non–breeders and in these cases, the rates of transition to and from the unobservable state provide life history insights. The second failure Kendall considers is the misclassification of states (for example in models involving states for age, sex, breeding condition, etc. where these cannot be determined without error). He reviews solutions for these that encompass three approaches: constraints on parameters to ensure identifiability (the least desirable solution); incorporating additional information; and the use of sub–sampling that leads to the multi–state application of the Robust design. In passing, Kendall makes reference to what are probably the 3 most significant developments in the area of multi–state models since the last Euring meeting: (1) the incorporation of tag–recovery data in addition to recapture data in MSMR models; (2) a comprehensive methodology for goodness–of–fit testing and assessing parameter identifiability in MSMR models; and (3) the development of new software to make these methods accessible. Much of (2) and (3) is based on the landmark thesis of Olivier Gimenez (Gimenez, 2003). Two further presentations in this session followed up the plenary theme of unobservable and misclassified states. The presentation by Roger Pradel (Pradel, in press), represented in these proceedings as a brief abstract only, dealt with the problem of errors in sexing animals; an example of what Kendall refers to as bidirectional misclassification. The presentation by Marc Kéry (Kéry & Gregg, 2004) is, we think, the first occurrence in the Euring proceedings of an application of mark–recapture to plants. This presentation, represented in these proceedings by an extended abstract with complete references, is an example of dormancy as an unobservable state.
Even though the non–dormant states are observed with probability 1, MSMR permits reliable estimation of the proportion of dormant plants in the presence of state ambiguity (dormant or dead?) and can permit assessing the influence of environmental covariates on this proportion. The presentation by Christophe Barbraud (Barbraud & Weimerskirk, 2004), included in these proceedings as an extended abstract, takes the life–history trade–off application of MSMR models a step further by taking into account environmental and individual covariates on survival. The trade–off considered is between survival and transitions among several states describing breeding experience of long–lived petrels and how it is affected by harsh climate conditions. The study is a showcase for the powers of the new software (U–Care and M–Surge). The paper by Senar and Conroy (Senar & Conroy, 2004) is a novel application of MSMR models to animal epidemiology. States included age, sex and infected state and the model permits estimation of survival, infection, and recovery rates for birds during an outbreak of Serin avian pox. The use of a MSMR model permits estimation of the prevalence rate unconfounded by differences in capture rates of infected and non–infected birds. Here too there is a potential for ambiguous states in that the uninfected state might include both immune post–infection animals and susceptible pre–infection animals and these groups would likely have different survival rates. The authors are able to deal with this because of the length of the study and the availability of data outside the main outbreak. Finally, this session includes a paper by Jamieson and Brooks (Jamieson & Brooks, 2004) that appears to lie outside the MSMR theme of this session but which was included because of its relevance to metapopulation analyses. Our call for papers for this session also invited papers illustrating multi–population meta–analysis and use of Bayesian methods.
By these criteria, their paper is no outlier. It addresses the longstanding question of density dependence in game bird survival; a question of great interest to theoretical biologists and of vital importance to wildlife managers. The controversy arises because the density dependence revealed in estimates may not reflect the underlying density dependence mechanism in the population parameters. The Bayesian analysis presented here circumvents this problem by fitting a 2 stage model: a time series model for the true population sizes Nt allowing for density dependence and then for the distribution of the estimates given the Nt. The use of data–intensive sampling methods to fit this model neatly sidesteps the insurmountable problem for a purely frequentist (likelihood) approach of having to integrate out the Nt from the likelihood. (As a historical note, this problem confronted George Jolly when he developed the original Jolly Seber model and he handled it by simply fixing the unobservable marked pool sizes Mt at their maximum likelihood values… a procedure that biases the s.e. of the estimates). This paper is also instructive as the Bayesian methodology provides a straightforward means of accounting for model uncertainty in parameter estimates and model predictions. In summary, the session was a gratifying and useful mix of overview and case studies. The case studies are valuable for their sophisticated use of multi–state and Bayesian models and the subtlety and care with which inferences are drawn from the model fitting results.
APA, Harvard, Vancouver, ISO, and other styles
26

Noble, Jerald, Frederick L. Locke, Constanza Savid-Frontera, Michael D. Jain, Joel G. Turner, Julieta Abraham Miranda, Thushara W. Madanayake, et al. "CD19 Intron Retention Is Mechanism of CAR-T Treatment Resistance in Non-Hodgkin Lymphoma." Blood 142, Supplement 1 (November 28, 2023): 3506. http://dx.doi.org/10.1182/blood-2023-185074.

Full text
Abstract:
Background: Chimeric antigen receptor T-cell (CAR-T) therapy has become a viable treatment option for patients (pts) with relapsed or refractory diffuse large B-cell lymphoma (R/R DLBCL). Transcriptomic markers for CD19-directed CAR-T failure have been characterized in pre-treatment B-cell acute lymphoblastic leukemia (B-ALL) but not in DLBCL. Here, we report the identification of RNA alternative splicing (AS) events associated with CD19-directed CAR-T response in pre-treatment DLBCL pts and of RNA binding proteins (RNABP) regulating these AS events. Methods: To identify putative AS markers for CAR-T treatment outcome, we performed differential AS on bulk RNA-seq samples of pre-treatment tumor biopsies in pts that had a durable response (DR), defined via remission up to 9 months post-treatment, and pts that had a non-durable response (NDR), with rMATS-turbo (Shen et al., PNAS, 2014). Intron retention (IR) events within CD19 were quantified with a secondary, more robust algorithm to ensure reproducibility and accuracy. Survival analyses were done using the survival and survminer libraries in R. Validation assays for the downstream effects of CD19 IR were conducted in EJ1, Jeko-1, OCI-Ly3, SU-DHL-6, and Toledo cell lines. The surface protein expression of CD19 was assayed via flow cytometry (FC) followed by Quantibrite PE (BD Biosciences, San Jose, CA) quantification. Total CD19 protein expression was measured via western blot (WB) in the aforementioned cell lines. Analyses of CD19 IR were done via RT-PCR followed by electrophoresis, and bands were quantified with iBright Analysis Software (Thermo Fisher Scientific, Carlsbad, CA). The effect of RNABP was assessed via the creation of CRISPR/Cas9 knockout (KO) cell lines followed by WB of the target gene and quantification of CD19 surface protein expression by FC. Results: Introns 2 and 6 of CD19 are more highly expressed in NDR pts.
A combined metric estimating the proportion of normal CD19 isoforms that do not express introns 2 or 6 (normal CD19) was created, and DR pts expressed normal CD19 isoforms at significantly higher levels than NDR pts (Figure 1). Normal CD19 isoform classification performance was assessed via leave-one-out cross validation and achieved an overall accuracy of 67.6%. Sensitivity (classifying NDR) was 61.1% and specificity (classifying DR) was 81.3%. The expression of normal CD19 relative to the median level within the cohort was significantly associated with progression-free survival (Figure 2). The expression of CD19 intron 6 was negatively correlated with CD19 surface protein expression in DLBCL cell lines. Cell lines were sorted into cells that exhibited high CD19 surface expression (bright) and low expression (dim). Cell lines sorted into the dim category expressed higher levels of intron 6 than bright cells. The correlation of RNABP expression with CD19 intron 6 expression was assessed in this patient cohort and the NCICCR-DLBCL cohort. Six RNABP were significantly correlated in both cohorts (r² > 0.2 and p-value < 0.01). NOP2, one of the RNABP that was positively correlated with intron 6 expression, was knocked out in Jeko-1 cell lines to validate the putative correlation with CD19 protein expression. The resulting knockout cells exhibited lower levels of intron 6 retention and higher CD19 protein expression by FC and WB than non-transformed cells. Conclusion: Pre-treatment tumors of R/R DLBCL patients who relapse before 9 months following CD19-directed CAR-T therapy have higher expression of CD19 isoforms that retain introns 2 and 6. The upregulated expression of the CD19 isoforms retaining these introns is correlated with CD19 surface protein expression and is a marker for sustained remission following CAR-T treatment.
NOP2 expression is correlated with CD19 intron retention and surface protein expression and could be a target for modulating CD19 expression in conjunction with CAR-T treatment. Figure 1: The expression of normal CD19 isoforms in pre-treatment tumor samples of DR (red) and NDR patients (blue). ΔPSI denotes the difference in medians between the two groups. The p-value yielded by a Wilcoxon test between the two groups is reported. Figure 2: Progression-free survival (PFS) stratified by expression of normal CD19 isoforms. Red - above cohort median expression of normal CD19 isoforms. Blue - below cohort median expression of normal CD19 isoforms. P-value yielded via log-rank test.
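The two-cohort RNABP screen described above, keeping only proteins with r² > 0.2 against CD19 intron 6 retention in every cohort, can be sketched as follows. Gene names and expression values are invented, and the p-value < 0.01 filter from the abstract is omitted for brevity:

```python
# Sketch of the two-cohort correlation screen: keep RNABPs whose
# expression satisfies r^2 > 0.2 against CD19 intron 6 retention in all
# cohorts (the p-value filter is omitted here). Data are invented.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen(expr_by_cohort, intron6_by_cohort, r2_cut=0.2):
    """Keep genes passing the r^2 cutoff in every cohort."""
    return [gene for gene in expr_by_cohort[0]
            if all(pearson_r(cohort[gene], i6) ** 2 > r2_cut
                   for cohort, i6 in zip(expr_by_cohort, intron6_by_cohort))]

cohort_a = {"GENE_1": [1.1, 2.0, 2.9, 4.2, 5.1], "GENE_2": [3, 1, 4, 1, 5]}
cohort_b = {"GENE_1": [0.9, 2.1, 3.0, 3.8, 5.2], "GENE_2": [2, 5, 1, 4, 1]}
intron6_a = [1, 2, 3, 4, 5]
intron6_b = [1, 2, 3, 4, 5]
hits = screen([cohort_a, cohort_b], [intron6_a, intron6_b])
```

Here only GENE_1 clears the threshold in both cohorts, mimicking how the six significant RNABPs were retained.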
APA, Harvard, Vancouver, ISO, and other styles
27

Grimaldi, Amedeo, Francesco Verducci, Elena Colombo, Andrea Casalegno, and Andrea Baricci. "Modelling Analysis of MEA Dynamic Operation Under Real-World Automotive Driving Cycle." ECS Meeting Abstracts MA2023-02, no. 37 (December 22, 2023): 1703. http://dx.doi.org/10.1149/ma2023-02371703mtgabs.

Full text
Abstract:
In recent years, Proton Exchange Membrane Fuel Cells (PEMFC) have received increasing interest as candidates for light- and heavy-duty vehicle applications [1]. To guarantee efficient and durable operation of PEMFC, understanding the local conditions experienced by the MEA during dynamic operation is highly important. Effective numerical modelling can help to gain insight into the local heterogeneity linked to the water and thermal management of the cell. A 1+1D multiphase, dynamic, non-isothermal PEMFC model, implemented in the MATLAB Simulink environment, was used to investigate experimental data obtained on a segmented cell hardware [2]. Particular attention was paid to properly modelling the mass transport and kinetic phenomena in the catalyst layer (CL), such as the local oxygen transport resistance through the ionomer thin film [3] and the platinum oxide formation/reduction reaction, and their mutual interaction. Exploiting experimental data obtained from tests on the segmented cell hardware, it was possible to robustly validate the 1+1D model at both the global and local level, thanks to the spatially resolved information available. By simulating the real-world driving cycle (DLC) protocol developed within the ID-FAST project [4], drying periods of the ionomer contained in the catalyst layers and in the membrane, as well as flooding periods of the porous layers and channels, were identified and analyzed during low-load and high-load operation. The performance model was coupled with a catalyst durability model to investigate the effect of ageing on the dynamic performance of PEMFC. An innovative semi-empirical electrochemical surface area (ECSA) loss model was developed, based on the platinum dissolution mechanism, and initially validated against accelerated stress tests under different operating conditions [5].
The simulated load profile and local operating conditions in the CLs were processed by a dedicated algorithm that converts a driving cycle into a combination of elementary steps and associates an ECSA loss factor with each identified element, thus estimating the retained local and global ECSA. Exploiting an experimental database of 1000 operating hours of the ID-FAST DLC, the effect of local operating conditions on PEMFC degradation and performance was further elucidated, focusing mainly on the effect of reactant humidification. The developed model framework succeeded in estimating the performance loss during driving cycle operation, as visible in Figure 1(a-b). The simulated and experimental ECSA decay over time is shown in Figure 1(c), demonstrating good model prediction capability. The obtained results showed the potential of the model to capture complex two-phase and non-isothermal transport phenomena in both the along-channel and through-MEA directions, consistently with a large set of experimental data. Experimental data on the single cell were collected under the ID-FAST project (Grant Agreement No 779565, Joint Undertaking, EU Horizon 2020). References: [1] D. A. Cullen et al., Nat. Energy, 6, 462–474 (2021) https://www.nature.com/articles/s41560-021-00775-z. [2] E. Colombo, A. Baricci, A. Bisello, L. Guetaz, and A. Casalegno, J. Power Sources, 553, 232246 (2023) https://doi.org/10.1016/j.jpowsour.2022.232246. [3] T. Suzuki, H. Yamada, K. Tsusaka, and Y. Morimoto, J. Electrochem. Soc., 165, F166–F172 (2018) https://iopscience.iop.org/article/10.1149/2.0471803jes. [4] F. Wilhelm et al., ID-FAST D4.3 – Analysis of coupling between mechanisms and definition of combined ASTs, p. 49, (2021) https://www.id-fast.eu/uploads/media/ID-FAST_D4-3_Analysis_of_coupling_between_mechanisms_and_definition_of_combined_ASTs_OK.pdf. [5] A. Kneer and N. Wagner, J. Electrochem. Soc., 166, F120–F127 (2019) https://iopscience.iop.org/article/10.1149/2.0641902jes. Figure 1
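The ECSA bookkeeping described above, a driving cycle decomposed into elementary steps that each carry a loss factor, can be sketched in a few lines. The multiplicative composition, the step types, and the numeric factors are our own illustrative assumptions, not the published model:

```python
# Sketch: accumulate ECSA loss over a driving cycle decomposed into
# elementary steps, each step multiplying the retained ECSA by
# (1 - loss factor). Step labels and factors are illustrative only.

def retained_ecsa(steps, loss_factor, ecsa0=1.0):
    """Return the ECSA fraction retained after the given step sequence."""
    ecsa = ecsa0
    for step in steps:
        ecsa *= 1.0 - loss_factor[step]
    return ecsa

# Hypothetical per-occurrence loss factors for three elementary steps
factors = {"idle": 1e-6, "load_change": 5e-6, "high_potential_hold": 5e-5}
cycle = ["idle", "load_change", "high_potential_hold", "load_change", "idle"]

after_one_cycle = retained_ecsa(cycle, factors)
after_many = retained_ecsa(cycle * 3600, factors)  # many repeated cycles
```

A per-step multiplicative factor makes the retained ECSA decay roughly exponentially with cycle count, qualitatively matching the decelerating ECSA loss curves typically observed.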
APA, Harvard, Vancouver, ISO, and other styles
28

Lim, Andrew B. M., Andrew W. Roberts, Kate Mason, Ashish R. Bajel, Jeffrey Szer, and David Ritchie. "Single-Centre Validation Of a Disease Risk Index For Estimating Survival and Relapse In Allogeneic Hematopoietic Stem Cell Transplant Recipients: Sample Size, Adequate Follow-Up, and Use Of Local Data Are Vital Considerations." Blood 122, no. 21 (November 15, 2013): 2143. http://dx.doi.org/10.1182/blood.v122.21.2143.2143.

Full text
Abstract:
Abstract Introduction Allogeneic hematopoietic stem cell transplantation (alloHSCT) is applied to a wide range of malignant hematologic diseases across a wide range of stages of disease activity, from durable complete remission to resistant relapse. Reporting transplant outcome by disease category alone, without considering disease activity, limits statistical analysis and interpretation of outcomes across time or institutions. Recently, a tool for clustering different combinations of disease and disease stage – the Disease Risk Index (DRI) – was developed and validated to stratify for overall survival (OS), progression free survival (PFS) and cumulative incidence of relapse (CIR) [1]. Aims 1. To independently validate the DRI's ability to stratify patients for OS, PFS and CIR using data from our centre. 2. To determine if absolute estimates of OS, PFS and CIR derived from the original training cohort can be used to accurately predict OS, PFS and CIR for an independent, contemporaneous cohort. 3. To determine approximate thresholds of sample size and follow-up duration below which the DRI may fail to successfully stratify patients for OS, PFS and CIR. Methods From our institutional transplant database, we extracted data from 466 patients undergoing alloHSCT for hematologic malignancy at our institution between the years 2001 and 2011, with a median survivor follow-up of 55.2 months (range 5.0-139.0), to validate and further explore the utility of the DRI. Data for time-to-event outcomes were locked for analysis on 14 February 2013.
Results We identified that similar to the published findings, the DRI was able to significantly stratify for OS (at 4 years, OS for low DRI 81% [95% CI 70-94%]; intermediate DRI 68% [63-74%]; high DRI 41%, [32-51%]; very high DRI 0%; P < .001; see Figure 1), PFS (at 4 years, PFS for low DRI 72% [59-87%]; intermediate DRI 61% [55-67%]; high DRI 32% [24-42%]; very high DRI 0%; P < .001), and CIR (at 4 years, CIR for low DRI 14% [3-25%]; intermediate DRI 27% [21-32%]; high DRI 48% [39-57%]; not calculable for very high DRI; P < .001). The DRI was not predictive of non-relapse mortality (P = .379). The DRI retained its prognostic power when applied to subgroups of patients who received either myeloablative or non-ablative conditioning; non-T cell depleted transplants; and sibling donor transplants only. When compared with the original published training cohort, survival and relapse outcomes from a contemporaneous cohort (n = 324, alloHSCT between the years 2001 and 2008) from our institution were superior to those of the Boston cohort for low, intermediate and high DRI groups (4 year OS: low DRI 88% [95% CI 77-100%] vs 64% [56-70%]; intermediate DRI 68% [62-75%] vs 46% [42-50%]; high DRI 42% [33-53%] vs 26% [21-31%]; 4 year CIR: low DRI 12% [6-24%] vs 19% [13-24%]; intermediate DRI 26% [20-32%] vs 36% [33-40%]; high DRI 47% [37-57%] vs 55% [50-60%]). These findings underscore the importance of calibration with local data before using the DRI for predicting absolute rates of survival and relapse in individual institutions. To further explore if the DRI retained its power in smaller cohorts, we tested the DRI in smaller subsets of patients selected randomly from our data. 
We found the DRI successfully stratified a cohort of 100 patients (median survivor follow-up 51.1 months, range 8.5-139.0) for OS (P = .010), PFS (P = .016) and CIR (P = .027), but failed to stratify a cohort of 50 patients (median survivor follow-up 46.2 months, range 10.3-135.5) for survival (P = .385 for OS, P = .167 for PFS, P = .026 for CIR). Likewise, under simulated conditions of shorter follow-up, the DRI successfully stratified a cohort (n = 322) with median surviving follow-up of 40.6 months for OS and PFS, but failed to stratify a cohort (n = 242) with median surviving follow-up of 33.1 months. Conclusion We find the DRI to be a simple, practical and robust tool for pre-transplant risk stratification, and estimation of survival and relapse in alloHSCT recipients, when calibrated with local alloHSCT outcome data. However, users should be familiar with its limitations when applied to smaller cohorts or cohorts with shorter follow-up. References 1. Armand P, Gibson CJ, Cutler C et al. A disease risk index for patients undergoing allogeneic stem cell transplantation. Blood 2012;120:905-913. Disclosures: No relevant conflicts of interest to declare.
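The OS, PFS and CIR curves underlying the stratification above rest on the Kaplan-Meier product-limit estimator. A hand-rolled sketch of the estimator itself, with invented follow-up data (real analyses of this kind typically use packages such as R's survival/survminer):

```python
# Sketch of the Kaplan-Meier product-limit estimator: at each event time,
# multiply the running survival by (1 - deaths / number at risk).
# Follow-up data below are invented for illustration.

def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = event, 0 = censored.
    Returns a list of (time, S(time)) pairs at each event time."""
    order = sorted(zip(times, events))     # process subjects in time order
    at_risk = len(order)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = order[i][0]
        deaths = ties = 0
        while i < len(order) and order[i][0] == t:   # group tied times
            deaths += order[i][1]
            ties += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= ties                    # drop everyone observed at t
    return curve

times = [1, 2, 2, 3, 4]    # months of follow-up (invented)
events = [1, 1, 0, 1, 0]   # 1 = event, 0 = censored
curve = kaplan_meier(times, events)
```

For this toy cohort the curve steps to 0.8 at t=1, 0.6 at t=2, and 0.3 at t=3 (the censored subjects only shrink the risk set).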
APA, Harvard, Vancouver, ISO, and other styles
29

Mesa, Ruben A., Chin Yang Li, Susan Schwager, and Ayalew Tefferi. "Increased Intramedullary T Cells Corresponds with Response to Thalidomide (THAL) Therapy in Primary Myelofibrosis." Blood 112, no. 11 (November 16, 2008): 5234. http://dx.doi.org/10.1182/blood.v112.11.5234.5234.

Full text
Abstract:
Abstract BACKGROUND: THAL has been used as immunomodulatory therapy alone (Tefferi et al. Blood 2000;96:4007), or with steroids (Mesa et al. Blood 2003;101:2534), in patients with primary myelofibrosis (PMF). Clinical benefit has been limited to improvements in cytopenias and/or splenomegaly. However, major histologic remissions of intramedullary manifestations of disease have not typically been observed. The mechanism of action of THAL in MF remains uncertain, and may involve cytokine inhibition or immunomodulation. We have previously observed increased numbers of CD8+ T lymphocytes on marrow trephines of patients who responded to THAL, and hypothesized that alterations in the T-cell distribution and population may provide insight into an immunomodulatory response and correspond to subsequent response in PMF patients. Methods: We analyzed a cohort of PMF patients who had received THAL therapy (alone or in combination with corticosteroids) at our institution and analyzed their clinical course and response to THAL therapy. Marrow trephines were evaluated in a systematic fashion (by CYL, blinded to clinical course) for cellularity and degree of reticulin fibrosis (0–4+). Subsequently, megakaryocytes were assessed for quantitative features (increased, decreased, or normal numbers) and qualitative features (size [large or small] and the presence or absence of morphologic dysplasia). T cells were assessed by immunohistochemical staining for CD8, with an emphasis on the number of T cell clusters per trephine and the pattern of those clusters (large vs. small). Finally, an estimation of the number of T cells per high power histology field was conducted in 3 distinct areas, and an average per marrow trephine obtained. All histologic features were then compared with PMF prognostic scores and laboratory values at presentation. Amongst those who received THAL, baseline histology was compared to response, and in those patients with serial marrows, longitudinal changes were assessed.
Results: A cohort of 140 patients with PMF was analyzed: 65 had received THAL during the course of their illness (53 (82%) along with prednisone) and 75 formed a comparison group who never received THAL. Patients from both groups had similar PMF prognostic scores and presentations. Comparison of baseline histologic features demonstrated that individuals with baseline increases in T cell numbers (>100 per high power field) were significantly more likely to subsequently respond to THAL therapy (p<0.01). Additionally, analysis of megakaryocyte morphology demonstrated that those patients in whom megakaryocytes were atypical for PMF (i.e. small) were significantly more likely to have thrombocytopenia and were less likely to respond to THAL (p=0.05). Independent assessment of the prognostic value of these individual histologic features failed to reveal any direct impact upon survival by Kaplan-Meier analysis.

Table 1. Comparison of Histologic Features in 140 patients with Myelofibrosis

| Group | Cellularity | Fibrosis ≥3 | Meg numbers | Meg size | T cell clusters, median (range) | T cells per HPF, median (range) | >100 T cells per HPF |
|---|---|---|---|---|---|---|---|
| Control (N=75) | 72% (10–100) | 33% | 91% increased | 21% small | 4 (0–16) | 80 (13–258) | 37% |
| THAL, all (N=65) | 60% (5–95) | 62% | 79% increased | 38% small | 3 (0–14) | 93 (20–500) | 43% |
| THAL responders (N=28) | 60% (5–95) | 56% | 82% increased | 22% small (p=0.05) | 5 (0–10) | 113 (45–198) | 61% (p<0.01) |
| THAL non-responders (N=37) | 60% (5–95) | 67% | 76% increased | 48% small | 3 (0–14) | 70 (20–500) | 26% |

A subgroup of patients who received THAL (N=12) had a second, post-therapy marrow available for comparison of THAL effects upon the marrow. Although the numbers were small, there were no statistically significant differences in the histologic parameters analyzed, including cellularity, degree of fibrosis, megakaryocyte morphology, or T cell clustering or numbers. Inclusion of prednisone in the regimen had no impact on the histologic prognostic observations.
Conclusions: THAL therapy for MF patients can be associated with significant clinical benefit for cytopenias and splenomegaly, and, interestingly, patients with increased marrow T-cell populations at baseline have a clearly increased likelihood of response. Although this analysis is unable to gauge mechanism, it would suggest a greater potential for immunomodulation in patients with a robust T-cell population. Additionally, the analysis of megakaryocytes suggests that the presence of small megakaryocytes may, in contrast, represent a more resistant phenotype of disease, with more thrombocytopenia and THAL resistance.
APA, Harvard, Vancouver, ISO, and other styles
30

McGann, Patrick T., Omar Niss, Min Dong, Anu Marahatta, Tomoyuki Mizuno, Karen Kalinyak, Theodosia A. Kalfa, et al. "Clinical and Laboratory Benefits of Early Initiation of Hydroxyurea with Pharmacokinetic Guided Dosing for Young Children with Sickle Cell Anemia." Blood 132, Supplement 1 (November 29, 2018): 507. http://dx.doi.org/10.1182/blood-2018-99-112909.

Full text
Abstract:
Abstract Background: Hydroxyurea is now the standard of care for children with sickle cell anemia (SCA). Results from the BABY HUG study and recommendations from the 2014 NHLBI Guidelines have led to early initiation (increasingly before 1 year of age) of hydroxyurea for many patients. Given the known variability in hydroxyurea pharmacokinetics (PK), treatment response (HbF%), and maximum tolerated dose (MTD), we hypothesized that individualized dosing would provide the optimal treatment approach. Methods: The Therapeutic Response Evaluation and Adherence Trial (TREAT, ClinicalTrials.gov NCT02286154) is a prospective study of a personalized, PK-guided dosing model of hydroxyurea for children with SCA. Using population PK model-based Bayesian estimation, each participant's PK data are used to generate an individualized starting hydroxyurea dose that targets an area under the curve associated with actual MTD. Clinical follow-up and subsequent dose adjustments target MTD, usually defined by ANC<3.0×10^9/L. We analyzed clinical and laboratory data for TREAT participants who started hydroxyurea before 2 years of age, to allow for comparison to published results from BABY HUG, which included a similar young cohort but with conservative weight-based dosing of 20 mg/kg/day. TREAT participants had ongoing clinical and research evaluations of organ function, including transcranial doppler (TCD) studies, RBC pit counts, and cystatin C measurements. Results: The analysis of children starting hydroxyurea before 2 years of age included 33 participants (of 47 total TREAT enrollments), who contributed a total of 59.5 patient-years of hydroxyurea therapy. The mean age (±SD) at hydroxyurea initiation was 1.0±0.4 years. The average PK-guided, individualized starting dose was 27.8±5.3 mg/kg/day, higher than conventional and BABY HUG initial dosing (20 mg/kg/day).
For children who have completed 12 months of therapy (n=24), the effects on hematologic laboratory values are remarkable, with an average HbF of 35.9±8.9% and hemoglobin concentration of 10.2±1.1 g/dL after 12 months of therapy (compared to 29.3±8.8% and 9.2±1.3 g/dL at baseline). The majority (70%) of these participants have HbF>30%, and almost half achieved HbF>40% after 12 months of hydroxyurea. This hematological response is more robust than that observed in BABY HUG (HbF=22.4%, Hb=9.1 g/dL after two years of therapy; Wang WC et al. Lancet 2011). In the TREAT cohort, there were no episodes of dactylitis, acute splenic sequestration, or stroke. There were 111 emergency room or sick outpatient clinic visits in this young cohort; 107 ED/clinic visits (without subsequent hospitalization) were for fever, URI symptoms, GI illness, or other non-specific complaints unrelated to SCA, while only 4 (3.6%) visits were for pain. There were 38 hospitalizations in 17 participants, mostly for routine evaluation of fever (66%), with no positive blood cultures and no admissions for febrile neutropenia. The average length of hospitalization was 2.8±2.4 days, with 81% of participants discharged within 72 hours of admission. There were 3 episodes of acute chest syndrome in 2 patients, two of whom required PRBC transfusion. Including all types of visits, there were only 6 pain events, equivalent to 10.1 pain events per 100 patient-years, which is much lower than the published 94 events per 100 patient-years in the hydroxyurea treatment arm of BABY HUG (Thornburg CD et al. Blood 2012). There were 37 TCD exams performed in 16 participants, all normal except for one patient with conditional velocities that normalized with hydroxyurea. There were no significant differences from baseline to month 12 in either RBC pit counts or cystatin C values.
Conclusions: Hydroxyurea initiation at an early age using PK-guided dosing provides significant clinical benefits for young children with sickle cell anemia. These TREAT study data suggest that initiating hydroxyurea around one year of life using a personalized dosing strategy can provide better clinical and laboratory benefits than starting at the conventional 20 mg/kg/day weight-based dose. Very high HbF levels are observed at modest and well-tolerated doses of hydroxyurea, perhaps because treatment was initiated before the process of HbF inactivation is complete. Continued long-term follow-up of these patients will determine whether these benefits will be sustained and able to prevent both short- and long-term complications of SCA. Disclosures Malik: CSL Behring: Patents & Royalties. Quinn: Silver Lake Research Corporation: Research Funding; Global Blood Therapeutics: Research Funding; Amgen: Research Funding. Ware: Biomedomics: Research Funding; Nova Laboratories: Consultancy; Bristol Myers Squibb: Research Funding; Addmedica: Research Funding; Global Blood Therapeutics: Other: advisory board; Agios: Other: advisory board; Novartis: Membership on an entity's Board of Directors or advisory committees.
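The PK-guided dosing idea, fitting an individual's PK parameter against a population prior and then choosing the dose that hits a target exposure, can be caricatured in a few lines. Everything below (the one-compartment bolus model, parameter values, lognormal prior, grid search) is an illustrative stand-in, not the TREAT population PK model:

```python
# Toy sketch of PK-guided dose individualisation: maximum a posteriori
# (MAP) estimation of apparent clearance from sparse concentrations,
# then a dose targeting a desired AUC (dose = CL * AUC_target).
import math

V = 20.0                     # apparent volume of distribution (L), assumed fixed
SIGMA = 0.15                 # residual lognormal error sd (assumed)
CL_POP, OMEGA = 10.0, 0.3    # population clearance (L/h) and its variability

def conc(dose_mg, cl, t):
    """One-compartment bolus: C(t) = (D/V) * exp(-(CL/V) * t)."""
    return dose_mg / V * math.exp(-cl / V * t)

def map_clearance(dose_mg, samples):
    """Grid-search MAP estimate of CL given (time, concentration) pairs."""
    best_cl, best_lp = None, -math.inf
    for i in range(1, 400):
        cl = i * 0.1                                      # grid 0.1..39.9 L/h
        lp = -(math.log(cl / CL_POP) ** 2) / (2 * OMEGA ** 2)  # prior
        for t, c_obs in samples:
            resid = math.log(c_obs) - math.log(conc(dose_mg, cl, t))
            lp -= resid ** 2 / (2 * SIGMA ** 2)                # likelihood
        if lp > best_lp:
            best_cl, best_lp = cl, lp
    return best_cl

def individual_dose(cl, auc_target):
    return cl * auc_target        # since AUC = dose / CL for this model

# Noise-free samples simulated from a hypothetical patient with CL = 14 L/h
obs = [(t, conc(100.0, 14.0, t)) for t in (0.5, 1.0, 2.0, 4.0)]
cl_hat = map_clearance(100.0, obs)
```

The prior pulls the estimate slightly toward the population mean, which is the usual Bayesian shrinkage behaviour; with richer data the likelihood dominates.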
APA, Harvard, Vancouver, ISO, and other styles
31

Perheentupa, Viljami, Ville Mäkinen, and Juha Oksanen. "Making post-glacial uplift visible: A model based high-resolution animation of shore displacement." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-296-2019.

Full text
Abstract:
Abstract. Glacial isostatic adjustment (GIA) is an ongoing phenomenon that characterizes the landscape of the High Coast (63°04'N, 18°22'E, Sweden) / Kvarken archipelago (63°16'N, 21°10'E, Finland) UNESCO World Heritage site. GIA occurs as the Earth's crust that was depressed by the continental ice sheet during the last glacial period is slowly rebounding towards isostatic equilibrium. The maximum rate of land uplift in the area is more than eight millimetres per year, which – along with the very different topographical reliefs of the opposite coasts – makes the region an excellent study area for land uplift as a phenomenon. As there is a marine area between the coasts, shore displacement is an essential part of the phenomenon in the study area.

The cartographic representation of GIA and shore displacement has classically relied on static maps representing isobases of the uplift rates and of ancient shorelines. However, to dynamically visualize and communicate the continuity and the nature of the phenomena, an animated map is required. To create a visually balanced, seamless animation, we need to create high-resolution image frames that represent digital elevation models (DEMs) together with extracted shorelines at different moments in time. To create these frames, we developed a mathematical model to transform the DEM for any given time over the past ~9300 years. We used the most recent LiDAR-derived DEMs of Finland and Sweden, and a bathymetric model of the Gulf of Bothnia as our initial data, along with a land uplift rate surface derived from geophysical measurements. We compared the current uplift rates with the shoreline observations of the ancient Baltic Sea stages, the Litorina Sea and Ancylus Lake, and created a linear model between the elevations of the shorelines and the present-day uplift rates, as there was a near-linear correlation in both cases.
Based on the current uplift rates and the elevations and dating of the ancient shorelines, we derived an exponential model to describe the non-linear correlation between elapsed time and the occurred land uplift. Near the present time, we adapted the formula proposed by Ekman (2001) to make the model more robust closer to the present day.

We assumed that although the uplift rate varies in time, the spatial relation of uplift rates remains the same. Furthermore, as land uplift is an exponentially decelerating phenomenon occurring at a significantly lower annual rate than shortly after the de-glaciation (Eronen et al. 2001, Nordman et al. 2015), and with most of the total uplift having already occurred (Ekman 1991), we assumed a constant rate of uplift from the present day into the near geological future. We did not consider potential sea level changes caused by human-driven climate change in the predictions, as the geological time scale vastly exceeds the time range of the climate models. Neither did we take into account the historical transgression phases, as they did not appear dominant in the area.

The elevation and bathymetry data were harmonized and resampled into 4K (3840 × 2160) pixel dimensions to utilize the best commercially available screen resolutions and to avoid unnecessary sub-pixel level computations. This resulted in a spatial pixel size of about 200 metres. The initial spatial resolution of the DEMs of Finland and Sweden was 2 metres and 1 metre, respectively, while the bathymetric data had a spatial pixel size of 400 metres. This, along with the fact that the bathymetric data was partly modelled and inaccurate near the coastlines, meant that it had to be oversampled to generate plausible coastal bathymetry and to allow any future estimations of shore displacement. All the datasets were resampled to the EPSG:3857 Pseudo-Mercator projection to facilitate any future use in web map applications.
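As a toy stand-in for the exponential uplift model described above, one can assume the uplift rate t years before present was r(t) = r_now · exp(t/τ) and integrate it to transform a present-day DEM cell back in time. The relaxation time τ and the fixed-sea-level simplification are invented for illustration, not fitted values from the study:

```python
# Sketch: exponentially decelerating land uplift. Accumulated uplift from
# `years_bp` years ago to today, assuming r(t) = r_now * exp(t / tau).
import math

TAU = 4000.0   # assumed relaxation time in years (illustrative, not fitted)

def accumulated_uplift(rate_now_mm_yr, years_bp, tau=TAU):
    """Total uplift in metres between `years_bp` years ago and today."""
    r_now = rate_now_mm_yr / 1000.0            # mm/yr -> m/yr
    return r_now * tau * (math.exp(years_bp / tau) - 1.0)

def paleo_elevation(elev_now_m, rate_now_mm_yr, years_bp):
    """Elevation of a DEM cell `years_bp` years ago (sea level held fixed;
    eustatic changes and transgressions are ignored in this sketch)."""
    return elev_now_m - accumulated_uplift(rate_now_mm_yr, years_bp)

# A cell currently at 80 m in the ~8 mm/yr uplift zone, 9300 years BP:
old_elev = paleo_elevation(80.0, 8.0, 9300)
```

Applying such a transform per pixel, for each five-year time step, yields the sequence of paleo-DEMs from which the animation frames are rendered.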
As the visualized area is only about 430 kilometres in the north-south direction, the use of this projection did not introduce cartographic issues.

The rendered frames required by the animation were produced by a programmatic conversion of raster files to RGB images. The visualization of shore displacement was implemented with a discontinuity, at sea level, in the elevation-dependent colour scale. The bathymetry was visualized with a continuous colour scale in shades of blue up to an elevation of zero metres. Elevations above zero were visualized with a colour scale starting from green, to create the impression of a discrete shoreline (Figure 1).

The whole process, from computing the DEMs to rendering the frames, was implemented in Python, without the need for traditional GUI-operated GIS or image processing software. The raster data were read and processed with the GDAL and NumPy libraries, and the visualization was carried out using Matplotlib and the Python Imaging Library. Each DEM was given the same elevation-based colour scale and an individually created hillshading that was blended with the image by multiplication. The whole process was carried out as an open source solution.

The interval between the calculated frames was set to five years as, particularly on the Swedish coast, the shore displacement can appear abrupt with a longer time interval. The frame duration was set to 0.05 seconds, which means a 100-second duration for an animation of 10 000 years.

The resulting DEM reconstructions show good agreement with comparable data, such as the Litorina reconstructions by the Geological Survey of Finland (GTK). Also, the mathematical model appears to be in line with previous reconstructions conducted in the area (e.g. Nordman et al. 2015). So far, no continuous series of paleogeographic DEM reconstructions comparable to ours has been published for this area.
The animation provides an understandable way of perceiving the continuous but decelerating nature of the land uplift phenomenon, and also highlights the differences in the post-glacial history of the Finnish and Swedish coasts. To further improve the visualization, we must consider the removal of post-glacially developed features in the present-day DEM, e.g. the various rivers that can both bias the shore displacement and uplift estimations and appear visually distracting. In the very early frames of the animation, the retreating ice sheet must also be present. Also, a balanced addition of other cartographic elements, such as present-day hydrography and place names, can further improve the overall presentation.
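The shoreline-emphasizing colour scale described above, continuous blues below sea level with a discontinuous jump to greens at zero elevation, can be sketched per pixel. The specific colour anchors below are illustrative, not those used in the published animation:

```python
# Sketch: elevation -> RGB with a hard discontinuity at sea level, so the
# shoreline reads as a crisp edge in every rendered frame.

def elev_to_rgb(elev_m, min_elev=-300.0, max_elev=300.0):
    """Map elevation (m) to an (r, g, b) tuple in 0..255."""
    if elev_m < 0:
        # sea: dark blue at min_elev, light blue just below the shoreline
        f = max(0.0, 1.0 - elev_m / min_elev)
        return (int(40 * f), int(80 + 100 * f), int(140 + 100 * f))
    # land: green ramp, deliberately far in colour from the blues at f = 0
    f = min(1.0, elev_m / max_elev)
    return (int(60 + 160 * f), int(140 + 80 * f), int(60 * (1 - f)))

shallow_sea = elev_to_rgb(-0.5)   # light blue
low_land = elev_to_rgb(0.5)       # green: a crisp shoreline edge
```

In the actual pipeline such a mapping would be applied array-wise (e.g. with NumPy) and multiplied by a hillshade layer before writing each frame.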
APA, Harvard, Vancouver, ISO, and other styles
32

Younes, Anas, Connie Lee Batlevi, Jonathon B. Cohen, Sven de Vos, Daniel J. Landsburg, Zaw Win Myint, Krish Patel, et al. "A Multi-Center Dose-Finding Study to Assess Safety, Tolerability, Pharmacokinetics and Preliminary Efficacy of Fimepinostat (CUDC-907) in Combination with Venetoclax in Patients with Relapsed/Refractory (R/R) Lymphoma." Blood 134, Supplement_1 (November 13, 2019): 4104. http://dx.doi.org/10.1182/blood-2019-128857.

Full text
Abstract:
Background High-grade B-cell lymphoma (HGBL) with MYC and BCL2 and/or BCL6 rearrangements (double- and triple-hit lymphoma), as well as diffuse large B-cell lymphoma (DLBCL), NOS with increased expression of MYC and BCL2 protein (double-expressor lymphoma), are associated with a poor prognosis after front-line treatment with standard immunochemotherapy. As such, therapies targeting MYC and BCL2 alterations are urgently needed. Currently, there are no approved therapies that target MYC. Fimepinostat is an investigational small molecule dual inhibitor of phosphatidylinositol 3-kinases (PI3Ks) and Class I and II histone deacetylases (HDACs). Both of fimepinostat's mechanisms of action lead to decreased MYC protein: PI3K inhibition leads to enhanced ubiquitin-mediated MYC protein degradation, and HDAC inhibition leads to repression of MYC gene expression. PI3K and HDAC were also inhibited by fimepinostat in peripheral blood mononuclear cells collected from patients (pts) receiving fimepinostat therapy. When dosed in combination with venetoclax in non-clinical studies, fimepinostat demonstrated striking synergistic anti-tumor effects in vitro and in vivo, with nearly 100% tumor growth inhibition in a double-hit lymphoma mouse xenograft model (Landsburg 2018a). In clinical studies, fimepinostat +/- rituximab was well tolerated with a favorable safety profile in pts with R/R lymphoma, and resulted in robust and durable objective response rates (ORR) in pts with R/R MYC-altered DLBCL, with an ORR of 23% and a median duration of response (DOR) of 13.6 months (Landsburg 2018b). Study Design and Methods CUDC-907-101 is a Phase 1/2, multi-center, dose-finding study that was recently amended to evaluate fimepinostat in combination with other anti-cancer therapies, including venetoclax. Cohorts of patients will receive increasing dose levels of fimepinostat administered on a 5-days-on-2-days-off (5/2) schedule in combination with venetoclax in 21-day cycles (Table 1).
The primary objectives are to determine the maximum tolerated dose, PK, safety and tolerability, and to assess preliminary efficacy, as measured by the ORR and DOR. Eligible pts must have a histologically-confirmed diagnosis of DLBCL or HGBL with or without MYC and/or BCL2 alterations, which is refractory to, or relapsed after, ≥1 prior lines of therapy. Patients must also have an ECOG performance status of 0 or 1, a life expectancy of ≥3 months, measurable disease per Lugano criteria, and have archived or fresh tumor tissue available. Approximately 12 pts in the Ph 1 dose escalation (3+3 design) and 30 pts in the Ph 2 expansion will be enrolled to receive fimepinostat + venetoclax treatment. Patients will be treated until progression or unacceptable toxicity. The Ph 2 expansion will be an estimation study for detecting an efficacy signal. Patients who receive ≥1 dose and have ≥1 post-baseline response evaluation will be included in the efficacy analysis set. Investigator-assessed ORR based on Lugano criteria will be summarized as the proportion of pts who achieve a best response of CR or PR for each combination, and the corresponding two-sided 95% confidence interval (CI, Clopper-Pearson) will be calculated. DOR will be summarized for pts who achieve response using the Kaplan-Meier (KM) product-limit method. The median DOR along with the two-sided 95% CI using the Brookmeyer and Crowley method will be calculated. PFS and OS will be estimated in pts using the KM product-limit method, along with the median and two-sided 95% CI. The first patient in this study was treated in July 2019, and enrollment is on-going. This new study represents the first clinical trial of the novel-novel combination of fimepinostat with venetoclax in pts with non-Hodgkin lymphoma harboring alterations of both MYC and BCL2. Clinical trial: NCT01742988. References a. Landsburg, D. J. 
et al., Durable Responses Achieved in Patients with MYC-altered Relapsed/Refractory Diffuse Large B-cell Lymphoma Treated with Fimepinostat (CUDC-907): Combined Results from a Phase 1 and Phase 2 Study. Poster presented at: Society of Hematologic Oncology annual meeting. September 12-15, 2018. b. Landsburg, D. J. et al., (2018). A Pooled Analysis of Relapsed/Refractory Diffuse Large B-Cell Lymphoma Patients Treated with the Dual PI3K and HDAC Inhibitor Fimepinostat (CUDC-907), Including Patients with MYC-Altered Disease. Blood,132(Suppl 1), 4184. Disclosures Younes: Epizyme: Consultancy, Honoraria; Roche: Consultancy, Honoraria, Research Funding; Janssen: Honoraria, Research Funding; AstraZeneca: Research Funding; Genentech: Research Funding; Biopath: Consultancy; Xynomics: Consultancy; Syndax: Research Funding; Curis: Honoraria, Research Funding; Merck: Honoraria, Research Funding; Abbvie: Honoraria; Celgene: Consultancy, Honoraria; HCM: Consultancy; BMS: Research Funding; Pharmacyclics: Research Funding; Takeda: Honoraria. Batlevi:Juno Therapeutics: Consultancy, Membership on an entity's Board of Directors or advisory committees. Cohen:Takeda Pharmaceuticals North America, Inc.: Research Funding; Genentech, Inc.: Consultancy, Research Funding; Bristol-Meyers Squibb Company: Research Funding; Seattle Genetics, Inc.: Consultancy, Research Funding; Gilead/Kite: Consultancy; LAM Therapeutics: Research Funding; UNUM: Research Funding; Hutchison: Research Funding; Astra Zeneca: Research Funding; Lymphoma Research Foundation: Research Funding; Janssen Pharmaceuticals: Consultancy; ASH: Research Funding. de Vos:Verastem: Consultancy; Portola Pharmaceuticals: Membership on an entity's Board of Directors or advisory committees; Bayer: Consultancy. 
Landsburg:Triphase: Research Funding; Seattle Genetics: Speakers Bureau; Takeda: Research Funding; Celgene: Membership on an entity's Board of Directors or advisory committees; Curis, INC: Consultancy, Membership on an entity's Board of Directors or advisory committees, Research Funding. Patel:Sunesis: Consultancy; AstraZeneca: Consultancy, Research Funding, Speakers Bureau; Celgene: Consultancy, Speakers Bureau; Genentech: Consultancy, Speakers Bureau; Pharmacyclics/Janssen: Consultancy, Speakers Bureau. Phillips:Celgene: Membership on an entity's Board of Directors or advisory committees; Abbvie: Research Funding; Pharmacyclics: Consultancy, Research Funding; Genentech: Consultancy; Incyte: Membership on an entity's Board of Directors or advisory committees; Gilead: Consultancy; Seattle Genetics: Consultancy; Bayer: Consultancy. Smith:Portola Pharmaceuticals: Research Funding. Westin:Novartis: Other: Advisory Board, Research Funding; 47 Inc: Research Funding; MorphoSys: Other: Advisory Board; Kite: Other: Advisory Board, Research Funding; Genentech: Other: Advisory Board, Research Funding; Curis: Other: Advisory Board, Research Funding; Juno: Other: Advisory Board; Unum: Research Funding; Celgene: Other: Advisory Board, Research Funding; Janssen: Other: Advisory Board, Research Funding. Ma:Curis, Inc.: Employment. Grayson:Curis, Inc.: Employment. von Roemeling:Curis, Inc.: Employment. 
Barta:Takeda: Research Funding; Janssen: Membership on an entity's Board of Directors or advisory committees; Celgene: Research Funding; Mundipharma: Honoraria; Seattle Genetics: Honoraria, Research Funding; Bayer: Consultancy, Research Funding; Merck: Research Funding.
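The exact (Clopper-Pearson) confidence interval that the protocol above specifies for the ORR can be computed directly from the Beta distribution. A minimal sketch in Python; the response counts here are hypothetical, not trial data:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided Clopper-Pearson CI for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Hypothetical example: 12 responders among 30 evaluable patients
lo, hi = clopper_pearson(12, 30)
```

By convention, the lower bound is 0 when there are no responders and the upper bound is 1 when all patients respond, which the two guards handle.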
APA, Harvard, Vancouver, ISO, and other styles
33

Sørensen, M., S. Kristensen, K. B. Lauridsen, K. Duch, L. Dreyer, R. Christensen, E. M. Hauge, et al. "POS0248 EXPLORING TUMOUR NECROSIS FACTOR INHIBITOR DRUG LEVELS DURING DISEASE ACTIVITY-GUIDED TAPERING IN PATIENTS WITH INFLAMMATORY ARTHRITIS: SECONDARY ANALYSES FROM THE BIODOPT TRIAL." Annals of the Rheumatic Diseases 82, Suppl 1 (May 30, 2023): 359–60. http://dx.doi.org/10.1136/annrheumdis-2023-eular.1928.

Full text
Abstract:
Background: Fluctuations in tumour necrosis factor inhibitor (TNFi) drug levels in patients with inflammatory arthritis (IA) during disease activity-guided tapering are not well described. Objectives: To compare TNFi drug levels in the tapering group, relative to the control group, from baseline to month 18, based on data from the BIODOPT trial. Methods: BIODOPT was designed as a pragmatic, multicentre, randomised controlled, open-label trial (EudraCT 2017-001970-41) of 18 months' duration [1]. Patients with rheumatoid arthritis (RA), psoriatic arthritis (PsA) or axial spondyloarthritis (axSpA) in sustained (≥12 months) low disease activity (LDA) and treated with TNFi at baseline were enrolled and randomised to disease activity-guided tapering or control in a 2:1 ratio. Blood samples at baseline and month 18 were analysed for TNFi drug levels. Based on previous research, the TNFi drug level category was considered intermediate if values were between: adalimumab 5.0-8.0 mg/L, certolizumab pegol 14.7-40.0 mg/L, etanercept 1.8-4.6 mg/L, golimumab 1.0-3.0 mg/L, or infliximab 1.6-5.0 mg/L. Greater or lesser values were considered high or low TNFi category, respectively. A mixed Poisson regression with a robust variance estimator was used for the analyses of TNFi categories; missing data were imputed as low TNFi category. Results: Of 129 TNFi-treated patients, 88 were randomised to tapering and 41 to a control arm with standard care. Blood samples at baseline and month 18 were available for 89% (78/88) of the tapering group and 90% (37/41) of the control group. As expected, baseline TNFi categories were comparable between groups, as presented in Figure 1. At 18 months, fewer patients in the tapering group were in the high TNFi category, relative risk RR: 0.53 (95%CI 0.31 to 0.90, P=0.02), Table 1. Although more patients in the tapering group were in the low TNFi category at 18 months, the difference was non-significant, RR: 1.47 (95%CI 0.94 to 2.32, P=0.09). 
The observed changes in TNFi categories between groups from baseline to 18 months indicate acceptable compliance with the trial interventions. At 18 months, 32% (28/88) in the tapering group and 0% (0/41) in the control group had achieved ≥50% dose reduction of their TNFi and were in LDA. The majority of patients in the tapering group were in the high (39% [11/28]) or intermediate (39% [11/28]) TNFi category at baseline; only 22% (6/28) were in the low TNFi category, indicating a greater chance of successful tapering for patients with high or intermediate TNFi drug levels at baseline. Conclusion: Successful TNFi tapering was associated with high or intermediate TNFi levels at baseline. Fewer patients in the tapering group were in the high TNFi category at 18 months, indicating acceptable compliance with the trial interventions. Further research is needed on the implications of therapeutic drug monitoring in the management of rheumatic diseases. Reference: [1] Uhrenholt L, Christensen R, Dreyer L et al. Disease activity-guided tapering of biologics in patients with inflammatory arthritis: A pragmatic, randomised, open-label, equivalence trial. 
Scand J Rheumatol 2023 (accepted for publication).

Table 1. TNFi drug level categories at 18 months.

  Variable (18 months)    Tapering group (N=88)   Control group (N=41)   Between-group difference, RR (95%CI)
  High, n (%)             19 (22%)                17 (42%)               0.53 (0.31 to 0.90)
  Intermediate, n (%)     25 (28%)                10 (24%)               1.12 (0.60 to 2.09)
  Low, n (%)              44 (50%)                14 (34%)               1.47 (0.94 to 2.32)

N: number, RR: relative risk, 95%CI: 95% confidence interval, TNFi: tumour necrosis factor inhibitor.

Acknowledgements: The authors thank patients and research personnel who contributed to the BIODOPT trial. Disclosure of Interests: Mads Sørensen: None declared, Salome Kristensen: None declared, Karen Buch Lauridsen Speakers bureau: Thermo Fisher Scientific, Kirsten Duch: None declared, Lene Dreyer Speakers bureau: Eli Lilly, Galderma, and Janssen, Grant/research support from: BMS (outside the present work), Robin Christensen: None declared, Ellen-Margrethe Hauge Speakers bureau: AbbVie, Sanofi, Sobi, and SynACT Pharma, Grant/research support from: Aarhus University Hospital from Danish Regions Medicine Grants, Danish Rheumatism Association, Roche, Novartis, and Novo Nordic Foundation, Anne Gitte Loft Speakers bureau: AbbVie, MSD, Novartis and UCB, Consultant of: Eli-Lilly, Janssen-Cilag, MSD, Novartis, and UCB, Mads Nyhuus Bendix Rasch Speakers bureau: Sobi, Hans Christian Horn: None declared, Peter C. Taylor Consultant of: AbbVie, Biogen, Eli-Lilly, Fresenius, Galapagos, Gilead Sciences, GlaxoSmithKline, Janssen, Nordic Pharma, Pfizer Inc, Roche, and Sanofi, Grant/research support from: Galapagos, Kaspar René Nielsen: None declared, Line Uhrenholt Speakers bureau: AbbVie, Eli-Lilly, Janssen, and Novartis.
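The relative risks in Table 1 come from a mixed Poisson regression with a robust variance estimator; a much simpler Katz log-RR approximation computed from the raw 2x2 counts lands close to the reported 0.53 (0.31 to 0.90), as this sketch shows (it is an approximation, not the trial's model):

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.959964):
    """Risk ratio (a/n1)/(b/n2) with a Wald (Katz) CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# High-TNFi category at month 18: 19/88 (tapering) vs. 17/41 (control)
rr, lo, hi = relative_risk_ci(19, 88, 17, 41)
```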
APA, Harvard, Vancouver, ISO, and other styles
34

Kunitomo, Naoto, and Seisho Sato. "A robust-filtering method for noisy non-stationary multivariate time series with econometric applications." Japanese Journal of Statistics and Data Science, January 4, 2021. http://dx.doi.org/10.1007/s42081-020-00102-y.

Full text
Abstract:
We investigate a new filtering method to estimate the hidden states of random variables for multiple non-stationary time series data. This helps in analyzing small-sample non-stationary macro-economic time series in particular, and it is based on the frequency domain application of the separating information maximum likelihood (SIML) method, developed by Kunitomo et al. (Separating Information Maximum Likelihood Estimation for High Frequency Financial Data. Springer, New York, 2018), Kunitomo et al. (Japan J Statistics Data Sci 2:73–101, 2020), and Nishimura et al. (Asia-Pacific Financial Markets, 2019). We solve the filtering problem of hidden random variables of trend-cycle, seasonal and measurement-error components, and propose a method to handle macro-economic time series. We develop the asymptotic theory based on the frequency domain analysis for non-stationary time series. We illustrate applications, including some properties of the method of Müller and Watson (Econometrica 86-3:775–804, 2018), and analyses of some macro-economic data in Japan.
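The SIML estimator itself is beyond a short example, but the underlying idea of separating a hidden trend-cycle component from measurement error in the frequency domain can be illustrated with a crude Fourier low-pass filter. This is illustrative only, not the authors' method, and the trend and cutoff are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
trend = 0.01 * (t - n / 2) ** 2 / n        # smooth hidden component (assumed)
x = trend + rng.normal(0.0, 1.0, n)        # observed series with measurement error

# Crude frequency-domain separation: keep only the lowest Fourier
# frequencies and discard the rest, where the noise power lives.
X = np.fft.rfft(x)
X[10:] = 0.0                               # zero out all high frequencies
trend_hat = np.fft.irfft(X, n)

err_raw = np.mean((x - trend) ** 2)        # error of the raw observations
err_filt = np.mean((trend_hat - trend) ** 2)
```

Keeping 10 of 257 frequency bins retains the smooth component while discarding most of the white-noise power, so `err_filt` is far below `err_raw`.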
APA, Harvard, Vancouver, ISO, and other styles
35

Hassine, Maatoug, and Rakia Malek. "Topological asymptotic formula for the 3D non-stationary Stokes problem and application." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 32 - 2019 - 2020 (October 22, 2020). http://dx.doi.org/10.46298/arima.4760.

Full text
Abstract:
This paper is concerned with a topological asymptotic expansion for a parabolic operator. We consider the three-dimensional non-stationary Stokes system as a model problem and derive a sensitivity analysis with respect to the creation of a small Dirichlet geometric perturbation. The established asymptotic expansion is valid for a large class of shape functions. The proposed analysis is based on a preliminary estimate describing the velocity field perturbation caused by the presence of a small obstacle in the fluid flow domain. The obtained theoretical results are used to build a fast and accurate detection algorithm. Some numerical examples issued from a lake oxygenation problem show the efficiency of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
36

Martínez-Camblor, Pablo, Todd A. MacKenzie, and A. James O’Malley. "Estimating population-averaged hazard ratios in the presence of unmeasured confounding." International Journal of Biostatistics, March 23, 2022. http://dx.doi.org/10.1515/ijb-2021-0096.

Full text
Abstract:
The Cox regression model and its associated hazard ratio (HR) are frequently used for summarizing the effect of treatments on time-to-event outcomes. However, the HR's interpretation strongly depends on the assumed underlying survival model. The challenge of interpreting the HR has been the focus of a number of recent papers. Several alternative measures have been proposed in order to deal with these concerns. The marginal Cox regression models include an identifiable hazard ratio with populational, though not individual, causal interpretation. In this work, we study the properties of one particular marginal Cox regression model and consider its estimation in the presence of an omitted confounder from an instrumental variable-based procedure. We prove the large-sample consistency of an estimation score which allows non-binary treatments. Our Monte Carlo simulations suggest that the finite-sample behavior of the procedure is adequate. The studied estimator is more robust than its competitor (Wang et al.) for weak instruments, although it is slightly more biased for large effects of the treatment. The practical use of the presented techniques is illustrated through a real practical example using data from the vascular quality initiative registry. The used R code is provided as Supplementary material.
APA, Harvard, Vancouver, ISO, and other styles
37

Epley, Benjamin. "Digest: Few new mutations are recessive lethal." Evolution, June 24, 2023. http://dx.doi.org/10.1093/evolut/qpad117.

Full text
Abstract:
When a new mutation arises, what is the probability that it is recessive lethal? Wade et al. (2023) find that fewer than 1% of nonsynonymous mutations in humans and Drosophila melanogaster are recessive lethal. The authors show that methods based on site frequency spectrum (SFS) analyses, though generally robust in their estimations of the non-lethal distribution of fitness effects (DFE), are unable to accurately estimate the fraction of recessive lethal mutations.
APA, Harvard, Vancouver, ISO, and other styles
38

Bhutada, Abhishek S., Chang Cai, Danielle Mizuiri, Anne Findlay, Jessie Chen, Ashley Tay, Heidi E. Kirsch, and Srikantan S. Nagarajan. "Clinical Validation of the Champagne Algorithm for Evoked Response Source Localization in Magnetoencephalography." Brain Topography, June 11, 2021. http://dx.doi.org/10.1007/s10548-021-00850-4.

Full text
Abstract:
Magnetoencephalography (MEG) is a robust method for non-invasive functional brain mapping of sensory cortices due to its exceptional spatial and temporal resolution. The clinical standard for MEG source localization of functional landmarks from sensory evoked responses is the equivalent current dipole (ECD) localization algorithm, known to be sensitive to initialization, noise, and the manual choice of the number of dipoles. Recently many automated and robust algorithms have been developed, including the Champagne algorithm, an empirical Bayesian algorithm with powerful abilities for MEG source reconstruction and time course estimation (Wipf et al. 2010; Owen et al. 2012). Here, we evaluate automated Champagne performance in a clinical population of tumor patients where there was minimal failure in localizing sensory evoked responses using the clinical standard, the ECD localization algorithm. MEG data of auditory evoked potentials and somatosensory evoked potentials from 21 brain tumor patients were analyzed using Champagne, and these results were compared with equivalent current dipole (ECD) fits. Across both somatosensory and auditory evoked field localization, we found strong agreement between Champagne and ECD localizations in all cases. Given an 8 mm voxel resolution, Champagne peak source localizations were within 10 mm of the ECD peak source localizations. The Champagne algorithm provides a robust and automated alternative to manual ECD fits for clinical localization of sensory evoked potentials and can contribute to improved clinical MEG data processing workflows.
APA, Harvard, Vancouver, ISO, and other styles
39

Mendi, C. D., and E. S. Husebye. "Near real time estimation of magnitudes and moments for local seismic events." Annals of Geophysics 37, no. 3 (June 18, 1994). http://dx.doi.org/10.4401/ag-4218.

Full text
Abstract:
The general popularity of magnitude as a convenient and robust measure of earthquake size makes it tempting to examine whether this parameter can be reliably estimated in near real time. In this study we demonstrate that this is indeed the case, conditioned on the design of the signal detector being of STA/LTA type, where STA is a short-term signal power or rms estimate. Using real data we demonstrate that the Random Vibration Theory relation Amax ≈ (2 ln N)^(1/2) · Arms is valid for non-stationary seismic signals. Using Rayleigh's theorem we also established a relation between Arms and the flat portion of the source spectra. These Amax and Arms estimation procedures are used for determining conventional magnitudes and moment magnitudes for 29 events as recorded by the Norwegian Seismograph Network (NSN). We used here a procedure outlined by Sereno et al. (1988) and also their geometrical spreading and attenuation parameters derived from analysis of NORSAR recordings. Our magnitude and moment magnitude estimates for 5 different frequency bands are in good agreement with the ML estimates derived from the conventional magnitude formulas in combination with empirical correction tables. Surprisingly, the Amax and Arms magnitudes produced estimates with a consistent negative bias of ca. 0.4 units, even in the extreme 4-8 Hz band. In view of the good agreement between various types of magnitude estimates, we constructed conventional magnitude correction tables using the geometrical spreading and attenuation parameters from Sereno et al. (1988) for a variety of signal frequency bands. Near real time Amax and/or Arms estimates, or correspondingly event magnitudes, would be of significance in automatic phase association analysis, bulletin production for local and regional seismic networks, and the earthquake monitoring performance of such networks.
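The Random Vibration Theory relation above, Amax ≈ (2 ln N)^(1/2) · Arms, can be checked by simulation on a Gaussian noise segment (the segment length N is arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 4096
x = rng.normal(0.0, 1.0, N)        # stationary Gaussian signal segment

a_max = np.max(np.abs(x))
a_rms = np.sqrt(np.mean(x ** 2))
predicted_max = np.sqrt(2.0 * np.log(N)) * a_rms   # RVT prediction
ratio = a_max / predicted_max                      # should be near 1
```

The observed peak fluctuates around the prediction (the maximum of N Gaussian samples has a Gumbel-type spread), so the ratio is close to, but not exactly, 1.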
APA, Harvard, Vancouver, ISO, and other styles
40

Chimote, B. N., and N. Chimote. "P-681 Balance of Estrogen:Androgen ratio (Estradiol E2: Dehydroepiandrosterone-sulphate DHEAS) and its symbiotic correlationship with Vascular-Endothelial Growth-Factor (VEGF) critically foments implantation and pregnancy outcomes in IVF cycles." Human Reproduction 37, Supplement_1 (June 29, 2022). http://dx.doi.org/10.1093/humrep/deac107.630.

Full text
Abstract:
Study question: Is the estrogen/androgen balance vital for implantation, and does this E2:DHEAS ratio coordinate with neoangiogenesis/neovascularisation (VEGF) to facilitate establishment of viable pregnancy in IVF cycles? Summary answer: The Estradiol E2:DHEAS ratio, independently as well as in symbiotic correlationship with VEGF, is strongly predictive of invasive implantation and pregnancy outcomes in IVF cycles. What is known already: Decidualization of endometrial stroma and vascularization at the feto-maternal interface is necessary for implantation. Estrogen Estradiol E2 is among the factors widely known to regulate these processes. Androgen Dehydroepiandrosterone (DHEA) is the designated primary precursor for natural estrogens. DHEA-induced PCOS mice have demonstrated adverse impacts on embryo implantation and decidualization. However, the role of DHEA/S in human implantation has not yet been explored. Vascular-Endothelial growth-factor (VEGF) is known to influence vascularization, angiogenesis and endometrial receptivity. Higher VEGF expression in endometrium is associated with pregnancy. However, endometrial biopsy for VEGF estimation is an invasive method for implantation studies and cannot be applied to ongoing cycles. Study design, size, duration: Prospective study of n = 235 (power of study >85%) non-PCOS, normal-responder infertile females (mean age 31.77±2.42 years, BMI 24.4±4.1, W/H ratio 0.82±0.07) undergoing fresh IVF treatment cycles using standard antagonist stimulation protocol at our fertility clinic during January 2019-December 2020. Elderly women (age >35 years), women with polycystic ovaries or endometriosis, and frozen-thawed cycles were excluded. 7 cycles were cancelled owing to no embryo transfer (ET). Day3 cleavage-stage or day5 blastocyst-stage embryos were transferred (n = 228). Luteal phase was supported with micronized progesterone. 
Participants/materials, setting, methods: Serum levels of Estradiol E2 and DHEAS were measured by radio-immunoassay and VEGF estimation by ELISA on day ET, day7 and day14 post ET. Cycles were divided into Pregnant (P: n=73) and Non-Pregnant (NP: n=155) groups. Main outcome measures: Clinical Pregnancy (CPR: n=70=30.70%), Live-Birth (LBR: 28.94%) rates. Secondary outcome measures: Biochemical-pregnancy (BCP: n=3=1.31%), Early-Miscarriage (EM: n=4=1.75%) rates. Pregnant = serum βhCG>50 on d14ET. BCP = no exponential rise/tapering βhCG levels post d14ET, CP = positive cardiac activity at 6-8 weeks gestation, EM = non-viable intra-uterine pregnancy up to 12 weeks gestation. Main results and the role of chance: dET: E2 did not differ significantly (780±50 vs. 750±30 pg/ml; p=0.61) whereas DHEAS = (212±11 vs. 170±6 ng/dL; p=0.0005), E2:DHEAS = (3.72±0.13 vs. 4.65±0.17; p=0.0011), VEGF = (658±20 vs. 495±16; p<0.0001) differed significantly between P vs. NP groups; indicating the importance of the E2/DHEAS ratio. d7ET: E2 = (790±57 vs. 222±19; p<0.0001), DHEAS = (381±28 vs. 1262±70; p<0.0001), E2:DHEAS = (2.37±0.25 vs. 0.30±0.029; p<0.0001), VEGF = (636±23.61 vs. 480±10.13; p<0.0001) displayed statistically robust differences between P vs. NP groups. d14ET: E2 = (855±73.45 vs. 193.2±13.31; p<0.0001), VEGF = (544.2±15.80 vs. 452.3±16.77; p=0.0011) displayed statistically robust differences between P vs. NP groups. Interestingly, although DHEAS = (685±41.05 vs. 806±55.37; p=0.18) did not differ significantly, E2:DHEAS = (1.643±0.26 vs. 0.45±0.048; p<0.0001) showed a highly significant difference; again highlighting the importance of the E2/DHEAS balance. The E2:DHEAS ratio on d7ET correlated strongly with d7 VEGF (Pearson r = 0.31, p<0.0001). The ROC curve for d7 E2:DHEAS had AUC 78%, p<0.0001. Hormone values and their rising trends (d/ET to d7/ET to d14/ET) varied with pregnancy outcome status: E2 = LB:(d/ET:780±50 to d7/ET:790±57 to d14/ET:855±73.45) vs. BCP:(d/ET:723.3±157.6 to d7/ET:743.3±159 to d14/ET:770±153) vs. EM:(d/ET:906.3±104 to d7/ET:1150±113 to d14/ET:1758±127). DHEAS = LB:(d/ET:212±11 to d7/ET:381±28 to d14/ET:685±41.05) vs. BCP:(d/ET:248.3±55 to d7/ET:266.7±71.2 to d14/ET:400±120.3) vs. EM:(d/ET:243.8±27 to d7/ET:303.8±11.43 to d14/ET:391.3±49). E2:DHEAS = LB:(d/ET:3.72±0.13 to d7/ET:2.37±0.25 to d14/ET:1.643±0.26) vs. BCP:(d/ET:2.93±0.15 to d7/ET:2.97±0.4 to d14/ET:2.12±0.32) vs. EM:(d/ET:3.78±0.40 to d7/ET:3.77±0.26 to d14/ET:4.68±0.59). A decreasing E2:DHEAS ratio (d/ET-d7/ET-d14/ET) is vital for successful implantation/live birth. VEGF = LB:(d/ET:658±20 to d7/ET:636±23.61 to d14/ET:544.2±15.80) vs. BCP:(d/ET:513.3±37.56 to d7/ET:628.3±40.45 to d14/ET:468.3±35.63) vs. EM:(d/ET:647.5±78.8 to d7/ET:887.5±60 to d14/ET:1138±94.4). Limitations, reasons for caution: This study is limited by small sample size and is not a randomized controlled trial. We have taken only representative molecules into consideration; there is a plethora of factors involved during the implantation process. Multi-centric systematic studies are needed to corroborate these findings and account for probable confounding factors. Wider implications of the findings: This novel study explores and highlights the significance of maintenance of a critical estrogen/androgen ratio (E2:DHEAS), rather than individual estrogen or androgen markers, in the implantation process. We have also underscored how this ratio coordinates symbiotically with VEGF as a critical non-invasive determinant of implantation and pregnancy outcomes in ongoing IVF cycles. Trial registration number: Not applicable.
APA, Harvard, Vancouver, ISO, and other styles
41

Jin, Xiaoyan, Sultan Sikandar Mirza, Chengming Huang, and Chengwei Zhang. "Digital transformation and governance heterogeneity as determinants of CSR disclosure: insights from Chinese A-share companies." Corporate Governance: The International Journal of Business in Society, March 29, 2024. http://dx.doi.org/10.1108/cg-04-2023-0173.

Full text
Abstract:
Purpose: In this fast-changing world, digitization has become crucial to organizations, allowing decision-makers to alter corporate processes. Companies with a higher corporate social responsibility (CSR) level not only help encourage employees to focus on their goals, but they also show that they take their social responsibility seriously, which is increasingly important in today's digital economy. So, this study aims to examine the relationship between digital transformation and CSR disclosure of Chinese A-share companies. Furthermore, this research investigates the moderating impact of governance heterogeneity, including CEO power and corporate internal control (INT) mechanisms. Design/methodology/approach: This study used fixed effect estimation with robust standard errors to examine the relationship between digital transformation and CSR disclosure and the moderating effect of governance heterogeneity among Chinese A-share companies from 2010 to 2020. The whole sample consists of 17,266 firms, including 5,038 state-owned enterprise (SOE) company records and 12,228 non-SOE records. The whole sample data is collected from the China Stock Market and Accounting Research, the Chinese Research Data Services and the WIND databases. Findings: The regression results lead us to three conclusions after classifying the sample into non-SOE and SOE groups. First, Chinese A-share businesses with greater levels of digitalization have lower CSR disclosures; these findings are consistent for both SOEs and non-SOEs. Second, increasing CEO authority creates a more centralized company decision-making structure (Breuer et al., 2022; Freire, 2019), which strengthens the negative association between digitalization and CSR disclosure. These conclusions, however, also apply to non-SOEs. Finally, INT reinforces the association between corporate digitization and CSR disclosure, which is especially obvious in SOEs. These findings are robust to an alternative HEXUN CSR disclosure index. Heterogeneity analysis shows that the negative relationship between corporate digitalization and CSR disclosures is more pronounced in bigger, highly levered and highly financialized firms. Originality/value: Digitalization and CSR disclosure are well studied, but few studies have examined their interactions from a governance heterogeneity perspective in China. Practitioners and policymakers may use these insights to help business owners implement suitable digital policies for firm development from diverse business perspectives.
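The fixed-effect estimation with robust standard errors described above can be sketched with a within (demeaning) transformation plus a heteroskedasticity-robust variance. The panel here is synthetic, and the negative slope is an assumption chosen only to mirror the sign of the paper's finding:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 200, 10
firm = np.repeat(np.arange(n_firms), n_years)     # firm identifier per row
fe = rng.normal(0.0, 2.0, n_firms)[firm]          # firm fixed effects
x = 0.5 * fe + rng.normal(0.0, 1.0, fe.size)      # regressor correlated with FE
beta_true = -0.3                                  # assumed negative slope
y = fe + beta_true * x + rng.normal(0.0, 1.0, fe.size)

def demean(v, g, n_groups):
    """Within transformation: subtract group means (absorbs fixed effects)."""
    return v - (np.bincount(g, v, n_groups) / np.bincount(g, None, n_groups))[g]

yd, xd = demean(y, firm, n_firms), demean(x, firm, n_firms)
beta_hat = (xd @ yd) / (xd @ xd)                  # within-estimator slope
resid = yd - beta_hat * xd
# Heteroskedasticity-robust (HC0 / White) standard error for the slope:
se_robust = np.sqrt(np.sum((xd * resid) ** 2)) / (xd @ xd)
```

Because the fixed effect is constant within each firm, demeaning removes it exactly, and the slope is recovered despite the regressor being correlated with the firm effect.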
APA, Harvard, Vancouver, ISO, and other styles
42

Mazet, Nathan, Hélène Morlon, Pierre‐Henri Fabre, and Fabien L. Condamine. "Estimating clade‐specific diversification rates and palaeodiversity dynamics from reconstructed phylogenies." Methods in Ecology and Evolution, August 16, 2023. http://dx.doi.org/10.1111/2041-210x.14195.

Full text
Abstract:
Understanding palaeodiversity dynamics through time and space is a central goal of macroevolution. Estimating palaeodiversity dynamics has historically been addressed with fossil data because such data directly reflect the past variations of biodiversity. Unfortunately, some groups or regions lack a good fossil record, and dated phylogenies can be useful to estimate diversification dynamics. Recent methodological developments have unlocked the possibility to investigate palaeodiversity dynamics by using phylogenetic birth-death models with non-homogeneous rates through time and across clades. One of them seems particularly promising for detecting clades whose diversity has declined through time. However, empirical applications of the method have been hampered by the lack of a robust, accessible implementation of the whole procedure, therefore requiring users to conduct all the steps of the analysis by hand in a time-consuming and error-prone way. Here we propose an automation of the Morlon et al. (2011) clade-shift model with additional features accounting for recent developments, and we implement it in the R package RPANDA. We also test the approach with simulations focusing on its ability to detect shifts of diversification and to infer palaeodiversity dynamics. Finally, we illustrate the automation by investigating the palaeodiversity dynamics of Cetacea, Vangidae, Parnassiinae and Cycadales. Simulations showed that we accurately detected shifts of diversification, although false shift detections were higher for time-dependent diversification models with extinction. The median global error of palaeodiversity dynamics estimated with the automated model is low, showing that the method can capture diversity declines. We detected shifts of diversification for three of the four empirical examples considered (Cetacea, Parnassiinae and Cycadales). Our analyses unveil a waxing-and-waning pattern due to a phase of negative net diversification rate embedded in the trees after isolating recent radiations. Our work makes it possible to easily apply non-homogeneous models of diversification in which rates can vary through time and across clades to reconstruct palaeodiversity dynamics. By doing so, we detected palaeodiversity declines in three of the four groups tested, highlighting that such periods of negative net diversification might be common. We discuss the extent to which this approach might provide reliable estimates of extinction rates, and we provide guidelines for users.
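The waxing-and-waning pattern described above falls out of the expected-diversity formula for a time-varying birth-death process, N(t) = N(0) exp(integral of (lambda(s) - mu(s)) ds); a sketch with an assumed declining speciation rate (the rates are invented for illustration, not taken from the paper):

```python
import numpy as np

# A speciation rate that decays below a constant extinction rate gives
# positive net diversification early (waxing) and negative late (waning).
t = np.linspace(0.0, 50.0, 501)
lam = 0.5 * np.exp(-0.05 * t)     # declining speciation rate (assumed)
mu = np.full_like(t, 0.15)        # constant extinction rate (assumed)

dt = t[1] - t[0]
diversity = np.exp(np.cumsum(lam - mu) * dt)   # N(0) = 1, rectangle rule
peak = int(np.argmax(diversity))               # waxing ends, waning begins
```

Diversity peaks where lambda(t) crosses mu (around t = 24 with these rates) and declines thereafter, which is the signature the clade-shift model is designed to detect.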
APA, Harvard, Vancouver, ISO, and other styles
43

Chletsos, Michael, and Andreas Sintos. "The effects of IMF programs on income inequality: a semi-parametric treatment effects approach." International Journal of Development Issues, April 25, 2022. http://dx.doi.org/10.1108/ijdi-12-2021-0265.

Full text
Abstract:
Purpose: This paper aims to provide new insights regarding the impact of International Monetary Fund (IMF) programs on income inequality. Design/methodology/approach: The paper uses a novel methodological approach proposed by Acemoglu et al. (2019), using (1) the regression adjustment, (2) the inverse probability weighting and (3) the doubly robust estimator, which combines (1) and (2), and a sample of annual data for 135 developing countries over the time period 1970 to 2015. Findings: The findings show that IMF programs are associated with greater income inequality for up to five years. By differentiating the effect of IMF programs, the authors find that only IMF non-concessional programs have a significant detrimental effect on income inequality, while IMF concessional programs do not have a consistent effect on income inequality. In addition, the authors find that only IMF programs with a higher number of conditions have a detrimental and statistically significant effect on income inequality, compared to IMF programs with a smaller number of conditions, whose effect on income inequality is found to be insignificant. Originality/value: To the best of the authors' knowledge, the analysis developed in this paper contributes to the existing literature by applying the most methodologically sound identification strategy, which does not rely on the linearity assumption, the selection of instruments or matching variables, and additionally takes into account the selection bias related to IMF program participation.
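The inverse probability weighting component of the approach can be illustrated on synthetic data. For clarity this sketch uses the true propensity score; in practice the propensity of program participation is estimated (e.g. by a logit), and the effect size is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.normal(0, 1, n)                       # confounder
p = 1.0 / (1.0 + np.exp(-0.8 * x))            # true propensity of participation
d = rng.binomial(1, p)                        # treatment (program participation)
tau = 2.0                                     # assumed true average effect
y = 1.0 + tau * d + 1.5 * x + rng.normal(0, 1, n)

# Naive difference in means is confounded by x:
naive = y[d == 1].mean() - y[d == 0].mean()

# Horvitz-Thompson style IPW estimate of the average treatment effect:
ipw = np.mean(d * y / p) - np.mean((1 - d) * y / (1 - p))
```

Reweighting by the inverse propensity makes the treated and untreated groups comparable on the confounder, so `ipw` recovers the true effect while `naive` is biased upward.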
APA, Harvard, Vancouver, ISO, and other styles
44

Tosun, Tayfun Tuncay. "Re-analysis of the EU public debt crises with NARX." Pressacademia, January 31, 2023. http://dx.doi.org/10.17261/pressacademia.2023.1702.

Full text
Abstract:
Purpose- This paper employs the public debt equation of motion, which covers variables that represent a country's competitiveness, such as past public debt, GDP, external balance, real exchange rate, real interest, and inflation, to estimate the public debt of Southern EU countries (Greece, Ireland, Italy, Portugal, and Spain). The paper is designed to test whether the public debt equation of motion (see Croce and Ramon, 2003; IMF, 2013; Chirwa and Odhiambo, 2018), which is characterized by significant variables representing competitiveness in macroeconomics, can statistically account for the public debt of Southern EU countries after the monetary union period, including the EU public debt crisis. Consequently, based on the findings, it will be determined whether the competitiveness problems of Southern EU countries are important in the EU public debt crisis. Methodology- The analysis is performed with the nonlinear autoregressive network with exogenous inputs (NARX), with quarterly data for the period from 2005Q1 to 2021Q4. In NARX, which is a dynamic non-parametric neural network used in time series analysis, the prediction performance of the model is more robust than that of other neural network models, as the gradient descent approaches the local minimum perfectly (see Lin et al., 1996; Gao and Er, 2005; Diaconescu, 2008). However, it is important to define the parameters correctly in NARX to obtain effective results. In this study, parameters are defined according to the minimum Mean Squared Error values. The feedback Levenberg-Marquardt (LM) algorithm, which produces fast and effective results, is used as the training algorithm. For robustness, the performance of the training algorithm is compared across the training, testing and validation sets. Findings- The analysis results reveal that public debt in Southern EU countries is statistically explained by the public debt equation of motion with a confidence ratio of over 95%. 
Conclusion- This result implies that the public debt problem in Southern EU countries is associated with their competitiveness (see also Hall and Soskice, 2001; Dallago and Guglielmetti, 2011; Hall, 2012; Lane, 2012; Gros, 2012; Iversen et al., 2016; De Ville and Vermeiven, 2016; Frieden and Walter, 2017). In addition, the analysis goes beyond parametric analyses that relate economic growth or a few variables to public debt and reveals the importance of inclusive variables and non-parametric analyses in the estimation of public debt. Keywords: EU public debt crises, Southern EU countries, NARX, competitiveness problems. JEL Codes: C45, F35, F45, N14, N24
APA, Harvard, Vancouver, ISO, and other styles
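For context, the NARX idea used in the entry above — regressing the target on its own lags plus lagged exogenous inputs through a nonlinear map — can be sketched as follows. This is an illustrative stand-in only: the series are synthetic, and a random-tanh-feature map with a least-squares readout replaces the paper's Levenberg-Marquardt-trained network.

```python
import numpy as np

def narx_design(y, x, ny=2, nx=2):
    """Stack lagged target values and lagged exogenous inputs as NARX regressors."""
    lag = max(ny, nx)
    X = [np.concatenate([y[t - ny:t], x[t - nx:t]]) for t in range(lag, len(y))]
    return np.array(X), y[lag:]

rng = np.random.default_rng(0)
T = 300
x = rng.normal(size=T)                      # hypothetical exogenous driver
y = np.zeros(T)
for t in range(2, T):                       # synthetic nonlinear AR process
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(x[t - 1]) + 0.05 * rng.normal()

X, target = narx_design(y, x)

# One-hidden-layer nonlinear map: fixed random tanh features with a
# least-squares readout (a simple proxy for an LM-trained network).
W = rng.normal(size=(X.shape[1], 32))
H = np.tanh(X @ W)
beta, *_ = np.linalg.lstsq(H, target, rcond=None)
mse = np.mean((H @ beta - target) ** 2)
print(f"in-sample MSE: {mse:.4f}")
```

In practice the lag orders `ny`/`nx` and the hidden-layer size would be chosen by the minimum-MSE criterion the paper describes, comparing training performance against held-out testing and validation sets.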
45

Canali, D. G., A. C. Lopes, L. M. Vieira, B. d. Santos, C. d. Sabino, C. M. Dias, and G. Zampieri. "B-152 Estimation of Creatinine Reference Interval (RI) by the Indirect Method using an Unsupervised Machine Learning Tool and R Programming Language." Clinical Chemistry 69, Supplement_1 (September 27, 2023). http://dx.doi.org/10.1093/clinchem/hvad097.485.

Full text
Abstract:
Abstract Background Creatinine is widely used in diagnostic medicine as a primary renal marker. Clinical laboratory measurement of serum creatinine is not standardized, and the RI varies slightly between manufacturers. Roche gives reference ranges of 0.5–0.9 mg/dL for women and 0.7–1.2 mg/dL for men. After clinical reports of elevated female results without clinical correspondence, the laboratory re-evaluated its RI. Methods We retrospectively studied creatinine results from the laboratory database for samples run from January to October 2022 in two technical areas in Brazil (São Paulo and Rio de Janeiro). The platform was Roche® 502/702, with the Jaffe method for creatinine. The LabRI tool was used with parametric, non-parametric, and robust statistical treatments, together with algorithms in the R language, to exclude outliers and latent abnormal values from outpatient results. The proposed partitions were gender (male and female) and age (18–60 years old and over 60 years old). The RIs proposed for verification were those of Rosenfeld LG et al. (Rev Bras Epidemiol 2019;22(Suppl 2):E190002.supl.2): F: 0.5–0.9 and M: 0.7–1.2. Exclusion criteria were urea >48.5 mg/dL, glycemia >200 mg/dL, 24 h proteinuria >150 mg/24 h, LDL >130 mg/dL, uric acid >130 mg/dL, positive antinuclear factor, and clinical data compatible with chronic renal failure or dialysis, hypertension, or diabetes. Results After exclusions, 225 181 individuals remained, 52% female. Between the sites, the partition results did not change significantly. For women, the values did not vary significantly in either age partition. For men, the lower limit of the RI did not vary significantly, while the upper limit varied by 0.20 (18–60: 0.69–1.31 and >60: 0.69–1.41), mainly for the >60 partition, below the 95% confidence interval of the proposed references.
Conclusions The RI from Roche and the Brazilian article (0.5–0.9) was adopted for the female gender, based on 118 100 individuals, and a new RI (0.7–1.2) was defined for the male gender, based on 107 081 adult individuals, with the elderly over 60 years old included for both genders.
APA, Harvard, Vancouver, ISO, and other styles
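The indirect method described above — mining routine results, excluding outliers, then taking non-parametric percentiles — can be illustrated with a minimal Python sketch. The data and thresholds here are synthetic and the outlier rule is a simple Tukey fence, not the study's LabRI/R pipeline.

```python
import numpy as np

def indirect_ri(results, lo=2.5, hi=97.5):
    """Indirect RI estimation: drop outliers with Tukey fences, then take
    non-parametric percentiles of the remaining routine results."""
    v = np.asarray(results, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    kept = v[(v >= q1 - 1.5 * iqr) & (v <= q3 + 1.5 * iqr)]
    return np.percentile(kept, [lo, hi])

rng = np.random.default_rng(1)
# Synthetic outpatient creatinine results (mg/dL): a healthy majority plus
# a small pathological tail that the fences should exclude.
healthy = rng.normal(0.7, 0.1, size=5000)
pathological = rng.normal(2.5, 0.5, size=100)
low, high = indirect_ri(np.concatenate([healthy, pathological]))
print(f"estimated RI: {low:.2f}-{high:.2f} mg/dL")
```

In a real verification the estimation would be repeated per partition (gender, age group) and the resulting limits compared against the 95% confidence intervals of the proposed reference values, as the study does.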
