Selected scientific literature on the topic "Unmeasured confounders"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference papers, and other scholarly sources on the topic "Unmeasured confounders".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract online, when it is available in the metadata.

Journal articles on the topic "Unmeasured confounders"

1

Burne, Rebecca M., and Michal Abrahamowicz. "Adjustment for time-dependent unmeasured confounders in marginal structural Cox models using validation sample data." Statistical Methods in Medical Research 28, no. 2 (August 24, 2017): 357–71. http://dx.doi.org/10.1177/0962280217726800.

Abstract:
Large databases used in observational studies of drug safety often lack information on important confounders. The resulting unmeasured confounding bias may be avoided by using additional confounder information, frequently available in smaller clinical “validation samples”. Yet, no existing method that uses such validation samples is able to deal with unmeasured time-varying variables acting as both confounders and possible mediators of the treatment effect. We propose and compare alternative methods which control for confounders measured only in a validation sample within marginal structural Cox models. Each method corrects the time-varying inverse probability of treatment weights for all subject-by-time observations using either regression calibration of the propensity score, or multiple imputation of unmeasured confounders. Two proposed methods rely on martingale residuals from a Cox model that includes only confounders fully measured in the large database, to correct inverse probability of treatment weight for imputed values of unmeasured confounders. Simulation demonstrates that martingale residual-based methods systematically reduce confounding bias over naïve methods, with multiple imputation including the martingale residual yielding, on average, the best overall accuracy. We apply martingale residual-based imputation to re-assess the potential risk of drug-induced hypoglycemia in diabetic patients, where an important laboratory test is repeatedly measured only in a small sub-cohort.
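The correction described above operates on inverse-probability-of-treatment weights built from confounders that are only fully observed in a validation sample. The sketch below is not the authors' martingale-residual method; it is a generic regression-calibration illustration (all data simulated, names ours) of the simpler idea the paper builds on: calibrate the error-prone propensity score against the validation sample, then weight.

```python
import math
import random

random.seed(42)

def fit_line(x, y):
    """Ordinary least squares fit (the calibration model)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Validation sample: logit of the error-prone PS (database confounders
# only) paired with the logit of the gold-standard PS (all confounders).
lp_db = [random.gauss(0, 1) for _ in range(100)]
lp_gold = [0.3 + 0.8 * v + random.gauss(0, 0.3) for v in lp_db]

intercept, slope = fit_line(lp_db, lp_gold)

def calibrated_iptw(lp_error_prone, treated):
    """Calibrate the PS on the logit scale, then form the usual
    inverse-probability-of-treatment weight."""
    ps = 1 / (1 + math.exp(-(intercept + slope * lp_error_prone)))
    return 1 / ps if treated else 1 / (1 - ps)

print(intercept, slope)  # calibration coefficients from the validation sample
```

In the paper's setting the correction additionally uses martingale residuals from a Cox model and multiple imputation of the unmeasured confounders; this sketch only shows the weighting skeleton those corrections plug into.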
2

Handorf, Elizabeth A., Daniel F. Heitjan, Justin E. Bekelman, and Nandita Mitra. "Estimating cost-effectiveness from claims and registry data with measured and unmeasured confounders." Statistical Methods in Medical Research 28, no. 7 (February 22, 2018): 2227–42. http://dx.doi.org/10.1177/0962280218759137.

Abstract:
The analysis of observational data to determine the cost-effectiveness of medical treatments is complicated by the need to account for skewness, censoring, and the effects of measured and unmeasured confounders. We quantify cost-effectiveness as the Net Monetary Benefit (NMB), a linear combination of the treatment effects on cost and effectiveness that denominates utility in monetary terms. We propose a parametric estimation approach that describes cost with a Gamma generalized linear model and survival time (the canonical effectiveness variable) with a Weibull accelerated failure time model. To account for correlation between cost and survival, we propose a bootstrap procedure to compute confidence intervals for NMB. To examine sensitivity to unmeasured confounders, we derive simple approximate relationships between naïve parameters, assuming only measured confounders, and the values those parameters would take if there was further adjustment for a single unmeasured confounder with a specified distribution. A simulation study shows that the method returns accurate estimates for treatment effects on cost, survival, and NMB under the assumed model. We apply our method to compare two treatments for Stage II/III bladder cancer, concluding that the NMB is sensitive to hypothesized unmeasured confounders that represent smoking status and personal income.
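The NMB quantity in this abstract is a linear combination of the incremental effect and cost, and the proposed bootstrap can be mimicked by resampling patients within arms so that the cost-survival correlation is preserved. Everything below is a hedged sketch on simulated placeholder data (Weibull survival, Gamma cost, echoing the paper's parametric choices), not the authors' estimation procedure.

```python
import random

random.seed(1)

def nmb(d_eff, d_cost, wtp):
    """Net Monetary Benefit: wtp * (survival difference) - (cost difference)."""
    return wtp * d_eff - d_cost

# Hypothetical per-patient (survival in years, cost) pairs for two arms.
trt = [(random.weibullvariate(3.0, 1.5), random.gammavariate(2.0, 8000.0))
       for _ in range(200)]
ctl = [(random.weibullvariate(2.5, 1.5), random.gammavariate(2.0, 7000.0))
       for _ in range(200)]

def arm_means(arm):
    return (sum(s for s, _ in arm) / len(arm),
            sum(c for _, c in arm) / len(arm))

def boot_nmb_ci(wtp=100_000, n_boot=1000):
    """Percentile bootstrap CI for NMB; resampling whole patients keeps
    cost and survival correlated within each draw."""
    draws = []
    for _ in range(n_boot):
        t = [random.choice(trt) for _ in trt]
        c = [random.choice(ctl) for _ in ctl]
        (st, ct), (sc, cc) = arm_means(t), arm_means(c)
        draws.append(nmb(st - sc, ct - cc, wtp))
    draws.sort()
    return draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)]

print(round(nmb(0.30, 12_000, 100_000)))  # 18000
lo, hi = boot_nmb_ci()
print(lo < hi)  # True
```

The willingness-to-pay value of 100,000 per life-year is an assumption for illustration only.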
3

Rodday, Angie Mae, Theresa Hahn, Peter K. Lindenauer, and Susan K. Parsons. "67409 Quantifying Unmeasured Confounding in Relationship between Treatment Intensity and Outcomes among Older Patients with Hodgkin Lymphoma (HL) using Surveillance, Epidemiology and End Results (SEER)-Medicare Data." Journal of Clinical and Translational Science 5, s1 (March 2021): 49–50. http://dx.doi.org/10.1017/cts.2021.531.

Abstract:
ABSTRACT IMPACT: E-values can help quantify the amount of unmeasured confounding necessary to fully explain away a relationship between treatment and outcomes in observational data. OBJECTIVES/GOALS: Older patients with HL have worse outcomes than younger patients, which may reflect treatment choice (e.g., fewer chemotherapy cycles). We studied the relationship between treatment intensity and 3-year overall survival (OS) in SEER-Medicare. We calculated an E-value to quantify the unmeasured confounding needed to explain away any relationship. METHODS/STUDY POPULATION: This retrospective cohort study of SEER-Medicare data from 1999-2016 included 1131 patients diagnosed with advanced stage HL at age ≥65 years. Treatment was categorized as: (1) full chemotherapy regimens (‘full regimen’, n=689); (2) partial chemotherapy regimen (‘partial regimen’, n=175); (3) single chemotherapy agent or radiotherapy (‘single agent/RT’, n=102), or (4) no treatment (n=165). A multivariable Cox regression model estimated the relationship between treatment and 3-year OS, adjusting for disease and patient factors. An E-value was computed to quantify the minimum strength of association that an unmeasured confounder would need to have with both the treatment and OS to completely explain away a significant association between treatment and OS based on the multivariable model. RESULTS/ANTICIPATED RESULTS: Results from the multivariable model found higher hazards of death for partial regimens (HR=1.81, 95% CI=1.43, 2.29), single agent/RT (HR=1.74, 95% CI=1.30, 2.34), or no treatment (HR=1.98, 95% CI=1.56, 2.552) compared to full regimens. We calculated an E-value for single agent/RT because it had the smallest HR of the treatment levels.
The observed HR of 1.74 could be explained away by an unmeasured confounder that was associated with both treatment and OS with a HR of 2.29, above and beyond the measured confounders; the 95% CI could be moved to include the null by an unmeasured confounder that was associated with both the treatment and OS with a HR of 1.69. Of the measured confounders, B symptoms had the strongest relationship with treatment (HR=2.08) and OS (HR=1.38), which was below the E-value. DISCUSSION/SIGNIFICANCE OF FINDINGS: Patients with advanced stage HL who did not receive full chemotherapy regimens had worse 3-year OS, even after adjusting for potential confounders related to the patient and disease. The E-value analysis made explicit the amount of unmeasured confounding necessary to fully explain away the relationship between treatment and OS.
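The E-values quoted above (2.29 for the point estimate, 1.69 for the CI limit) can be reproduced with VanderWeele and Ding's formula, using the square-root transform that converts a hazard ratio for a common outcome into an approximate risk ratio; a minimal sketch:

```python
import math

def evalue_hr(hr):
    """E-value for a hazard ratio when the outcome is not rare
    (VanderWeele & Ding): convert the HR to an approximate risk ratio
    via the square-root transform, then apply
    E = RR + sqrt(RR * (RR - 1))."""
    if hr < 1:
        hr = 1 / hr  # symmetric: work on the >1 scale
    rr = (1 - 0.5 ** math.sqrt(hr)) / (1 - 0.5 ** math.sqrt(1 / hr))
    return rr + math.sqrt(rr * (rr - 1))

print(round(evalue_hr(1.74), 2))  # 2.29, matching the reported E-value
print(round(evalue_hr(1.30), 2))  # 1.69, matching the CI-limit E-value
```

An unmeasured confounder would need associations of at least this strength with both treatment and survival to fully explain away the observed HR.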
4

Yin, Xiang, Elizabeth Stuart, Mehmet Burcu, Mark Stewart, Elizabeth B. Lamont, and Ruthanna Davi. "Assessing the impact of unmeasured confounding in external control arms via tipping point analyses." Journal of Clinical Oncology 42, no. 16_suppl (June 1, 2024): e23065. http://dx.doi.org/10.1200/jco.2024.42.16_suppl.e23065.

Abstract:
e23065 Background: Estimates of the comparative efficacy of new therapies from single-arm settings can be obtained in advance of RCTs through use of external control arms (ECAs).[1] ECAs are collections of patients with the index disease who were treated outside of the single-arm trial, whose measured baseline attributes are matched to the single-arm trial patients and whose outcomes are compared to trial patients’ to estimate comparative efficacy. Without randomization, observed treatment-outcome associations may be confounded by unmeasured patient attributes. Applying established statistical methods, we estimate the magnitude and prevalence of unmeasured confounding required to move an apparently favorable hazard ratio (HR) to the “tipping point” where the new drug is no longer associated with a favorable survival statistically or clinically. Methods: Studying previously reported patients with multiple myeloma treated with an experimental therapy in a clinical trial (N = 290) and patients treated with standard of care therapy from a rigorously matched ECA (N = 290), we applied the method of Lin et al. to adjust the observed treatment effect (HR and 95% CIs) for overall survival to reflect the impact of a (set of) unmeasured confounder(s).[1,2] Lin’s formula incorporates both the unmeasured confounder’s theoretical association with mortality and its prevalence according to treatment group (i.e., single-arm trial vs. ECA). Results: The observed treatment effect for the single-arm trial treated vs. ECA patients was HR 0.76 (95% CI 0.63-0.91). We estimated the impact of an unmeasured confounder (where HR for overall survival of those patients with and without the confounder is set to 1.5) by its prevalence in each group. When the prevalence of the unmeasured confounder is balanced across groups there is no change in the observed treatment effect. 
When the presence of the confounder is 70% for ECA patients and absent in the trial patients, the clinical tipping point occurs with loss of the favorable HR (i.e., HR 1.02, 95% CI: 0.85-1.23). Conclusions: While novel analytic methods like ECAs have the potential to accelerate drug development, the lack of randomization raises concern for potential unmeasured confounding. Applying Lin’s method, we illustrate that the impact of unmeasured confounding on HR estimates from a single-arm trial vs. an ECA is a function of both the association with mortality and asymmetries in prevalence. Consistency in the efficacy conclusion for all clinically tenable assumptions indicates a qualitatively reliable conclusion. Friends of Cancer Research whitepaper (2019): available online at https://www.focr.org/sites/default/files/Panel-1_External_Control_Arms2019AM.pdf. Lin D, Psaty B, Kronmal R. Assessing the sensitivity of regression results to unmeasured confounders in observational studies. Biometrics. 1998; 54:948–963.
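For a single binary unmeasured confounder, Lin's adjustment used above amounts to dividing the observed HR by a prevalence-weighted bias factor. The sketch below reproduces the abstract's scenario (confounder-outcome HR 1.5, prevalence 0% in the trial arm vs. 70% in the ECA) up to rounding of the published inputs:

```python
def lin_adjusted_hr(hr_obs, gamma, p_trt, p_ctl):
    """Binary-confounder special case of Lin, Psaty & Kronmal (1998):
    divide the observed HR by the confounding bias factor implied by
    the confounder's outcome HR (gamma) and its prevalence in each arm."""
    bias = (1 + (gamma - 1) * p_trt) / (1 + (gamma - 1) * p_ctl)
    return hr_obs / bias

# Observed HR 0.76 (95% CI 0.63-0.91); confounder HR 1.5, absent in the
# trial arm, prevalence 70% in the external control arm.
for est in (0.76, 0.63, 0.91):
    print(round(lin_adjusted_hr(est, 1.5, 0.0, 0.70), 2))
# close to the reported tipping point of HR 1.02 (95% CI 0.85-1.23)
```

Varying the assumed prevalence until the adjusted CI crosses 1 is exactly the "tipping point" search described in the abstract.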
5

Palta, Mari, and Tzy-Jyun Yao. "Analysis of Longitudinal Data with Unmeasured Confounders." Biometrics 47, no. 4 (December 1991): 1355. http://dx.doi.org/10.2307/2532391.

6

Szarewski, A., and D. Mansour. "Study subject to unmeasured confounders and biases." BMJ 342 (May 31, 2011): d3349. http://dx.doi.org/10.1136/bmj.d3349.

7

Navadeh, Soodabeh, Ali Mirzazadeh, Willi McFarland, Phillip Coffin, Mohammad Chehrazi, Kazem Mohammad, Maryam Nazemipour, Mohammad Ali Mansournia, Lawrence C. McCandless, and Kimberly Page. "Unsafe Injection Is Associated with Higher HIV Testing after Bayesian Adjustment for Unmeasured Confounding." Archives of Iranian Medicine 23, no. 12 (December 1, 2020): 848–55. http://dx.doi.org/10.34172/aim.2020.113.

Abstract:
Background: To apply a novel method to adjust for HIV knowledge as an unmeasured confounder for the effect of unsafe injection on future HIV testing. Methods: The data were collected from 601 HIV-negative persons who inject drugs (PWID) from a cohort in San Francisco. The panel-data generalized estimating equations (GEE) technique was used to estimate the adjusted risk ratio (RR) for the effect of unsafe injection on not being tested (NBT) for HIV. Expert opinion quantified the bias parameters to adjust for insufficient knowledge about HIV transmission as an unmeasured confounder using Bayesian bias analysis. Results: Expert opinion estimated that 2.5%–40.0% of PWID with unsafe injection had insufficient HIV knowledge; whereas 1.0%–20.0% who practiced safe injection had insufficient knowledge. Experts also estimated the RR for the association between insufficient knowledge and NBT for HIV as 1.1-5.0. The RR estimate for the association between unsafe injection and NBT for HIV, adjusted for measured confounders, was 0.96 (95% confidence interval: 0.89,1.03). However, the RR estimate decreased to 0.82 (95% credible interval: 0.64, 0.99) after adjusting for insufficient knowledge as an unmeasured confounder. Conclusion: Our Bayesian approach that uses expert opinion to adjust for unmeasured confounders revealed that PWID who practice unsafe injection are more likely to be tested for HIV – an association that was not seen by conventional analysis.
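A simplified, non-Bayesian stand-in for this kind of bias analysis can be sketched by Monte Carlo sampling of the expert-elicited bias parameters (uniform draws over the ranges quoted in the abstract). The paper's actual analysis is a Bayesian GEE model, so this is only illustrative of the direction of the adjustment:

```python
import random

random.seed(0)

def adjusted_rr_draws(rr_obs, n=5000):
    """Monte Carlo (probabilistic) bias analysis for a single binary
    unmeasured confounder, using uniform priors over the
    expert-elicited ranges given in the abstract."""
    draws = []
    for _ in range(n):
        p1 = random.uniform(0.025, 0.40)  # insufficient HIV knowledge, unsafe injectors
        p0 = random.uniform(0.01, 0.20)   # insufficient HIV knowledge, safe injectors
        rr_ud = random.uniform(1.1, 5.0)  # confounder-outcome risk ratio
        bias = (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)
        draws.append(rr_obs / bias)
    return sorted(draws)

draws = adjusted_rr_draws(0.96)  # conventional RR estimate from the abstract
median = draws[len(draws) // 2]
print(median < 0.96)  # True: adjustment pulls the RR below the conventional estimate
```

This reproduces the qualitative finding: because the confounder is more prevalent among unsafe injectors, adjustment moves the RR away from the null, in line with the reported shift from 0.96 to 0.82.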
8

McCandless, Lawrence C. "Meta-Analysis of Observational Studies with Unmeasured Confounders." International Journal of Biostatistics 8, no. 2 (January 6, 2012): 1–31. http://dx.doi.org/10.2202/1557-4679.1350.

9

Flanders, W. Dana. "Negative-Control Exposures: Adjusting for Unmeasured and Measured Confounders With Bounds for Remaining Bias." Epidemiology 34, no. 6 (September 26, 2023): 850–53. http://dx.doi.org/10.1097/ede.0000000000001650.

Abstract:
Negative-control exposures can be used to detect and even adjust for confounding that remains after control of measured confounders. A newly described method allows the analyst to reduce residual confounding by unmeasured confounders U by using negative-control exposures to define and select a subcohort wherein the U-distribution among the exposed is similar to that among the unexposed. Here, we show that conventional methods can be used to control for measured confounders in conjunction with the new method to control for unmeasured ones. We also derive an expression for bias that remains after applying this approach. We express remaining bias in terms of a “balancing” parameter and show that this parameter is bounded by a summary variational distance between the U-distribution in the exposed and the unexposed. These measures describe and bound the extent of remaining confounding after using negative controls to adjust for unmeasured confounders with conventional control of measured confounders.
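The "summary variational distance" bounding the balancing parameter is, for a discrete unmeasured confounder U, the usual total variation distance between the U-distributions of the exposed and the unexposed: half the L1 distance. A toy sketch with invented distributions:

```python
def total_variation(p, q):
    """Total variation (summary variational) distance between two
    discrete distributions of an unmeasured confounder U."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical U-distributions among exposed vs. unexposed after
# negative-control subcohort selection.
print(total_variation([0.2, 0.5, 0.3], [0.25, 0.5, 0.25]))  # 0.05
```

The closer the selected subcohort drives this distance to zero, the tighter the bound on the residual confounding bias.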
10

Luiz, Ronir Raggio, and Maria Deolinda Borges Cabral. "Sensitivity analysis for an unmeasured confounder: a review of two independent methods." Revista Brasileira de Epidemiologia 13, no. 2 (June 2010): 188–98. http://dx.doi.org/10.1590/s1415-790x2010000200002.

Abstract:
One of the main purposes of epidemiological studies is to estimate causal effects. Causal inference should be addressed by both observational and experimental studies. A strong constraint on the interpretation of observational studies is the possible presence of unobserved confounders (hidden biases). The possible effects of unobserved confounders may be assessed through a sensitivity analysis that determines how strong the effects of an unmeasured confounder would have to be to explain an apparent association, and what characteristics such a confounder would need in order to produce that effect. The purpose of this paper is to review and integrate two independent sensitivity analysis methods, presented to assess the impact of an unmeasured confounder variable: one developed by Greenland from an epidemiological perspective, and the other developed from a statistical standpoint by Rosenbaum. By combining (or merging) epidemiological and statistical issues, this integration becomes a more complete and direct sensitivity analysis, encouraging its wider diffusion and additional applications. As observational studies are more subject to biases and confounding than experimental settings, the consideration of epidemiological and statistical aspects in sensitivity analysis strengthens the causal inference.
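Of the two approaches the paper integrates, Rosenbaum's can be sketched concretely: if a hidden confounder can multiply the within-pair odds of treatment by at most gamma, the one-sided sign-test p-value for matched pairs is bounded by a Binomial(n, gamma/(1+gamma)) tail. A sketch with invented pair counts:

```python
from math import comb

def rosenbaum_upper_p(n_pairs, n_favor_treated, gamma):
    """Upper bound on the one-sided sign-test p-value for matched pairs
    under Rosenbaum's sensitivity model: a hidden confounder may shift
    the within-pair odds of treatment by at most `gamma`, so the count
    of pairs favoring the treated is stochastically bounded by
    Binomial(n, gamma / (1 + gamma))."""
    p = gamma / (1 + gamma)
    return sum(comb(n_pairs, k) * p ** k * (1 - p) ** (n_pairs - k)
               for k in range(n_favor_treated, n_pairs + 1))

# Invented example: 100 discordant pairs, 65 favoring the treated.
print(rosenbaum_upper_p(100, 65, 1.0) < 0.01)  # True: significant with no hidden bias
print(rosenbaum_upper_p(100, 65, 2.0) > 0.05)  # True: not robust to gamma = 2
```

Reporting the gamma at which significance is lost plays the same role as Greenland's external-adjustment calculations: it makes explicit how much hidden bias the conclusion can tolerate.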

Theses on the topic "Unmeasured confounders"

1

Wang, Yingbo. "Using propensity score to adjust for unmeasured confounders in small area studies of environmental exposures and health". Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/51497.

Abstract:
Small area studies are commonly used in epidemiology to assess the impact of risk factors on health outcomes when data are available at the aggregated level. However, the estimates are often biased due to unmeasured confounders which cannot be taken into account. Integrating individual-level information into area-level data in ecological studies may help reduce bias. To investigate this, I develop an area/ecological level propensity score (PS) to integrate individual-level data and then synthesise the area-level PS with routinely available area-level datasets, such as hospital episode statistics (HES) and census data. This framework comprises three steps: 1. Individual level survey data is used to obtain information on the potential confounders, which are not measured at the area-level. Using a Bayesian hierarchical framework I synthesise these variables and calculate PS at the ecological level, taking into account the correlation among the potential confounders. 2. The calculated PS is included as a scalar quantity in the regression model linking environmental exposure/risk factors and health outcome. As PS has no epidemiological interpretation, I introduce a number of flexible functions to allow for nonlinear effects, such as fixed-knot splines, reversible jump MCMC (RJ) and random walk (RW). 3. As real surveys are typically characterized by a limited coverage compared to small area studies, I impute the ecological PS in the areas with no survey coverage. I propose two new imputation models: random walk and cluster imputation (including (a) regression tree and (b) profile regression) to relax the assumption of linearity, and through simulations, both imputation models are shown to produce better results than the traditional linear imputation model.
I conclude that integrating individual-level data via PS is a promising method to reduce the bias intrinsic in ecological studies due to unmeasured confounders and I introduce a real application on small area studies for evaluating the effect of air pollution on CVD hospital admissions in England.
2

Duong, Chi-Hong. "Approches statistiques en pharmacoépidémiologie pour la prise en compte des facteurs de confusion indirectement mesurés dans les bases de données médico-administratives : Application aux médicaments pris au cours de la grossesse". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR028.

Abstract:
Healthcare administrative databases are increasingly used in pharmacoepidemiology. However, the existence of unmeasured and uncontrolled confounders can bias analyses. In this work, we explore the value of leveraging the richness of data through large-scale selection of a large number of measured covariates correlated with unmeasured confounders to indirectly adjust for them. This concept is the cornerstone of the High-dimensional propensity score (hdPS), and we apply the same approach to G-computation (GC) and Targeted Maximum Likelihood Estimation (TMLE). Although these methods have been evaluated in some simulation studies, their performance on large real-world databases remains underexplored. This thesis aims to assess their contributions to mitigating the effect of directly or indirectly measured confounders in the French administrative health care database (SNDS) for pharmacoepidemiological studies in pregnant women. In Chapter 2, we used a set of reference drugs related to prematurity to compare the performance of the three methods. All reduced confounding bias, with GC showing the best performance. In Chapter 3, we conducted an hdPS analysis in a more complex modeling setting to investigate the controversial association between non-steroidal anti-inflammatory drugs (NSAIDs) and miscarriage. We implemented a Cox model with time-dependent variables and the “lag-time” approach to address other biases (immortal time bias and protopathic bias). We compared analyses adjusted for factors chosen according to the current literature with those chosen by the hdPS algorithm. In both types of analysis, NSAIDs were associated with an increased risk of miscarriage, and the observed differences in estimated risks could partly be explained by the difference between the causal estimands targeted by the approaches. Our work confirms the contribution of statistical methods to reducing confounding bias. 
It also highlights major challenges encountered during their practical application, related to the complexity of modeling and study design, as well as their computational cost.
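The large-scale covariate selection underpinning the hdPS step of this thesis typically ranks empirically identified candidate covariates by their potential to bias the exposure-outcome association, via the Bross bias formula. A sketch with invented covariate names and made-up prevalences:

```python
import math

def bross_bias(p_exposed, p_unexposed, rr_outcome):
    """Bross (1966) multiplicative confounding bias for a binary
    covariate, given its prevalence among the exposed and unexposed
    and its (apparent) relative risk on the outcome.  hdPS prioritizes
    candidate covariates by |log| of this quantity."""
    return ((p_exposed * (rr_outcome - 1) + 1) /
            (p_unexposed * (rr_outcome - 1) + 1))

# Hypothetical candidates: (prevalence exposed, prevalence unexposed, outcome RR)
candidates = {
    "prior_insulin_rx": (0.30, 0.10, 2.0),
    "recent_hba1c_test": (0.55, 0.50, 1.2),
    "hospitalization_past_yr": (0.25, 0.05, 3.0),
}
ranked = sorted(candidates,
                key=lambda name: abs(math.log(bross_bias(*candidates[name]))),
                reverse=True)
print(ranked)  # ['hospitalization_past_yr', 'prior_insulin_rx', 'recent_hba1c_test']
```

The top-ranked covariates then enter the propensity score (or, as in this thesis, the G-computation and TMLE models) as indirect proxies for confounders that were never measured directly.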
3

Su, Chien-Chou (蘇建州). "Comparative Mortality Risk of Antipsychotic Medications in Elderly Patients with Stroke: Adjusting for Unmeasured Confounders with Stroke Registry Database." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/mjf9ea.

Abstract:
Doctoral dissertation, National Cheng Kung University, Institute of Clinical Pharmacy and Pharmaceutical Sciences, 2019 (academic year 107).
Background: Elderly patients are at risk for developing psychosis after stroke, including delusions, hallucinations, agitation, and disorganized behavior. According to previous guidelines, antipsychotics are the first-line pharmacological intervention for psychosis, but elderly patients who are treated with antipsychotics might have an increased risk of mortality based on US FDA safety communications. However, there are limited studies examining mortality risk associated with antipsychotic use in elderly patients who have had a stroke. The major limitations of these studies include selection bias, immortal time bias, and unmeasured confounders, which can lead to bias related to the relative risks of antipsychotic treatment and result in controversial findings. Objectives: To evaluate prescription patterns and comparative mortality risk of antipsychotic use in elderly patients after a stroke by using an active comparator and new user design with an external adjustment method. Methods, design and setting: We conducted a retrospective cohort study to identify patients aged above 65 years old admitted for stroke in the National Health Insurance Database (NHID) from 2002 to 2014. These patients were not prescribed antipsychotics before their discharge date and were followed until they started to receive antipsychotic treatment. The date of antipsychotic use was set as the index date. The covariates were retrieved from claims during the one-year look-back period prior to the index date. We then linked to multi-center stroke registry databases to retrieve additional variables, including smoking history, body mass index, National Institutes of Health Stroke Scale (NIHSS), the Barthel index, and the modified Rankin Scale (mRS). Exposure: Antipsychotics covered by the NHI program. Main outcome: One-year all-cause mortality. Secondary outcome: One-year cause-specific mortality.
Statistical analysis: Descriptive statistics were used to characterize the baseline demographics and antipsychotic prescription patterns. To compare antipsychotics with respect to risk of all-cause and cause-specific mortality, we performed Cox proportional hazard models using the propensity score calibration (PSC) method to adjust for unmeasured confounders in order to estimate the relative risk among antipsychotics in elderly stroke patients. In addition, in order to avoid the surrogacy assumption due to the use of the PSC method, the two-stage calibration (TSC) method (without the surrogacy assumption) was used to adjust unmeasured confounders and to compare the differences between the PSC and TSC methods. Results: There were 72,441 elderly stroke patients who initiated treatment with antipsychotics during the study period. The proportion of incident use of antipsychotics was 26.2% (2002-2015). The majority of the elderly stroke patients had received only a single antipsychotic treatment (99%), and the most commonly used antipsychotic was quetiapine (39.9%). We selected the antipsychotics, including quetiapine, haloperidol and risperidone, which were prescribed for post-stroke psychosis treatment in previous literature on this topic, and compared the mortality risk among these antipsychotics. In the PSC-adjusted intent to treat analyses, haloperidol [adjusted hazard ratio (aHR)=1.22; 95% confidence interval (CI) 1.18-1.27] and risperidone (aHR=1.31; 95% CI 1.24-1.38) users had a higher mortality risk as compared to quetiapine users. Haloperidol and risperidone exhibited a dose-response related to mortality risk after controlling for confounders. The sensitivity analyses assessing the influence of the study population showed similar patterns. 
In the cause-specific mortality analyses, risperidone (aHR=1.25; 95% CI 1.14-1.38) users had higher cause-specific mortality from cerebro-cardiovascular disease compared to quetiapine users, but there were no significant differences found in the haloperidol (aHR=1.04; 95% CI 0.97-1.12) and quetiapine (reference) users. In addition, we found that the surrogacy assumption was not violated. PSC and TSC methods exhibited similar results in terms of mortality risk related to the use of antipsychotics. Conclusions: The significant variations in the differences in mortality risk among antipsychotic agents suggest that antipsychotic selection and dosing may affect survival in elderly stroke patients. In addition, we also found the surrogacy assumption should be tested to determine whether the assumption is violated when the PSC method is performed to adjust for unmeasured confounders. If this assumption is violated, PSC is far less useful and may even increase bias. When the PSC assumption is violated, the TSC method can provide more precise treatment effects than PSC.

Book chapters on the topic "Unmeasured confounders"

1

Lash, Timothy L., Aliza K. Fink, and Matthew P. Fox. "Unmeasured and Unknown Confounders." In Statistics for Biology and Health, 59–78. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/b97920_5.

2

Lash, Timothy L., Aliza K. Fink, and Matthew P. Fox. "Unmeasured and Unknown Confounders." In Statistics for Biology and Health, 59–78. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-87959-8_5.

3

Rothman, Kenneth J., Krista F. Huybrechts, and Eleanor J. Murray. "An Introduction to Some Advanced Topics." In Epidemiology, 281–300. 3rd ed. New York: Oxford University Press, 2025. http://dx.doi.org/10.1093/oso/9780197751541.003.0015.

Abstract:
Abstract Many topics in epidemiology are too specialized for an introductory text. In this final chapter, a few such topics are introduced, providing an on-ramp to further study for those wishing to pursue a deeper knowledge of epidemiologic methods. Since every dataset has some missing data, the chapter starts by outlining approaches to handling missing data in an analysis. Second, it describes how causal diagrams can be helpful to distinguish causal and noncausal relations between study variables and to identify variables that should be accounted for in the analyses. Third, it briefly explains why novel approaches, g-methods, are needed to account for time-varying confounding, which results from a feedback loop between exposure and confounders. Fourth, instrumental variables are introduced as an approach that can theoretically control for confounding even if some confounders remain unmeasured. Finally, quantitative bias analyses offer a structured approach to quantifying the potential impact of systematic errors on study findings.
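The instrumental-variable idea introduced in this chapter can be illustrated with the Wald ratio estimator, which recovers a causal effect despite an unmeasured confounder provided a valid instrument exists. The simulation below is our own (true effect 2.0; u is the unmeasured confounder; z affects the exposure, is independent of u, and has no direct effect on the outcome):

```python
import random

random.seed(7)

def cov(u, v):
    """Sample covariance (n-normalized; the n vs. n-1 choice cancels in the ratio)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def wald_iv(z, x, y):
    """Wald (ratio) instrumental-variable estimator: cov(z, y) / cov(z, x)."""
    return cov(z, y) / cov(z, x)

n = 20_000
z = [random.gauss(0, 1) for _ in range(n)]  # instrument
u = [random.gauss(0, 1) for _ in range(n)]  # unmeasured confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

naive = cov(x, y) / cov(x, x)  # confounded OLS slope, close to 3.0 here
print(round(wald_iv(z, x, y), 1), round(naive, 1))  # IV is near the true 2.0
```

The naive slope absorbs the u-pathway and overstates the effect, while the instrument isolates only the z-induced variation in x, which is the "theoretical control for unmeasured confounders" the chapter refers to.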
4

Chib, Siddhartha. "On Inferring Effects of Binary Treatments with Unobserved Confounders." In Bayesian Statistics 7, 65–84. Oxford: Oxford University Press, 2003. http://dx.doi.org/10.1093/oso/9780198526155.003.0004.

Abstract:
Abstract One of the most pervasive problems in statistics is the following. Suppose x is a binary {0,1} indicator of a treatment (where the term treatment is intended to embrace a covariate of interest, not necessarily one arising in a medical or epidemiological setting) and y is a response, and the objective is to isolate the effect (“treatment effect”) of x on y. To fix ideas, x may be an indicator of smoking status taking the value one if the subject is a current smoker and the value zero otherwise, and y may be some measure of subject health. For simplicity we assume that both the treatment and response are univariate, although this can obviously be relaxed. It is well understood that outside of the experimental setting inferring effects of this kind raises a multitude of challenges that are not easy to address (even in a randomized treatment setting, inference is difficult when human subjects are involved due to dropouts, non-compliance and other such complications). The problem is that when the treatment intake is non-random, as in an observational setting where subjects self-select into a treatment state, the choice of treatment may be influenced by unmeasured or unmeasurable or unobservable covariates that also affect the response.
APA, Harvard, Vancouver, ISO, and other styles
5

Savarese, Gianluigi, Marija Polovina, and Gerasimos Filippatos. "Clinical trial design and interpretation". In The ESC Textbook of Heart Failure, edited by Petar M. Seferović, Andrew J. S. Coats, Gerasimos Filippatos, Stefan D. Anker, Johann Bauersachs, and Giuseppe Rosano, 925–34. Oxford University Press, Oxford, 2023. http://dx.doi.org/10.1093/med/9780198891628.003.0083.

Full text
Abstract (summary):
Abstract Randomized controlled trials (RCTs) represent the gold standard for testing treatment benefit in medicine because randomization, unlike adjustment in observational studies, controls for any known and unknown, measured and unmeasured confounder. Blinding is a further procedure used in RCTs to reduce the risk of bias. Setting adequate selection criteria in RCTs is key to testing the population that the intervention is intended for, enriching the trial for the occurrence of cardiovascular endpoints, and reducing the risk of competing events. However, overly strict selection criteria might raise questions about the generalizability of RCT findings to real-world populations. Intention-to-treat analysis is the gold standard in RCT analysis, but the use of per-protocol analysis as a supporting sensitivity analysis is highly recommended. Pre-specifying subgroup analyses prevents the selective reporting of results and leads to more credible and valuable findings than post-hoc analyses. Novel, more dynamic trial designs might make it possible to run more, more effective, and cheaper randomized controlled trials in the heart failure field.
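The claim that randomization balances even unmeasured confounders, while self-selection does not, can be checked in a few lines of simulation (the confounder and selection mechanism below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

u = rng.normal(size=n)                             # an unmeasured confounder
x_rct = rng.integers(0, 2, size=n)                 # randomized assignment
x_obs = (u + rng.normal(size=n) > 0).astype(int)   # observational self-selection

# Under randomization, u is balanced between arms even though it was
# never measured; under self-selection it is strongly imbalanced.
imbalance_rct = abs(u[x_rct == 1].mean() - u[x_rct == 0].mean())
imbalance_obs = abs(u[x_obs == 1].mean() - u[x_obs == 0].mean())

print(imbalance_rct < 0.05, imbalance_obs > 0.5)
```

At this sample size the randomized arms differ on u only by sampling noise, while the self-selected groups differ by roughly a full standard deviation.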
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Unmeasured confounders"

1

Shimizu, Tatsuhiro. "Diffusion Model in Causal Inference with Unmeasured Confounders". In 2023 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2023. http://dx.doi.org/10.1109/ssci52147.2023.10372009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McCann, Cameron. "Multilevel Mediation With Unmeasured Cluster-Level Confounders: Evaluating Propensity Score Models". In AERA 2024. USA: AERA, 2024. http://dx.doi.org/10.3102/ip.24.2150208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Batista, Bernardo Pinheiro de Senna Nogueira, Suzana Sales de Aguiar, Ana Carolina Padula Ribeiro Pereira, Rosalina Jorge Koifman, and Anke Bergmann. "IMPACT OF BREAST RECONSTRUCTION ON MORTALITY AFTER BREAST CANCER: SURVIVAL ANALYSIS IN A COHORT OF 620 CONSECUTIVE PATIENTS". In Abstracts from the Brazilian Breast Cancer Symposium - BBCS 2021. Mastology, 2021. http://dx.doi.org/10.29289/259453942021v31s2094.

Full text
Abstract (summary):
Background: Access to breast reconstruction is a complex and poorly understood aspect of survival. In the United States, although the rate of immediate reconstruction has tripled in the past 20 years, less than 40% of women undergoing a mastectomy will do so as part of the same procedure. Although there is common understanding that breast reconstruction is oncologically safe, published data on its impact on survival show conflicting and unjustified observations. Methods: We performed a secondary survival analysis in a fixed cohort of 620 consecutive patients who underwent mastectomy between August 2001 and November 2002 in a publicly financed tertiary cancer center. Results: Median follow-up was 118.4 months (6–172). Of the 620 patients, 253 (40.8%) died during follow-up, and 94 (15.2%) underwent breast reconstruction. An unadjusted Cox regression model with breast reconstruction as a time-dependent covariate showed a 60% reduction in the risk of death for patients who underwent reconstruction (crude HR=0.40; 95%CI 0.25–0.65; p<0.001). When adjusted for potential confounders registered in the primary study, the risk reduction was 44% (adjusted HR=0.56; 95%CI 0.34–0.92; p=0.02). Conclusion: Access to breast reconstruction is associated with better survival after mastectomy. Although encouraging, these observations lack biological plausibility, suggesting that any causal effect is probably driven by confounding and/or interaction with unmeasured variables. The magnitude of the observed association might suggest that, in settings where access to breast reconstruction is severely limited, patient selection could be an important driver of the observed association.
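The abstract's choice to code reconstruction as a time-dependent covariate matters because reconstruction can only happen to patients who survive long enough to receive it. A toy simulation under a null effect (all parameters hypothetical) shows the immortal-time bias that a naive baseline classification would introduce:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Null scenario: reconstruction has NO effect on survival.
death = rng.exponential(scale=60.0, size=n)       # months to death
scheduled = rng.exponential(scale=40.0, size=n)   # months to planned reconstruction

# Reconstruction happens only if the patient is still alive at that time,
# so "ever reconstructed" patients are guaranteed to survive the waiting
# period. Classifying them at baseline builds in this immortal time; a
# time-dependent covariate, as used in the abstract, avoids it.
ever_recon = scheduled < death

# Spurious survival advantage appears despite the true null effect.
print(death[ever_recon].mean() > death[~ever_recon].mean())
```

This mechanism does not explain away the abstract's result, since the authors did use time-dependent coding, but it illustrates why the remaining gap is plausibly attributed to selection on unmeasured variables.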
APA, Harvard, Vancouver, ISO, and other styles
4

Ding, Sihao, Peng Wu, Fuli Feng, Yitong Wang, Xiangnan He, Yong Liao, and Yongdong Zhang. "Addressing Unmeasured Confounder for Recommendation with Sensitivity Analysis". In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3539240.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Organization reports on the topic "Unmeasured confounders"

1

Hertel, Thomas, David Hummels, Maros Ivanic, and Roman Keeney. How Confident Can We Be in CGE-Based Assessments of Free Trade Agreements? GTAP Working Paper, June 2003. http://dx.doi.org/10.21642/gtap.wp26.

Full text
Abstract (summary):
With the proliferation of Free Trade Agreements (FTAs) over the past decade, demand for quantitative analysis of their likely impacts has surged. The main quantitative tool for performing such analysis is Computable General Equilibrium (CGE) modeling. Yet these models have been widely criticized for performing poorly (Kehoe, 2002) and having weak econometric foundations (McKitrick, 1998; Jorgenson, 1984). FTA results have been shown to be particularly sensitive to the trade elasticities, with small trade elasticities generating large terms of trade effects and relatively modest efficiency gains, whereas large trade elasticities lead to the opposite result. Critics are understandably wary of results being determined largely by the authors' choice of trade elasticities. Where do these trade elasticities come from? CGE modelers typically draw these elasticities from econometric work that uses time series price variation to identify an elasticity of substitution between domestic goods and composite imports (Alaouze, 1977; Alaouze, et al., 1977; Stern et al., 1976; Gallaway, McDaniel and Rivera, 2003). This approach has three problems: the use of point estimates as "truth", the magnitude of the point estimates, and estimating the relevant elasticity. First, modelers take point estimates drawn from the econometric literature while ignoring the precision of these estimates. As we will make clear below, the confidence one has in various CGE conclusions depends critically on the size of the confidence interval around parameter estimates. Standard "robustness checks", such as systematically raising or lowering the substitution parameters, do not properly address this problem because they ignore information about which parameters we know with some precision and which we do not. A second problem with most existing studies derives from the use of import price series to identify home vs. foreign substitution, which tends to systematically understate the true elasticity.
This is because these estimates take price variation as exogenous when estimating the import demand functions, and ignore quality variation. When quality is high, import demand and prices will be jointly high. This biases estimated elasticities toward zero. A related point is that the fixed-weight import price series used by most authors are theoretically inappropriate for estimating the elasticities of interest. CGE modelers generally examine a nested utility structure, with domestic production substituting for a CES composite import bundle. The appropriate price series is then the corresponding CES price index among foreign varieties. Constructing such an index requires knowledge of the elasticity of substitution among foreign varieties (see below). By using a fixed-weight import price series, previous estimates place too much weight on high foreign prices and too little weight on low foreign prices. In other words, they overstate the degree of price variation that exists relative to a CES price index. Reconciling small trade volume movements with large import price series movements requires a small elasticity of substitution. This problem, and that of unmeasured quality variation, helps explain why typical estimated elasticities are very small. The third problem with the existing literature is that estimates taken from other researchers' studies typically employ different levels of aggregation, and exploit different sources of price variation, from what policy modelers have in mind. Employment of elasticities in experiments ill-matched to their original estimation can be problematic. For example, estimates may be calculated at a higher or lower level of aggregation than the level of analysis the modeler wants to examine. Estimating substitutability across sources for paddy rice gives a quite different answer than estimates that look at agriculture as a whole.
When analyzing Free Trade Agreements, the principal policy experiment is a change in relative prices among foreign suppliers caused by lowering tariffs within the FTA. Understanding the substitution this will induce across those suppliers is critical to gauging the FTA's real effects. Using home vs. foreign elasticities rather than elasticities of substitution among imports supplied from different countries may be quite misleading. Moreover, these "sourcing" elasticities are critical for constructing composite import price series to appropriately estimate home vs. foreign substitutability. In summary, the history of estimating the substitution elasticities governing trade flows in CGE models has been checkered at best. Clearly there is a need for improved econometric estimation of these trade elasticities that is well-integrated into the CGE modeling framework. This paper provides such estimation and integration, and has several significant merits. First, we choose our experiment carefully. Our CGE analysis focuses on the prospective Free Trade Agreement of the Americas (FTAA) currently under negotiation. This is one of the most important FTAs currently "in play" in international negotiations. It also fits nicely with the source data used to estimate the trade elasticities, which is largely based on imports into North and South America. Our assessment is done in a perfectly competitive, comparative static setting in order to emphasize the role of the trade elasticities in determining the conventional gains/losses from such an FTA. This type of model is still widely used by government agencies for the evaluation of such agreements. Extensions to incorporate imperfect competition are straightforward, but involve the introduction of additional parameters (markups, extent of unexploited scale economies) as well as structural assumptions (entry/no-entry, nature of inter-firm rivalry) that introduce further uncertainty.
Since our focus is on the effects of a PTA, we estimate elasticities of substitution across multiple foreign supply sources. We do not use cross-exporter variation in prices or tariffs alone. Exporter price series exhibit a high degree of multicollinearity, and in any case would be subject to unmeasured quality variation as described previously. Similarly, tariff variation by itself is typically unhelpful because, by their very nature, Most Favored Nation (MFN) tariffs are non-discriminatory, affecting all suppliers in the same way. Tariff preferences, where they exist, are often difficult to measure, sometimes being confounded by quantitative barriers, restrictive rules of origin, and other restrictions. Instead we employ a unique methodology and data set drawing on not only tariffs, but also bilateral transportation costs for goods traded internationally (Hummels, 1999). Transportation costs vary much more widely than do tariffs, allowing much more precise estimation of the trade elasticities that are central to CGE analysis of FTAs. We have highly disaggregated commodity trade flow data, and are therefore able to provide estimates that precisely match the commodity aggregation scheme employed in the subsequent CGE model. We follow the GTAP Version 5.0 aggregation scheme, which includes 42 merchandise trade commodities covering food products, natural resources, and manufactured goods. With the exception of two primary commodities that are not traded, we are able to estimate trade elasticities for all merchandise commodities that are significantly different from zero at the 95% confidence level. Rather than producing point estimates of the resulting welfare, export, and employment effects, we report confidence intervals instead. These are based on repeated solution of the model, drawing from a distribution of trade elasticity estimates constructed based on the econometrically estimated standard errors.
There is now a long history of CGE studies based on systematic sensitivity analysis (SSA) (Harrison and Vinod, 1992; Wigle, 1991; Pagon and Shannon, 1987) Ho
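The systematic sensitivity analysis the authors describe, drawing elasticities from their estimated sampling distribution and reporting interval results rather than point estimates, can be sketched in miniature. Everything numeric below (the elasticity estimate, its standard error, and the one-line "welfare" formula) is hypothetical and stands in for repeated solution of a full CGE model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimate and standard error for a trade elasticity;
# in the paper these come from econometric estimation on trade-cost data.
sigma_hat, se = 5.0, 1.2
draws = rng.normal(sigma_hat, se, size=10_000)
draws = draws[draws > 1.0]            # keep economically admissible values

# Stylized stand-in for re-solving the model at each drawn elasticity:
# here, a toy efficiency-gain formula for a 10% tariff cut.
tariff_cut = 0.10
welfare = 0.5 * (draws - 1.0) * tariff_cut**2

lo, hi = np.percentile(welfare, [2.5, 97.5])
print(f"95% interval for the welfare effect: [{lo:.4f}, {hi:.4f}]")
```

The point of the exercise is the reporting convention: an interval that propagates parameter uncertainty into the model outcome, rather than a single number computed at the point estimate.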
APA, Harvard, Vancouver, ISO, and other styles
