Academic literature on the topic 'Inverse probability (IP) weighting'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Inverse probability (IP) weighting.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Inverse probability (IP) weighting"

1

Switkowski, Karen M., Izzuddin M. Aris, Véronique Gingras, Emily Oken, and Jessica G. Young. "Estimated causal effects of complementary feeding behaviors on early childhood diet quality in a US cohort." American Journal of Clinical Nutrition 115, no. 4 (January 14, 2022): 1105–14. http://dx.doi.org/10.1093/ajcn/nqac003.

Abstract:
Background: Complementary feeding (CF) provides an opportunity to shape children's future dietary habits, setting the foundation for good nutrition and health. Objectives: We estimated effects of 3 CF behaviors on early childhood diet quality using inverse probability (IP) weighting of marginal structural models (MSMs). Methods: Among 1041 children from the Boston-area Project Viva cohort, we estimated effects on the mean Youth Healthy Eating Index (YHEI) score in early childhood of 1) delayed (≥12 mo) compared with early (<12 mo) introduction of sweets and fruit juice; 2) continued compared with ceased offering of initially refused foods; and 3) early (<12 mo) compared with late (≥12 mo) introduction of flavor/texture variety. Mothers reported CF behaviors at 1 y and completed FFQs for children in early childhood (median age: 3.1 y). We estimated average treatment effects (ATEs) using IP weighting of MSMs to adjust for both confounding and selection bias due to censored outcomes and examined effect modification by child sex and breastfeeding compared with formula feeding at 6 mo. Results: Twelve percent of mothers delayed introducing sweets/fruit juice, 93% continued offering initially refused foods, and 32% introduced flavor/texture variety early. The mean ± SD YHEI score was 52.8 ± 9.2 points. In adjusted models, we estimated a higher mean YHEI score with delayed (compared with early) sweets and fruit juice among breastfeeding children (ATE: 4.5 points; 95% CI: 1.0, 7.4 points), as well as with continued (compared with ceased) offering of refused foods among females (ATE: 5.4 points; 95% CI: 0.8, 9.1 points). The ATE for early (compared with late) flavor/texture variety was 1.7 points (95% CI: 0.3, 3.2 points) overall and stronger (2.8 points; 95% CI: 0.7, 5.1 points) among the formula-fed group.
Conclusions: Delayed introduction of sweets/juice, continued offering of refused foods, and early flavor/texture variety may all result in higher childhood diet quality. Effects may depend on child sex and infant breastfeeding status.
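The estimation strategy this abstract describes, weighting each subject by the inverse probability of the treatment actually received so that the weighted contrast recovers a marginal causal effect, can be illustrated with a small simulation. This is an invented sketch, not the Project Viva data: the propensity scores are known by construction here, whereas a real MSM analysis estimates them from covariates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical binary confounder (e.g. a baseline feeding covariate)
x = rng.binomial(1, 0.5, n)

# Treatment probability depends on the confounder (known by construction here)
ps = np.where(x == 1, 0.7, 0.3)
a = rng.binomial(1, ps)

# Outcome depends on treatment (true marginal effect = 2.0) and the confounder
y = 2.0 * a + 3.0 * x + rng.normal(0.0, 1.0, n)

# Naive group comparison is confounded (biased upward in this setup)
naive = y[a == 1].mean() - y[a == 0].mean()

# IP weights: inverse probability of the treatment level actually received
w = np.where(a == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Weighted means mimic a randomized trial and recover the marginal ATE
mu1 = np.sum(w * y * (a == 1)) / np.sum(w * (a == 1))
mu0 = np.sum(w * y * (a == 0)) / np.sum(w * (a == 0))
ate = mu1 - mu0
```

With these settings the naive contrast is roughly 3.2 points, while the weighted contrast is close to the true effect of 2.0.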
2

Hurvitz, Sara A., Annie Guerin, Melissa G. Brammer, Ellie Guardino, Zheng-Yi Zhou, Michael S. Kaminsky, Eric Q. Wu, and Deepa Lalla. "Comprehensive investigation of adverse event (AE)-related costs in patients with metastatic breast cancer (MBC) treated with first- and second-line chemotherapies." Journal of Clinical Oncology 30, no. 15_suppl (May 20, 2012): 1037. http://dx.doi.org/10.1200/jco.2012.30.15_suppl.1037.

Abstract:
Background: MBC is incurable and managed with ongoing therapy. This study examined the incremental costs of chemotherapy-associated AEs in MBC. Methods: The PharMetrics Integrated Database (2000-2010) was used to identify MBC pts treated either 1st or 2nd line with a taxane (T) (paclitaxel or docetaxel) or capecitabine (C)-based regimen for ≥30 days (defined as a treatment episode (TE)). Incremental costs attributable to AEs were assessed by comparing costs incurred during TEs with and without AEs. AEs were identified using medical claims with a diagnosis for ≥1 event of interest (e.g., infections, fatigue, anemia, neutropenia). Pt characteristics were balanced between comparison groups (with and w/o AEs) using an inverse probability weighting method. Incremental monthly costs due to AEs were estimated during the TEs and included the following cost components: inpt (IP), outpt (OP), emergency room (ER), other medical service, pharmacy costs (chemotherapy and other drugs), and total healthcare costs. Statistical comparisons were conducted using Wilcoxon tests. Results: 3,222 women (mean age=57) received a T or C as 1st or 2nd-line therapy for MBC. Of the 2,678 1st-line pts, 69.7% received T and 30.3% C; average monthly total costs ranged from $9,159 to $10,298. AEs were commonly seen in pts treated with 1st-line T and C (94.6% and 83.7%). On average, the total monthly incremental cost associated with AEs was 38% higher ($3,547) for T and 9% higher ($854) for C. IP and other drug costs accounted for a majority of these costs. Of 1,084 2nd-line pts, 66% received T and 34% C, with average monthly total costs ranging from $5,950 to $12,979. 94.4% of T pts and 84% of C pts in the 2nd line had an AE. The average total monthly incremental cost associated with AEs for T was $5,320 and $4,933 for C (69.5% and 82.9% higher vs pts w/o AEs).
Pharmacy costs accounted for a majority of increased costs seen in pts with AEs treated with T; IP and OP accounted for a majority of these costs in pts treated with C. Conclusions: This is the 1st study assessing costs associated with AEs for tx of mBC. AEs are associated with a substantial economic burden that is mainly explained by increased IP, OP, and pharmacy costs.
3

Halpern, Elkan F. "Behind the Numbers: Inverse Probability Weighting." Radiology 271, no. 3 (June 2014): 625–28. http://dx.doi.org/10.1148/radiol.14140035.

4

Skinner, C. J., and J. D'Arrigo. "Inverse probability weighting for clustered nonresponse." Biometrika 98, no. 4 (November 24, 2011): 953–66. http://dx.doi.org/10.1093/biomet/asr058.

5

Ma, Xinwei, and Jingshen Wang. "Robust Inference Using Inverse Probability Weighting." Journal of the American Statistical Association 115, no. 532 (October 16, 2019): 1851–60. http://dx.doi.org/10.1080/01621459.2019.1660173.

6

Zhou, Yunji, Roland A. Matsouaka, and Laine Thomas. "Propensity score weighting under limited overlap and model misspecification." Statistical Methods in Medical Research 29, no. 12 (July 21, 2020): 3721–56. http://dx.doi.org/10.1177/0962280220940334.

Abstract:
Propensity score weighting methods are often used in non-randomized studies to adjust for confounding and assess treatment effects. The most popular among them, the inverse probability weighting, assigns weights that are proportional to the inverse of the conditional probability of a specific treatment assignment, given observed covariates. A key requirement for inverse probability weighting estimation is the positivity assumption, i.e. the propensity score must be bounded away from 0 and 1. In practice, violations of the positivity assumption often manifest by the presence of limited overlap in the propensity score distributions between treatment groups. When these practical violations occur, a small number of highly influential inverse probability weights may lead to unstable inverse probability weighting estimators, with biased estimates and large variances. To mitigate these issues, a number of alternative methods have been proposed, including inverse probability weighting trimming, overlap weights, matching weights, and entropy weights. Because overlap weights, matching weights, and entropy weights target the population for whom there is equipoise (and with adequate overlap) and their estimands depend on the true propensity score, a common criticism is that these estimators may be more sensitive to misspecifications of the propensity score model. In this paper, we conduct extensive simulation studies to compare the performances of inverse probability weighting and inverse probability weighting trimming against those of overlap weights, matching weights, and entropy weights under limited overlap and misspecified propensity score models. Across the wide range of scenarios we considered, overlap weights, matching weights, and entropy weights consistently outperform inverse probability weighting in terms of bias, root mean squared error, and coverage probability.
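Several of the weighting schemes this abstract compares differ only in the "tilting function" h(e) applied to the inverse probability weights. The sketch below uses the standard textbook forms of these weights, not code from the paper; the function and example values are invented for illustration.

```python
import numpy as np

def balancing_weights(e, a, kind="ipw", trim=0.1):
    """Weights for a binary treatment a given propensity scores e.

    kind: 'ipw', 'ipw_trim', 'overlap', 'matching', or 'entropy'.
    (Illustrative formulas; see the paper for estimands and details.)
    """
    e = np.asarray(e, float)
    a = np.asarray(a, float)
    base = a / e + (1 - a) / (1 - e)          # inverse probability weights
    if kind == "ipw":
        return base
    if kind == "ipw_trim":                     # drop units with extreme scores
        keep = (e >= trim) & (e <= 1 - trim)
        return np.where(keep, base, 0.0)
    # Tilting functions h(e) applied to the base IP weights
    if kind == "overlap":
        h = e * (1 - e)
    elif kind == "matching":
        h = np.minimum(e, 1 - e)
    elif kind == "entropy":
        h = -(e * np.log(e) + (1 - e) * np.log(1 - e))
    else:
        raise ValueError(kind)
    return h * base

e = np.array([0.02, 0.5, 0.98])                # limited overlap at the edges
a = np.array([1, 1, 0])
print(balancing_weights(e, a, "ipw"))          # extreme score -> huge weight (50)
print(balancing_weights(e, a, "overlap"))      # bounded: reduces to 1-e for treated
```

The printout illustrates the paper's point: a treated unit with a propensity score of 0.02 receives an IP weight of 50, dominating the estimator, while its overlap weight stays below 1.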
7

Seaman, Shaun R., Ian R. White, Andrew J. Copas, and Leah Li. "Combining Multiple Imputation and Inverse‐Probability Weighting." Biometrics 68, no. 1 (November 3, 2011): 129–37. http://dx.doi.org/10.1111/j.1541-0420.2011.01666.x.

8

McCaffrey, D. F., J. R. Lockwood, and C. M. Setodji. "Inverse probability weighting with error-prone covariates." Biometrika 100, no. 3 (June 24, 2013): 671–80. http://dx.doi.org/10.1093/biomet/ast022.

9

Avagyan, Vahe, and Stijn Vansteelandt. "Stable inverse probability weighting estimation for longitudinal studies." Scandinavian Journal of Statistics 48, no. 3 (July 8, 2021): 1046–67. http://dx.doi.org/10.1111/sjos.12542.

10

Sjölander, Arvid. "Estimation of attributable fractions using inverse probability weighting." Statistical Methods in Medical Research 20, no. 4 (March 11, 2010): 415–28. http://dx.doi.org/10.1177/0962280209349880.


Dissertations / Theses on the topic "Inverse probability (IP) weighting"

1

Liu, Yang. "Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/110.

Abstract:
Many statistical methods for truncated data rely on the assumption that the failure and truncation times are independent, which can be unrealistic in applications. The study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples, in which the time-to-failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time is a worse prognosis of survivorship. Therefore, it is reasonable to assume dependence between the transplant and failure times. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time is both a truncation variable and a predictor of the time-to-failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of the transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
2

Afonso, Lutcy Menezes. "Correcting for attrition in panel data using inverse probability weighting: an application to the European bank system." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/8155.

Abstract:
Master's in Applied Econometrics and Forecasting
This thesis discusses techniques to correct for the potentially biasing effects of missing data. We apply the techniques to an economic model that explains the net interest margin (NIM) of banks, using data from the 15 countries that are part of the European Union (EU15) banking system. The variables that describe banks cover the period from 2004 to 2010. We use the variables that were also used in Carbó-Valverde and Fernández (2007). In addition, macroeconomic variables are used as regressors. The selection that occurs as a consequence of missing values in these regressor variables is dealt with by means of inverse probability weighting (IPW) techniques. The weights are applied to a GMM estimator for a dynamic panel data model that would have been consistent in the absence of missing data.
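The attrition correction applied in this thesis can be sketched in miniature: model the probability that a unit remains observed, then weight each observed unit by the inverse of that probability so the respondents again represent the full sample. Everything below is invented for illustration; the response probability is known rather than estimated, and a simple weighted mean stands in for the thesis's weighted GMM panel estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.normal(size=n)                 # baseline covariate observed for everyone
y = 1.0 + 2.0 * x + rng.normal(size=n)  # outcome with true population mean 1.0

# Probability of remaining in the panel rises with x -> attrition is selective
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
r = rng.binomial(1, p_obs).astype(bool)  # response indicator

# Unweighted respondent mean is biased upward (high-x units over-represented)
naive = y[r].mean()

# IPW: weight each respondent by the inverse of its response probability
w = 1.0 / p_obs[r]
ipw_mean = np.average(y[r], weights=w)
```

Here the naive respondent mean lands near 2.0, while the weighted mean recovers the true population mean of 1.0.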
3

Nåtman, Jonatan. "The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385502.

Abstract:
Propensity score methods are increasingly being used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data is common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW) in combination with Cox proportional hazards models for estimating marginal hazard ratios. Focus is on the performance for different sample sizes and censoring rates, aspects which have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently had better performance in terms of bias and MSE compared to NNM. For the smallest examined sample size with 60 subjects, the use of IPTW led to estimates with bias below 15 %. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption. As a result, censoring the data led to an increase in bias.
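The IPTW step evaluated in this thesis can be sketched as follows: fit a propensity score model, then form stabilized weights to pass to a weighted Cox regression (the survival fit itself is omitted here; in practice one would hand these weights to a survival library such as lifelines' `CoxPHFitter`, which accepts case weights). The data and coefficients below are invented, and the logistic fit is a hand-rolled Newton-Raphson rather than a library call.

```python
import numpy as np

def fit_propensity(X, a, iters=25):
    """Logistic regression of treatment a on covariates X via Newton-Raphson."""
    Z = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        W = p * (1.0 - p)                       # IRLS working weights
        b += np.linalg.solve((Z.T * W) @ Z, Z.T @ (a - p))
    return 1.0 / (1.0 + np.exp(-Z @ b))         # fitted propensity scores

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=(n, 2))
true_ps = 1.0 / (1.0 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
a = rng.binomial(1, true_ps)

e = fit_propensity(x, a)

# Stabilized IPTW: the marginal treatment probability in the numerator keeps
# the weights centered near 1, reducing the variance of the weighted estimator
sw = np.where(a == 1, a.mean() / e, (1 - a.mean()) / (1 - e))
```

Stabilization matters most with small samples like the 60-subject scenario examined in the thesis, where a few unstabilized weights can dominate the pseudo-population.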
4

Diop, Serigne Arona. "Comparing inverse probability of treatment weighting methods and optimal nonbipartite matching for estimating the causal effect of a multicategorical treatment." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34507.

Abstract:
Covariate imbalances between treatment groups are often present in observational studies and can bias comparisons between treatments. This bias can notably be corrected using weighting or matching methods. These correction methods have rarely been compared in the setting of a treatment with several categories (>2). We conducted a simulation study to compare an optimal nonbipartite matching method, inverse probability of treatment weighting, and a modified weighting analogous to matching (matching weights). These comparisons were carried out in a Monte Carlo simulation framework using an exposure variable with 3 groups. A simulation study using real data (plasmode) was also conducted, in which the treatment variable had 5 categories. Among all the methods compared, matching weights appear to be the most robust according to the mean squared error criterion. The results also show that inverse probability of treatment weighting can sometimes be improved by truncation. Moreover, the performance of weighting depends on the degree of overlap between the treatment groups, while the performance of optimal nonbipartite matching depends strongly on the maximum distance allowed for forming a pair (the caliper). However, choosing the optimal caliper is not easy and remains an open question. Furthermore, the results obtained with the plasmode simulation were positive, in that a substantial reduction in bias was observed. All methods were able to significantly reduce confounding bias.
Before using inverse probability of treatment weighting, it is recommended to check for violations of the positivity assumption and for the existence of regions of overlap between the treatment groups.
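For a multicategorical treatment, the weights compared in this thesis generalize directly: each subject is weighted by the inverse of the generalized propensity score of the treatment actually received, and the matching-weights variant rescales by the smallest score. The sketch below uses invented score vectors as a stand-in for fitted values; in practice they would come from, e.g., a multinomial logistic model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1_000, 5   # five treatment categories, as in the plasmode study

# Stand-in for fitted generalized propensity scores P(A=j|X): rows sum to 1
gps = rng.dirichlet(np.ones(k), size=n)

# Observed treatment drawn according to each subject's score vector
a = np.array([rng.choice(k, p=p) for p in gps])
p_received = gps[np.arange(n), a]

# IPTW for a multicategorical treatment: inverse probability of the
# treatment actually received (targets the whole population)
w_iptw = 1.0 / p_received

# Truncation: cap extreme weights at a high percentile to stabilize estimates
w_trunc = np.minimum(w_iptw, np.quantile(w_iptw, 0.99))

# Matching-weights analogue: min_j P(A=j|X) / P(A=received|X), bounded by 1
w_mw = gps.min(axis=1) / p_received
```

The bounded matching weights avoid the extreme values that make untruncated IPTW unstable when some categories have little overlap, which is the trade-off the thesis examines.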
5

KATO, Ryo, and Dan HU. "Auditor Size as a Measure for Audit Quality : A Japanese Study." 名古屋大学大学院経済学研究科附属国際経済政策研究センター, 2014. http://hdl.handle.net/2237/20455.

6

Schmidl, Ricarda. "Empirical essays on job search behavior, active labor market policies, and propensity score balancing methods." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7114/.

Abstract:
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant in the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search, and reduce their search in formal channels. Due to the higher productivity of search, unemployed with a larger network are also expected to have a higher reservation wage than unemployed with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase of the network size on the reservation wage. These results have important implications on the analysis of the job search monitoring or counseling measures that are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment and the short-and medium-term effects on active labor market program (ALMP) participation. It is found that vacancy information significantly increases the speed of entry into employment; at the same time the probability to participate in ALMP is significantly reduced. 
Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed who find a job through vacancy information we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive measures of active labor market policies are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment, and participation in further education during the 60 months following program entry. The analysis shows that all programs, except job creation schemes, have positive and long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant 'locking-in' effects. Measures to promote vocational training are found to increase the probability of attending education and training significantly, whereas all other programs have either no or a negative effect on training participation. Effect heterogeneity with respect to the pre-treatment education level shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels.
The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of propensity score matching (PSM) and inverse probability weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive as they do not require parametric assumptions, their practical implementation may become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and practical guidelines for future applied research. In contrast to previous publications, this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, outlining practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
7

Pingel, Ronnie. "Some Aspects of Propensity Score-based Estimators for Causal Inference." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-229341.

Abstract:
This thesis consists of four papers related to commonly used propensity score-based estimators for average causal effects. The first paper starts from the observation that researchers often have access to data containing many correlated covariates. We therefore study the effect of covariate correlation on the asymptotic variance of an inverse probability weighting estimator and a matching estimator. Under the assumptions of normally distributed covariates, a constant causal effect, and potential outcomes and a logit that are linear in the parameters, we show that correlation influences the asymptotic efficiency of the two estimators differently, in both direction and magnitude. Further, the strength of the confounding towards the outcome and the treatment plays an important role. The second paper extends the first by studying the estimators under the more realistic setting of an estimated propensity score. We also relax several assumptions made in the first paper and include the doubly robust estimator. Again, the results show that correlation may increase or decrease the variances of the estimators, but several aspects influence how: the choice of estimator, the strength of the confounding towards the outcome and the treatment, and whether the causal effect is constant or non-constant. The third paper concerns estimation of the asymptotic variance of a propensity score matching estimator. Simulations show that large gains in mean squared error can be made by properly selecting the smoothing parameters of the variance estimator, and that a residual-based local linear estimator may be more efficient for the asymptotic variance. The specification of the variance estimator proves crucial when evaluating the effect of right heart catheterisation: depending on the choice of smoothing parameters, we show either a negative effect on survival or no significant effect. In the fourth paper, we provide an analytic expression for the covariance matrix of logistic regression with normally distributed regressors. This paper is related to the others in that logistic regression is commonly used to estimate the propensity score.
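The inverse probability weighting estimator studied in the first paper can be sketched in a few lines. This is a minimal, hypothetical illustration with known propensity scores and invented toy data, not the thesis's implementation; in practice the propensity score would be estimated, e.g. by logistic regression, as the second paper considers.

```python
# Hypothetical illustration of the inverse probability weighting (IPW) estimator
# of the average treatment effect: mean[T*Y/e(X)] - mean[(1-T)*Y/(1-e(X))].
# Propensity scores e(X) are taken as known here.
def ipw_ate(outcomes, treatments, propensities):
    """IPW estimate of the average treatment effect."""
    n = len(outcomes)
    treated = sum(t * y / e for y, t, e in zip(outcomes, treatments, propensities)) / n
    control = sum((1 - t) * y / (1 - e) for y, t, e in zip(outcomes, treatments, propensities)) / n
    return treated - control

# Toy data: propensity 0.5 everywhere, treated outcomes one unit higher on average.
y = [1.0, 2.0, 0.0, 1.0]
t = [1, 1, 0, 0]
e = [0.5, 0.5, 0.5, 0.5]
print(ipw_ate(y, t, e))  # 1.0
```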
APA, Harvard, Vancouver, ISO, and other styles
8

Duan, Ran. "EVALUATING THE IMPACTS OF ANTIDEPRESSANT USE ON THE RISK OF DEMENTIA." UKnowledge, 2019. https://uknowledge.uky.edu/epb_etds/23.

Full text
Abstract:
Dementia is a clinical syndrome caused by neurodegeneration or cerebrovascular injury. Patients with dementia suffer from deterioration in memory, thinking, behavior, and the ability to perform everyday activities. Since there are no cures or disease-modifying therapies for dementia, there is much interest in identifying modifiable risk factors that may help prevent or slow the progression of cognitive decline. Medications are a common focus of this type of research. Importantly, according to a report from the Centers for Disease Control and Prevention (CDC), 19.1% of the population aged 60 and over reported taking antidepressants during 2011-2014, and this number continues to increase. However, antidepressant use among the elderly may be concerning because of potentially harmful effects on cognition. To assess the impact of antidepressants on the risk of dementia, we conducted three consecutive projects. In the first project, a retrospective cohort study using a marginal structural Cox proportional hazards regression model with inverse probability weighting (IPW) was conducted to evaluate the average causal effects of different classes of antidepressants on the risk of dementia. Potential causal effects of selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), atypical antidepressants (AAs), and tricyclic antidepressants (TCAs) on the risk of dementia were observed at the 0.05 significance level. Multiple sensitivity analyses supported these findings. Unmeasured confounding is a threat to the validity of causal inference methods. In evaluating the effects of antidepressants, it is important to consider how common comorbidities of depression, such as sleep disorders, may affect both the exposure to antidepressants and the onset of cognitive impairment.
In this dissertation, sleep apnea and rapid-eye-movement behavior disorder (RBD) were unmeasured, and thus uncontrolled, confounders for the association between antidepressant use and the risk of dementia. In the second project, a bias factor formula for two binary unmeasured confounders was derived in order to account for these variables. Monte Carlo analysis was implemented to estimate the distribution of the bias factor for each class of antidepressant. The effects of antidepressants on the risk of dementia, adjusted for both measured and unmeasured confounders, were then estimated. Sleep apnea and RBD attenuated the effect estimates for SSRIs, SNRIs, and AAs on the risk of dementia. In the third project, to account for potential time-varying confounding and observed time-varying treatment, a multi-state Markov chain with three transient states (normal cognition, mild cognitive impairment (MCI), and impaired but not MCI) and two absorbing states (dementia and death) was used to estimate the probabilities of moving between finite and mutually exclusive cognitive states. This analysis also allowed participants to recover from mild impairments (i.e., MCI, impaired but not MCI) to normal cognition, and accounted for the competing risk of death prior to dementia. These findings supported the results of the main analysis in the first project.
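Marginal structural models of the kind used in the first project rely on inverse-probability-of-treatment weights that accumulate over follow-up. A minimal sketch of stabilized weights, assuming hypothetical fitted treatment probabilities (in practice the numerator and denominator come from pooled logistic models for treatment given baseline covariates and given the full covariate/treatment history, respectively):

```python
# Hedged sketch of stabilized inverse-probability-of-treatment weights for a
# marginal structural model with time-varying treatment. p_num[t] is the
# probability of the treatment actually received given baseline covariates;
# p_den[t] is the same probability given the full history. Both are
# placeholders for fitted model predictions.
def stabilized_weights(p_num, p_den):
    """Cumulative product over time of P(A_t | baseline) / P(A_t | history)."""
    w, weights = 1.0, []
    for num, den in zip(p_num, p_den):
        w *= num / den
        weights.append(w)
    return weights

# One subject followed for three intervals (hypothetical fitted probabilities).
w = stabilized_weights([0.6, 0.7, 0.8], [0.5, 0.7, 0.9])
print(w)
```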
APA, Harvard, Vancouver, ISO, and other styles
9

Farmer, R. E. "Application of marginal structural models with inverse probability of treatment weighting in electronic health records to investigate the benefits and risks of first line type II diabetes treatments." Thesis, London School of Hygiene and Tropical Medicine (University of London), 2017. http://researchonline.lshtm.ac.uk/4646129/.

Full text
Abstract:
Background: Electronic health records (EHRs) provide opportunities to estimate the effects of type 2 diabetes (T2DM) treatments on outcomes such as cancer and cardiovascular disease. Marginal structural models (MSMs) with inverse probability of treatment weights (IPTW) can correctly estimate the causal effect of time-varying treatment in the presence of time-dependent confounders such as HbA1c. Dynamic MSMs can be used to compare dynamic treatment strategies. This thesis applies weighted MSMs and dynamic MSMs to explore the risks and benefits of early-stage T2DM treatments, and considers the practicalities and impact of using these models in a complex clinical setting with a challenging data source. Methods and Findings: A cohort of patients with newly diagnosed T2DM was identified from the Clinical Practice Research Datalink. MSMs with IPTW were used to estimate the causal effect of metformin monotherapy on cancer risk, and the effects of metformin and sulfonylurea monotherapies on the risks of MI, stroke, all-cause mortality, and HbA1c trajectory. Dynamic MSMs were implemented to compare HbA1c thresholds for treatment initiation with respect to risks of MI, stroke, all-cause mortality (ACM), and glucose control. No association was found between metformin use and cancer risk. Metformin and sulfonylureas led to better HbA1c control than diet alone, as expected, and there was some evidence of reduced MI risk with long-term metformin use. Changes in estimates between standard and weighted models were generally in the expected direction given the hypothesised time-dependent confounding. For stroke and ACM, results were less conclusive, with some suggestion of residual confounding. Higher HbA1c thresholds for treatment initiation reduced the likelihood of reaching target HbA1c, and there was a suggestion that higher initiation thresholds increased MI risk. Conclusions: Fitting weighted MSMs and dynamic MSMs was feasible using routine primary care data.
The models appeared to work well in controlling for strong time-dependent confounding with short-term outcomes; results for longer-term outcomes were less conclusive.
APA, Harvard, Vancouver, ISO, and other styles
10

Winter, Audrey. "Modèles d'appariement du greffon à son hôte, gestion de file d'attente et évaluation du bénéfice de survie en transplantation hépatique à partir de la base nationale de l'Agence de la Biomédecine." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS024/document.

Full text
Abstract:
Liver transplantation (LT) is the only life-saving procedure for liver failure. One of the major impediments to LT is the shortage of organs. To reduce organ shortage, donor selection criteria were expanded with the use of extended criteria donor (ECD) grafts. However, no unequivocal definition of these ECD livers was available, so an American Donor Risk Index (DRI) was developed to qualify such grafts. But to whom should these ECD grafts be given? Indeed, proper use of ECD grafts could reduce organ shortage. The aim of this thesis is to establish a new graft allocation system that would allow each graft to be transplanted into the candidate for whom LT yields the greatest survival benefit, and to evaluate the matching between donors and recipients taking ECD grafts into account. The first step was the external validation of the DRI and of the resultant Eurotransplant-DRI score. Calibration and discrimination, however, were not maintained in the French database. A new prognostic donor score, the DRI-Optimatch, was then developed using a Cox donor model adjusted for recipient covariates. The model was validated by bootstrapping, with optimism-corrected performance. The second step was to explore the matching between donors and recipients in order to allocate ECD grafts optimally. Donor and recipient criteria were considered as assessed by the DRI-Optimatch and the Model for End-stage Liver Disease (MELD), respectively. The sequential stratification method retained is inspired by the randomized controlled trial principle.
We then estimated, through hazard ratios, the survival benefit for different categories of MELD and DRI-Optimatch, compared against the group of candidates remaining on the waiting list (WL) and waiting for a graft of better quality (lower DRI-Optimatch). In the third step, we developed an allocation system based on survival benefit, combining the two main principles of graft allocation: urgency and utility. In this system, a graft is allocated to the patient with the greatest difference between predicted post-transplant lifetime and estimated waiting time for a specific donor. The model rests mainly on two Cox models, pre-LT and post-LT, in both of which the event of interest is the death of the patient. For the pre-LT model, dependent censoring was taken into account: on the WL, death is often censored by another event, transplantation. A method derived from Inverse Probability of Censoring Weighting was used to weight each observation. In addition, both longitudinal and survival data were used, and a partly conditional model, which estimates the effect of time-dependent covariates in the presence of dependent censoring, was adopted for the pre-LT model. After developing the new allocation system, the fourth and final step was to evaluate it through Discrete Event Simulation (DES).
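The Inverse Probability of Censoring Weighting idea used for the pre-LT model can be sketched as follows. The product-limit estimate of the censoring distribution and the toy follow-up data are illustrative assumptions, not the thesis's actual procedure:

```python
# Hedged sketch of inverse probability of censoring weighting (IPCW). An
# uncensored observation at time t gets weight 1 / K(t), where K(t) is a
# product-limit (Kaplan-Meier style) estimate of the probability of remaining
# uncensored beyond t. Times and censoring indicators are invented.
def censoring_survival(times, censored, t):
    """Product-limit estimate of P(still uncensored just after time t)."""
    at_risk = len(times)
    k = 1.0
    for time, c in sorted(zip(times, censored)):
        if time > t:
            break
        if c:  # censoring event: multiply by the conditional survival factor
            k *= (at_risk - 1) / at_risk
        at_risk -= 1
    return k

times = [1.0, 2.0, 3.0, 4.0]            # observed follow-up times
censored = [False, True, False, False]  # only the second subject is censored
weight = 1.0 / censoring_survival(times, censored, 2.5)
print(weight)  # IPCW weight for an event observed at t = 2.5
```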
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Inverse probability (IP) weighting"

1

Byker, Tanya, and Italo Gutierrez. Treatment Effects Using Inverse Probability Weighting and Contaminated Treatment Data: An Application to the Evaluation of a Government Female Sterilization Campaign in Peru. RAND Corporation, 2016. http://dx.doi.org/10.7249/wr1118-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Inverse probability (IP) weighting"

1

Zhang, Ying, and Mei-Jie Zhang. "Inference of Transition Probabilities in Multi-State Models Using Adaptive Inverse Probability Censoring Weighting Technique." In Statistical Modeling in Biomedical Research, 449–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-33416-1_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kudowa, Evaristar N., and Mavuto F. Mukaka. "Application of Multiple Imputation, Inverse Probability Weighting, and Double Robustness in Determining Blood Donor Deferral Characteristics in Malawi." In Modern Biostatistical Methods for Evidence-Based Global Health Research, 457–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11012-2_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Breslow, Norman E., and Noel Weiss. "Inverse Probability Weighting in Nested Case-Control Studies." In Handbook of Statistical Methods for Case-Control Studies, 351–71. Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9781315154084-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Inverse probability (IP) weighting"

1

Leite, Walter. "Quasi-Experimental Evaluation of Usage of Virtual Learning Environments: A Latent Class Approach With Inverse Probability of Treatment Weighting." In 2019 AERA Annual Meeting. Washington DC: AERA, 2019. http://dx.doi.org/10.3102/1444726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Rui, and John B. Ferris. "Terrain Gridding Using a Stochastic Weighting Function." In ASME 2011 Dynamic Systems and Control Conference and Bath/ASME Symposium on Fluid Power and Motion Control. ASMEDC, 2011. http://dx.doi.org/10.1115/dscc2011-6085.

Full text
Abstract:
The development of new stochastic terrain gridding methods is necessitated by new tire and vehicle modeling applications. Currently, grid node locations in the horizontal plane are assumed to be known, and only the uncertainty in the vertical height estimates is modeled. This work modifies the current practice of weighting the importance of a particular measured data point (the terrain height at some horizontal location) by the inverse of the distance between the grid node and that point. A new weighting function is developed to account for the error in the horizontal position of the grid nodes. The geometry of the problem is described and the probability distribution is developed in steps. Although the solution cannot be determined in closed form, an estimate of the median distance is developed to within 1% error. This more complete stochastic definition of the terrain can then be used for advanced tire modeling and vehicle simulation.
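The conventional inverse-distance weighting that the paper modifies can be sketched as follows; the coordinates and the choice of distance power are invented for illustration, not taken from the paper:

```python
# Hedged sketch of conventional inverse-distance weighting for terrain gridding:
# a grid node's height is the weighted average of measured heights, with weights
# proportional to 1/distance in the horizontal plane.
import math

def idw_height(node, points, power=1):
    """node: (x, y) grid location; points: (x, y, z) height measurements."""
    num = den = 0.0
    for x, y, z in points:
        d = math.hypot(x - node[0], y - node[1])
        if d == 0.0:
            return z  # node coincides with a measurement point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

pts = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]
h = idw_height((1.0, 0.0), pts)
print(h)  # equidistant measurements -> simple average: 2.0
```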
APA, Harvard, Vancouver, ISO, and other styles
3

Pons, Marion, Sylvie Chevret, Karine Briot, Maria-Antonietta D’agostino, Christian Roux, Maxime Dougados, and Anna Moltó. "FRI0373 5-YEARS TREATMENT EFFECT OF TNF ALPHA INHIBITOR IN EARLY AXIAL SPONDYLOARTHRITIS AND ASSOCIATED FACTORS: AN INVERSE PROBABILITY WEIGHTING ANALYSIS OF THE DESIR COHORT." In Annual European Congress of Rheumatology, EULAR 2019, Madrid, 12–15 June 2019. BMJ Publishing Group Ltd and European League Against Rheumatism, 2019. http://dx.doi.org/10.1136/annrheumdis-2019-eular.3518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Matsui, Koshi, Mitsuharu Earashi, Takuya Nagata, Akemi Yoshikawa, Wataru Fukushima, Zensei Nozaki, Yasuko Tanada, et al. "Abstract P2-15-12: Effect of Bevacizumab and Eribulin for metastatic breast cancer in the real world evaluated using the propensity score matching analysis (PSMA) and inverse probability of treatment weighting analysis (IPTWA)." In Abstracts: 2019 San Antonio Breast Cancer Symposium; December 10-14, 2019; San Antonio, Texas. American Association for Cancer Research, 2020. http://dx.doi.org/10.1158/1538-7445.sabcs19-p2-15-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mellado Artigas, Ricard, Bruno Ferreyro, Alfredo Gea, Jordi Mercadal, Gerard Angeles, María Hernández-Sanz, and Carlos Ferrando. "Late Breaking Abstract - Effect of a conservative approach to the start of mechanical ventilation on ventilator-free days in coronavirus disease 2019 (COVID-19) pneumonia after adjustment by inverse probability of treatment weighting." In ERS International Congress 2020 abstracts. European Respiratory Society, 2020. http://dx.doi.org/10.1183/13993003.congress-2020.2005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Xiang, Hongsheng Liu, Beiji Shi, Zidong Wang, Kang Yang, Yang Li, Min Wang, et al. "A Universal PINNs Method for Solving Partial Differential Equations with a Point Source." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/533.

Full text
Abstract:
In recent years, deep learning technology has been used to solve partial differential equations (PDEs), among which the physics-informed neural networks (PINNs) method has emerged as a promising approach for solving both forward and inverse PDE problems. PDEs with a point source, expressed as a Dirac delta function in the governing equations, are mathematical models of many physical processes. However, they cannot be solved directly by the conventional PINNs method because of the singularity introduced by the Dirac delta function. In this paper, we propose a universal solution to this problem based on three novel techniques. Firstly, the Dirac delta function is modeled as a continuous probability density function to eliminate the singularity at the point source; secondly, a lower-bound-constrained uncertainty weighting algorithm is proposed to balance the physics-informed loss terms between the point source area and the remaining areas; and thirdly, a multi-scale deep neural network with a periodic activation function is used to improve accuracy and convergence speed. We evaluate the proposed method on three representative PDEs, and the experimental results show that our method outperforms existing deep learning based methods with respect to accuracy, efficiency, and versatility.
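The first of the three techniques, replacing the Dirac delta with a continuous probability density, can be illustrated with a one-dimensional Gaussian. The width eps below is a hypothetical choice for illustration, not the paper's setting:

```python
# Hedged sketch: replace the Dirac delta point source with a narrow continuous
# probability density so the PINN loss stays finite at the source location.
# A 1-D Gaussian with standard deviation eps is used here.
import math

def smoothed_delta(x, x0=0.0, eps=0.05):
    """Gaussian density centered at the point source x0."""
    return math.exp(-((x - x0) ** 2) / (2.0 * eps ** 2)) / (eps * math.sqrt(2.0 * math.pi))

# The smoothed source still carries (approximately) unit mass: a Riemann sum
# over [-1, 1] should be close to 1.
xs = [i * 0.001 - 1.0 for i in range(2001)]
mass = sum(smoothed_delta(x) for x in xs) * 0.001
print(round(mass, 3))  # ≈ 1.0
```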
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Inverse probability (IP) weighting"

1

Schling, Maja, and Nicolás Pazos. El impacto de subsidios inteligentes en la producción agrícola: evidencia innovadora de Argentina utilizando datos de encuesta y de teledetección. Banco Interamericano de Desarrollo, August 2022. http://dx.doi.org/10.18235/0004352.

Full text
Abstract:
This study evaluates the impact of the Programa de Desarrollo Rural y Agricultura Familiar (PRODAF), a smart-subsidy project that benefited family farmers in northeastern Argentina. The evaluation draws on two complementary data sources. The first is an agricultural household survey of 898 producers (534 treated and 364 control) conducted after project completion. The second uses the georeferenced locations of plots to measure agricultural yields with satellite data for a subsample of 195 producers over a 10-year period. Using inverse probability weighting, we find that PRODAF increased the rate of technology adoption by 21 percentage points and increased access to credit by 47 percentage points. Overcoming these barriers allowed beneficiary producers to increase the value of their sales and their net income, although impacts varied across the four prioritized value chains. By contrast, the analysis detected a significant impact on yields only for the citrus chain, potentially due to the type of technology adopted in that chain. Finally, we construct the Normalized Difference Vegetation Index (NDVI) to proxy productivity in the cotton and citrus chains. Applying an event-study method, we confirm that technology adoption is a complex process that develops its full impact on yields only in the second to third year after treatment. We also confirm that satellite data are an effective tool that estimates yield changes accurately and can serve to monitor and evaluate this type of intervention contemporaneously and at low cost.
APA, Harvard, Vancouver, ISO, and other styles
