Dissertations / Theses on the topic 'Marginal probability'

Consult the top 26 dissertations / theses for your research on the topic 'Marginal probability.'

1

Tejeda, Abiezer. "Correcting Errors Due to Species Correlations in the Marginal Probability Density Evolution." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1472.

Abstract:
Synthetic biology is an emerging field that integrates and applies engineering design methods to biological systems. Its aim is to make biology an "engineerable" science. Over the years, biologists and engineers alike have abstracted biological systems into functional models that behave similarly to electric circuits, thus the creation of the subfield of genetic circuits. Mathematical models have been devised to simulate the behavior of genetic circuits in silico. Most models can be classified into deterministic and stochastic models. The work in this dissertation is for stochastic models. Although ordinary differential equation (ODE) models are generally amenable to simulating genetic circuits, they wrongly assume that a system's chemical species vary continuously and deterministically, thus making erroneous predictions when applied to highly stochastic systems. Stochastic methods have been created to take into account the variability, unpredictability, and discrete nature of molecular populations. The most popular stochastic method is the stochastic simulation algorithm (SSA). These methods provide a single path out of the overall pool of possible system behaviors. A common practice is to run several independent SSA simulations and take the average of the aggregate. This approach can perform well in low-noise systems. However, it produces incorrect results when applied to networks that can take multiple modes or that are highly stochastic. Incremental SSA or iSSA is a set of algorithms that have been created to obtain aggregate information from multiple SSA runs. The marginal probability density evolution (MPDE) algorithm is a subset of iSSA which seeks to reveal the most likely "qualitative" behavior of a genetic circuit by providing a marginal probability function or statistical envelope for every species in the system, under the appropriate conditions. MPDE assumes that species are statistically independent given the rest of the system. This assumption is satisfied by some systems. However, most of the interesting biological systems, both synthetic and in nature, have correlated species forming conservation laws. Species correlation imposes constraints on the system that are broken by MPDE. This work seeks to devise a mathematical method and algorithm to correct conservation-constraint errors in MPDE. Furthermore, it aims to identify these constraints a priori and efficiently deliver a trustworthy result faithful to the true behavior of the system.
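
As a quick illustration of the practice the abstract critiques, here is a minimal sketch (toy rates and a one-species birth-death process, not the author's iSSA/MPDE code) of averaging many independent SSA runs into a mean-and-envelope summary; this works for a unimodal system like this one, but becomes misleading for multimodal or highly stochastic networks.

```python
# Average many independent Gillespie SSA runs of X -> X+1 (rate k)
# and X -> X-1 (rate g*X), then form a mean +/- std "envelope".
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(k=10.0, g=0.1, x0=0, t_end=50.0):
    """One Gillespie SSA path; returns jump times and states."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k, g * x          # propensities of birth and death
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

# Aggregate many runs on a common time grid ("average of the aggregate").
grid = np.linspace(0.0, 50.0, 101)
runs = []
for _ in range(200):
    times, states = ssa_birth_death()
    idx = np.searchsorted(times, grid, side="right") - 1
    runs.append(states[idx])           # the state holds between jumps
runs = np.asarray(runs)

mean, std = runs.mean(axis=0), runs.std(axis=0)
print(f"t=50: mean={mean[-1]:.1f}, envelope width ~ 2*std={2*std[-1]:.1f}")
```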
2

Nåtman, Jonatan. "The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385502.

Abstract:
Propensity score methods are increasingly being used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data is common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW) in combination with Cox proportional hazards models for estimating marginal hazard ratios. Focus is on the performance for different sample sizes and censoring rates, aspects which have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently had better performance in terms of bias and MSE compared to NNM. For the smallest examined sample size with 60 subjects, the use of IPTW led to estimates with bias below 15 %. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption. As a result, censoring the data led to an increase in bias.
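
A minimal sketch of the IPTW estimator the thesis evaluates, on synthetic data (the lifelines and statsmodels packages are assumed available, and all parameter values are invented): weight subjects by the inverse probability of their observed treatment, then fit an unadjusted weighted Cox model to read off a marginal hazard ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)                             # measured confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))  # treatment depends on x
# Conditional event times with treatment effect and confounding by x
time = rng.exponential(1 / np.exp(-0.5 * treat + 0.8 * x))
cens_t = np.quantile(time, 0.8)                    # ~20% administrative censoring
event = (time < cens_t).astype(int)
time = np.minimum(time, cens_t)

# Propensity scores and stabilized inverse probability of treatment weights
ps = sm.GLM(treat, sm.add_constant(x), family=sm.families.Binomial()).fit().fittedvalues
w = np.where(treat == 1, treat.mean() / ps, (1 - treat.mean()) / (1 - ps))

df = pd.DataFrame({"time": time, "event": event, "treat": treat, "w": w})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "exp(coef)"]])          # exp(coef) ~ marginal HR
```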
3

Ortiz, Huerta Gonzalo. "Determinantes de la brecha de género en la inclusión financiera del Perú durante el 2016." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/626393.

Abstract:
The main objective of this research is to identify the main determinants that influence the gender gap in financial inclusion in Peru during 2016. To this end, the National Survey of Demand for Financial Services and Level of Financial Culture (ENIF, 2016) is used, in which 6,303 randomly selected individuals were surveyed, forming a representative sample of Peru, and discrete choice models (logit and probit) are estimated. In addition, the marginal impacts of socioeconomic variables on the possession of savings accounts and credit cards are calculated for both men and women. The results show that educational level is the variable that generates the largest increase in the probability of accessing the financial system, although not in a strongly gender-differentiated way, while the possession of assets, kinship relationship, residence and marital status generate smaller impacts for women.
Research paper
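
For readers unfamiliar with the method, here is a hedged sketch of the marginal-effects computation described in the abstract, on synthetic data rather than the ENIF 2016 survey (all variable names and coefficients below are invented): a logit model of account ownership with average marginal effects estimated separately by gender.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 6303
df = pd.DataFrame({
    "female": rng.binomial(1, 0.5, n),
    "educ": rng.integers(0, 18, n),       # years of education
    "assets": rng.binomial(1, 0.4, n),    # owns assets
})
logit_p = -2.0 + 0.15 * df.educ + 0.6 * df.assets - 0.3 * df.female
df["account"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

for sex, group in df.groupby("female"):
    X = sm.add_constant(group[["educ", "assets"]])
    fit = sm.Logit(group["account"], X).fit(disp=0)
    ame = fit.get_margeff(at="overall")   # average marginal effects
    print("women" if sex else "men")
    print(ame.summary_frame())
```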
4

Börsum, Jakob. "Estimating Causal Effects Of Relapse Treatment On The Risk For Acute Myocardial Infarction Among Patients With Diffuse Large B-Cell Lymphoma." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447241.

Abstract:
This empirical register study intends to estimate average causal effects of relapse treatment on the risk for acute myocardial infarction (AMI) among patients with Diffuse Large B-Cell Lymphoma (DLBCL) within the potential outcome framework. The report includes a brief introduction to causal inference and survival analysis and mentions specific causal parameters of interest that will be estimated. A cohort of 2887 Swedish DLBCL patients between 2007 and 2014 was included in the study, of whom 560 suffered a relapse. The relapse treatment is hypothesised to be cardiotoxic and to induce an increased risk of heart disease. The identifiability assumptions need to hold to estimate average causal effects and are assessed in this report. The patient cohort is weighted using inverse probability of treatment and censoring weights, and potential marginal survival curves are estimated from marginal structural Cox models. The resulting point estimate indicates a protective causal effect of relapse treatment on AMI, but estimated bootstrap confidence intervals suggest no significant effect at the 5% significance level.
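
A schematic sketch of the combined weights described above, on synthetic data; note that real inverse-probability-of-censoring weights are time-varying, which this single-time simplification ignores, and the covariate model is invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2887
x = rng.normal(size=n)                          # baseline covariate
relapse = rng.binomial(1, 1 / (1 + np.exp(-0.4 * x)))
censored = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.3 * x))))

X = sm.add_constant(x)
ps = sm.GLM(relapse, X, family=sm.families.Binomial()).fit().fittedvalues
pc = sm.GLM(1 - censored, X, family=sm.families.Binomial()).fit().fittedvalues

iptw = np.where(relapse == 1, relapse.mean() / ps,
                (1 - relapse.mean()) / (1 - ps))  # stabilized treatment weights
ipcw = 1.0 / pc                                   # 1 / P(uncensored | x)
w = iptw * ipcw                                   # input to the weighted Cox fit
print("weight range:", w.min().round(2), "-", w.max().round(2))
```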
5

Davis, Brett Andrew. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data." The Australian National University. Faculty of Economics and Commerce, 2003. http://thesis.anu.edu.au./public/adt-ANU20040806.104137.

Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points. That is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three, we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These estimates for a unit time increment can be used to calculate estimates of expected future occupation time, conditional on an individual’s state at initial point of observation, in the different states of X.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5) the estimated occupation times will be conditional on both gender and initial age of individuals.

The longitudinal data envisaged in Chapter Two is that obtained from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation. This is usually achieved using a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in their Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
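
A hedged illustration of the interpolation idea in Chapter Three (not the thesis's estimator): a one-step transition matrix can be recovered from an observed multi-step matrix by taking a matrix root and projecting back onto the set of stochastic matrices. The three-state matrix below is invented.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

P_true = np.array([[0.90, 0.08, 0.02],    # e.g. employed / unemployed / inactive
                   [0.30, 0.60, 0.10],
                   [0.05, 0.15, 0.80]])
P3_observed = np.linalg.matrix_power(P_true, 3)  # what a 3-cycle survey sees

P1 = fractional_matrix_power(P3_observed, 1.0 / 3.0).real
P1 = np.clip(P1, 0.0, None)
P1 /= P1.sum(axis=1, keepdims=True)              # renormalise rows to sum to 1

print(np.round(P1 - P_true, 4))                  # near-zero recovery error
```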
6

Farmer, R. E. "Application of marginal structural models with inverse probability of treatment weighting in electronic health records to investigate the benefits and risks of first line type II diabetes treatments." Thesis, London School of Hygiene and Tropical Medicine (University of London), 2017. http://researchonline.lshtm.ac.uk/4646129/.

Abstract:
Background: Electronic healthcare records (EHRs) provide opportunities to estimate the effects of type two diabetes (T2DM) treatments on outcomes such as cancer and cardiovascular disease. Marginal structural models (MSMs) with inverse probability of treatment weights (IPTW) can correctly estimate the causal effect of time-varying treatment in the presence of time-dependent confounders such as HbA1c. Dynamic MSMs can be used to compare dynamic treatment strategies. This thesis applies weighted MSMs and dynamic MSMs to explore risks and benefits of early-stage T2DM treatments, and considers the practicalities/impact of using these models in a complex clinical setting with a challenging data source. Methods and Findings: A cohort of patients with newly diagnosed T2DM was identified from the Clinical Practice Research Datalink. MSMs with IPTW were used to estimate the causal effect of metformin monotherapy on cancer risk, and the effects of metformin and sulfonylurea monotherapies on risks of MI, stroke, all-cause mortality, and HbA1c trajectory. Dynamic MSMs were implemented to compare HbA1c thresholds for treatment initiation on risks of MI, stroke, all-cause mortality (ACM) and glucose control. No association was found between metformin use and cancer risk. Metformin and sulfonylureas led to better HbA1c control than diet only, as expected, and there was some evidence of reduced MI risk with long-term metformin use. Changes in estimates between standard models and weighted models were generally in the expected direction given hypothesised time-dependent confounding. For stroke and ACM, results were less conclusive, with some suggestions of residual confounding. Higher HbA1c thresholds for treatment initiation reduced the likelihood of reaching target HbA1c, and there was a suggestion that higher initiation thresholds increased MI risk. Conclusions: Fitting weighted MSMs and dynamic MSMs was feasible using routine primary care data. The models appeared to work well in controlling for strong time-dependent confounding with short-term outcomes; results for longer-term outcomes were less conclusive.
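
A schematic sketch of the stabilized weights behind such marginal structural models, on synthetic longitudinal data (the treatment/HbA1c model below is invented for illustration): at each visit, the weight multiplies in P(treatment | past treatment) divided by P(treatment | past treatment, HbA1c).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, T = 500, 4
rows = []
for i in range(n):
    a_prev, hba1c = 0, rng.normal(7.5, 1.0)
    for t in range(T):
        p = 1 / (1 + np.exp(-(-3 + 0.5 * hba1c + 2 * a_prev)))
        a = rng.binomial(1, p)
        rows.append(dict(id=i, t=t, a=a, a_prev=a_prev, hba1c=hba1c))
        hba1c += rng.normal(-0.3 * a, 0.3)     # treatment lowers HbA1c
        a_prev = a
df = pd.DataFrame(rows)

num = smf.logit("a ~ a_prev", df).fit(disp=0).predict(df)          # numerator
den = smf.logit("a ~ a_prev + hba1c", df).fit(disp=0).predict(df)  # denominator
ratio = np.where(df.a == 1, num / den, (1 - num) / (1 - den))
df["sw"] = df.assign(r=ratio).groupby("id")["r"].cumprod()  # product over visits
print(df.sw.describe()[["mean", "min", "max"]])             # mean should be ~1
```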
7

Toll, Kristopher C. "Using a Discrete Choice Experiment to Estimate Willingness to Pay for Location Based Housing Attributes." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7657.

Abstract:
In 1993, a travel study was conducted along the Wasatch front in Utah (Research Systems Group INC, 2013). The main purpose of this study was to assess travel behavior to understand the needs for future growth in Utah. Since then, the Research Service Group (RSG) conducted a new study in 2012 to understand current travel preferences in Utah. This survey, called the Residential Choice Stated Preference survey, asked respondents to make ten choice comparisons between two hypothetical homes. Each home in the choice comparison was described by different attributes: type of neighborhood, distance from important destinations, distance from access to public transport, street design, parking availability, commute distance to work, and price. The survey was designed to determine the extent to which Utah residents prefer alternative household attributes in a choice selection. Each attribute contained multiple characteristic levels that were randomly combined to define each alternative home in each choice comparison. Those choices can be explained by Random Utility Theory. Multinomial logistic regression is used to estimate changes in utility when alternative attribute levels are present in a choice comparison. Using the coefficient estimate for price, a marginal willingness to pay (MWTP) for each attribute level is calculated. This paper uses two different approaches to obtain MWTP estimates. Method One uses housing and rent price to recode the price variable in dollar terms as defined in the discrete choice experiment. Method Two recodes the price variable as an average ten percent change in home value to extrapolate a one-time payment for homes. As a result, we found that it is possible to obtain willingness to pay estimates using both methods, and the resulting interpretations in dollar terms became more relatable. Metropolitan planning organizations can use these results to understand how residents perceive home value in dollar terms in the context of location-based attributes for homes.
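
The MWTP computation itself is a one-line ratio: with linear utility, the marginal willingness to pay for an attribute level is minus its coefficient divided by the price coefficient. A tiny sketch with made-up coefficients (not the thesis's estimates):

```python
# MWTP = -b_attr / b_price for a linear utility V = ... + b_attr*attr + b_price*price.
b_price = -0.004            # utility per dollar of annual housing cost (invented)
coefs = {                   # invented attribute-level coefficients
    "walkable_neighborhood": 0.35,
    "transit_within_10min": 0.22,
    "short_commute": 0.41,
}
for attr, b in coefs.items():
    mwtp = -b / b_price
    print(f"{attr}: MWTP ~ ${mwtp:,.0f}")
```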
8

Chen, Dandan. "Amended Estimators of Several Ratios for Categorical Data." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2218.

Abstract:
Point estimation of several association parameters in categorical data is presented. Typically, a constant is added to the frequency counts before the association measure is computed. We study the accuracy of these adjusted point estimators based on frequentist and Bayesian methods respectively. In particular, amended estimators for the ratio of independent Poisson rates, relative risk, odds ratio, and the ratio of marginal binomial proportions are examined in terms of bias and mean squared error.
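
As a concrete instance of the amendment described, here is a sketch of an amended odds-ratio estimator with the classic 0.5 adjustment (the constant and the table values are illustrative, not from the thesis): adding a constant to each cell keeps the estimator finite when a cell count is zero.

```python
def amended_odds_ratio(a, b, c, d, k=0.5):
    """Odds ratio from a 2x2 table [[a, b], [c, d]] with k added to each cell."""
    return ((a + k) * (d + k)) / ((b + k) * (c + k))

print(amended_odds_ratio(12, 5, 3, 20))   # ordinary-looking table
print(amended_odds_ratio(12, 0, 3, 20))   # zero cell: still finite
```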
9

Liljedahl, Ida, and Ebba Rondahl. "Driving factors for growing companies." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275667.

Abstract:
Finding a way to forecast which characteristics make a fast-growing company would be useful, both for companies trying to succeed and for investment companies wanting to make successful investments. This thesis aims to develop a model describing the relationship between 9 chosen characteristics, based on real data from 2015 concerning companies that were awarded a DI Gasell in 2018. The final result shows that half of the variables chosen to form the model have little to no relationship with the response variable EBIT margin. However, the final model consists of four variables that correlate with the response variable with statistical significance. The explanatory power is low and implies that forecasting company growth probably cannot be done using this model. The four regressors that correlate with EBIT margin are Year of Incorporation, Operating revenue, Number of subsidiaries, and SNI code. Although a forecast cannot be performed, other insights are obtained from the research. Companies with SNI code 4, which corresponds to operating in the economic sector, affect EBIT margin more positively than companies in other sectors. Number of subsidiaries correlates fairly linearly with the response variable. Contrary to previous research, CEO characteristics are shown to be the least important factor contributing to profitability.
10

Baker, David. "Martingales avec marginales spécifiées." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00760040.

Abstract:
This thesis describes methods for constructing martingales with specified marginals. The first collection of methods proceeds by quantization, that is, by approximating a measure by another measure whose support consists of a finite number of points. We introduce a quantization method that preserves the convex order, the partial order on the space of measures that compares them in terms of their relative dispersion. This new quantization method has the advantage that if two measures admit a martingale transition, then the quantized measures admit one as well. This is not the case for the quantization method usually used in probability (L2 quantization). For the quantized measures we present several methods of constructing a martingale transition. The first proceeds by linear programming. The second proceeds by constructing matrices with given diagonal and spectrum. The third proceeds by the Chacon and Walsh algorithm. In a second part, the thesis presents a new solution to the Skorokhod embedding problem. In a third part, the thesis studies the construction of continuous-time martingales with given marginals. Constructions are given using the Brownian sheet. Other constructions are given by modifying a method developed by Albin; the martingales constructed in this way possess a scaling property. In an appendix, some consequences of this theory for the risk management of Asian options, with respect to their sensitivity to volatility and maturity, are established.
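
A hedged sketch of the linear programming construction mentioned above, for toy quantized measures in convex order (not taken from the thesis): we search for a non-negative coupling with the two prescribed marginals and a zero-conditional-drift (martingale) constraint.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([-1.0, 1.0]); mu = np.array([0.5, 0.5])               # coarse measure
y = np.array([-2.0, 0.0, 2.0]); nu = np.array([0.25, 0.5, 0.25])   # finer measure

nx, ny = len(x), len(y)
A_eq, b_eq = [], []
for i in range(nx):                  # row marginals: sum_j pi[i,j] = mu[i]
    row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = 1
    A_eq.append(row); b_eq.append(mu[i])
for j in range(ny):                  # column marginals: sum_i pi[i,j] = nu[j]
    col = np.zeros(nx * ny); col[j::ny] = 1
    A_eq.append(col); b_eq.append(nu[j])
for i in range(nx):                  # martingale: sum_j pi[i,j]*(y[j]-x[i]) = 0
    row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = y - x[i]
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c=np.zeros(nx * ny), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print(res.x.reshape(nx, ny))         # a feasible martingale coupling
```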
11

Wuilmart, Adam, and Erik Harrysson. "Assessing the Operational Value Creation by the Private Equity Industry in the Nordics." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275693.

Abstract:
More and more capital is being directed towards the private equity industry. As a result, private equity owned firms make up an increasingly large share of the economy. Therefore, it is becoming more important to understand how the operational performance of firms changes under private equity ownership. This study looked at how operational efficiency, in terms of EBIT margin, changed over a three-year period after a private equity acquisition in the Nordic market. The study found that companies with an initial positive EBIT margin behaved differently from companies with an initial negative EBIT margin, and therefore two separate models were created. Companies that had a positive EBIT margin before being bought by a private equity firm saw an average decrease in EBIT margin of 1.14 percentage points. For firms with an initial negative EBIT margin, a private equity acquisition led to an average increase in EBIT margin of 1.99 percentage points compared to the reference data. This study thus shows that private equity ownership affects the operational efficiency of companies. Moreover, it shows that one should distinguish between PE ownership in profitable growth cases and turn-around cases of inefficient companies, and that the impact of PE ownership on operational profitability can be vastly different depending on the nature of the acquisition in this regard.
12

Bergquist, Emanuel, and Gustav Thunström. "Propensity score matchning för estimering av en marginell kausal effekt med matchat fall-kontrolldata." Thesis, Umeå universitet, Statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160784.

Abstract:
When a case-control study has been carried out, it may be of interest to conduct a secondary analysis that studies the effect of the case-control study's outcome on some other variable in the population. In such cases, the outcome of the case-control study is viewed as a treatment in the secondary analysis, and the effect of this variable on a new outcome is examined. In observational studies based on case-control data, there often exist systematic differences between the case and control groups. If these differences in background variables between the groups affect both the treatment and the outcome, they will bias the estimate of the causal effect. One way of controlling for these background variables is to match on the propensity score. This thesis consists of a simulation study in which the causal effect of treatment on the treated is estimated by propensity score matching in a secondary analysis of matched case-control data. The aim is to investigate the properties of the matching estimator when individuals' propensity scores are estimated with a weighted logistic regression model, compared to when they are estimated with a logistic regression model without weights. The weighted logistic regression model means that the true prevalence of the treatment in the population and its subgroups is known and included in the model, which makes the propensity score estimates unbiased. In the logistic regression model without weights, the true prevalence is not included when the propensity score is estimated, and the propensity score estimates will not be unbiased. The properties compared are bias, standard deviation and MSE. The results showed no reduction in MSE when the prevalence of the treatment in the population was included in the estimation of the observations' propensity scores. The estimator that did not include the treatment prevalence resulted in lower bias and MSE, but higher standard deviation. The bias of both estimators tended to zero as the sample size increased.
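
A simplified sketch of the comparison, on synthetic data: scores estimated with and without weights built from a known population prevalence. The weighting below is a plain prevalence correction, a deliberate simplification of the thesis's setup, and all numbers are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 4000
x = rng.normal(size=n)
case = rng.binomial(1, 1 / (1 + np.exp(-(-2.5 + 0.8 * x))))  # rare outcome
pop_prev = case.mean()                                       # "known" prevalence

# Case-control sampling: keep all cases and an equal number of controls
idx = np.concatenate([np.flatnonzero(case == 1),
                      rng.choice(np.flatnonzero(case == 0), case.sum(),
                                 replace=False)])
xs, cs = x[idx], case[idx]

# Weights restore the population case/control proportions in the sample
w = np.where(cs == 1, pop_prev / cs.mean(), (1 - pop_prev) / (1 - cs.mean()))

X = sm.add_constant(xs)
unweighted = sm.GLM(cs, X, family=sm.families.Binomial()).fit()
weighted = sm.GLM(cs, X, family=sm.families.Binomial(), freq_weights=w).fit()
print("unweighted intercept:", unweighted.params[0].round(2))
print("weighted intercept:  ", weighted.params[0].round(2))  # closer to -2.5
```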
13

Freddi, Davide. "Sintesi generativa multi-documento con discriminazione della rilevanza mediante probabilità marginale: una soluzione neurale end-to-end per la letteratura medica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Despite scientific progress on many Natural Language Processing (NLP) tasks, multi-document summarization remains less explored, leaving ample room for improvement. In the medical and scientific community, this task finds application in the generation of systematic literature reviews, which are articles that summarize many studies on the same topic. This thesis explores the main technologies in NLP and text generation, focusing in particular on BlenderBot 2, an architecture developed by Facebook AI that represents the state of the art among chatbot models. The technologies studied are then extended and adapted to the task of abstractive multi-document summarization in the medical field, in order to address the automatic generation of systematic literature reviews, proposing a new approach that aims to solve some of the most common problems of the models used in this area.
14

Nenna, Luca. "Numerical Methods for Multi-Marginal Optimal Transportation." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED017/document.

Abstract:
In this thesis we aim at giving a general numerical framework to approximate solutions to optimal transport (OT) problems. The general idea is to introduce an entropic regularization of the initial problems. The regularized problem corresponds to the minimization of a relative entropy with respect to a given reference measure. Indeed, this is equivalent to finding the projection of the coupling with respect to the Kullback-Leibler divergence. This allows us to make use of the Bregman/Dykstra algorithm and solve several variational problems related to OT. We are especially interested in solving multi-marginal optimal transport (MMOT) problems arising in Physics, such as in Fluid Dynamics (e.g. incompressible Euler equations à la Brenier) and in Quantum Physics (e.g. Density Functional Theory). In these cases we show that the entropic regularization plays a more important role than a simple numerical stabilization. Moreover, we also give some important results concerning the existence and characterization of optimal transport maps (e.g. fractal maps) for MMOT.
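
A minimal sketch of the entropic regularization and alternating KL projections (Sinkhorn/Bregman iterations) for the two-marginal case, with invented toy marginals and a quadratic cost:

```python
import numpy as np

n = 50
x = np.linspace(0, 1, n)
mu = np.exp(-(x - 0.3) ** 2 / 0.02); mu /= mu.sum()    # source marginal
nu = np.exp(-(x - 0.7) ** 2 / 0.05); nu /= nu.sum()    # target marginal

C = (x[:, None] - x[None, :]) ** 2       # quadratic cost
eps = 1e-2                               # regularization strength
K = np.exp(-C / eps)                     # Gibbs kernel

u, v = np.ones(n), np.ones(n)
for _ in range(500):                     # alternating scalings = KL projections
    u = mu / (K @ v)
    v = nu / (K.T @ u)

pi = u[:, None] * K * v[None, :]         # approximate optimal coupling
print("marginal errors:", np.abs(pi.sum(1) - mu).max(), np.abs(pi.sum(0) - nu).max())
print("transport cost ~", (pi * C).sum())
```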
15

Afonso, Lutcy Menezes. "Correcting for attrition in panel data using inverse probability weighting : an application to the european bank system." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/8155.

Abstract:
Master's in Applied Econometrics and Forecasting
This thesis discusses techniques to correct for the potentially biasing effects of missing data. We apply the techniques to an economic model that explains the Net Interest Margin (NIM) of banks, using data from the 15 countries that are part of the European Union (EU15) banking system. The variables that describe banks cover the period 2004 to 2010 and follow those used in Carbó-Valverde and Fernández (2007). In addition, macroeconomic variables are used as regressors. The selection that occurs as a consequence of missing values in these regressor variables is dealt with by means of Inverse Probability Weighting (IPW) techniques. The weights are applied to a GMM estimator for a dynamic panel data model that would have been consistent in the absence of missing data.
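
A compact sketch of the IPW idea on synthetic data: model the probability that a unit is observed, then weight the complete cases by the inverse of that probability. Here the correction is applied to a simple mean rather than the thesis's GMM moments, and all numbers are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
size = rng.normal(size=n)                               # bank characteristic
nim = 2.0 + 0.5 * size + rng.normal(scale=0.5, size=n)  # net interest margin
observed = rng.binomial(1, 1 / (1 + np.exp(-(1.0 - 0.8 * size))))  # MAR attrition

# Response model on a variable observed for everyone
p_obs = sm.GLM(observed, sm.add_constant(size),
               family=sm.families.Binomial()).fit().fittedvalues

mask = observed == 1
naive = nim[mask].mean()                                # complete-case mean
ipw = np.average(nim[mask], weights=1 / p_obs[mask])    # IPW-corrected mean
print(f"truth ~ {nim.mean():.3f}, complete-case {naive:.3f}, IPW {ipw:.3f}")
```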
16

Haiman, George. "Comparaison en probabilité et presque sûre du comportement des extrêmes de certaines suites stationnaires à celui de la suite des variables aléatoires equidistribuée de même loi marginale." Paris 6, 1986. http://www.theses.fr/1986PA066091.

Abstract:
This study concerns the probabilistic properties of the extremes (partial maxima and minima) of a sequence of stationary random variables. Many works have been devoted to comparing the extremes of a stationary sequence with those of the sequence of independent random variables with the same marginal distribution. We show that for certain sequences this comparison can be considerably extended.
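
For background, the comparison alluded to is often phrased through the extremal index θ; a standard statement of the relation (illustrative background, not a formula from the thesis):

```latex
% M_n = \max(X_1,\dots,X_n) for a stationary sequence with marginal cdf F,
% \theta \in (0,1] the extremal index; \tilde M_n is the i.i.d. analogue.
P(M_n \le u_n) \approx F(u_n)^{n\theta},
\qquad
P(\tilde M_n \le u_n) = F(u_n)^{n}.
```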
17

Filstroff, Louis. "Contributions to probabilistic non-negative matrix factorization - Maximum marginal likelihood estimation and Markovian temporal models." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0143.

Abstract:
Non-negative matrix factorization (NMF) has become a popular dimensionality reduction technique, and has found applications in many different fields, such as audio signal processing, hyperspectral imaging, or recommender systems. In its simplest form, NMF aims at finding an approximation of a non-negative data matrix (i.e., with non-negative entries) as the product of two non-negative matrices, called the factors. One of these two matrices can be interpreted as a dictionary of characteristic patterns of the data, and the other one as activation coefficients of these patterns. This low-rank approximation is traditionally retrieved by optimizing a measure of fit between the data matrix and its approximation. As it turns out, for many choices of measures of fit, the problem can be shown to be equivalent to the joint maximum likelihood estimation of the factors under a certain statistical model describing the data. This leads us to an alternative paradigm for NMF, where the learning task revolves around probabilistic models whose observation density is parametrized by the product of non-negative factors. This general framework, coined probabilistic NMF, encompasses many well-known latent variable models of the literature, such as models for count data. In this thesis, we consider specific probabilistic NMF models in which a prior distribution is assumed on the activation coefficients, but the dictionary remains a deterministic variable. The objective is then to maximize the marginal likelihood in these semi-Bayesian NMF models, i.e., the integrated joint likelihood over the activation coefficients. This amounts to learning the dictionary only; the activation coefficients may be inferred in a second step if necessary. We proceed to study in greater depth the properties of this estimation process. In particular, two scenarios are considered. In the first one, we assume the independence of the activation coefficients sample-wise. Previous experimental work showed that dictionaries learned with this approach exhibited a tendency to automatically regularize the number of components, a favorable property which was left unexplained. In the second one, we lift this standard assumption, and consider instead Markov structures to add statistical correlation to the model, in order to better analyze temporal data.
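
A schematic sketch of the semi-Bayesian objective described above, for a toy Poisson-Gamma model (not the thesis's algorithm): the marginal likelihood of a deterministic dictionary W, with the Gamma-distributed activations H integrated out by plain Monte Carlo.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(8)
F, N, K = 6, 20, 3
W_true = rng.gamma(1.0, 1.0, size=(F, K))
H_true = rng.gamma(1.0, 1.0, size=(K, N))
V = rng.poisson(W_true @ H_true)               # count data, V ~ Poisson(WH)

def log_marginal_likelihood(W, V, n_samples=2000):
    """log p(V | W) = log E_H[ prod Poisson(V | WH) ], H ~ Gamma(1, 1) i.i.d."""
    K = W.shape[1]
    logps = np.empty(n_samples)
    for s in range(n_samples):
        H = rng.gamma(1.0, 1.0, size=(K, V.shape[1]))
        lam = W @ H
        logps[s] = (V * np.log(lam) - lam - gammaln(V + 1)).sum()
    return logsumexp(logps) - np.log(n_samples)

print("true W :", log_marginal_likelihood(W_true, V))
print("random :", log_marginal_likelihood(rng.gamma(1.0, 1.0, (F, K)), V))
```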
18

Berberovic, Adnan, and Alexander Eriksson. "A Multi-Factor Stock Market Model with Regime-Switches, Student's T Margins, and Copula Dependencies." Thesis, Linköpings universitet, Produktionsekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143715.

Abstract:
Investors constantly seek information that provides an edge over the market. One of the conventional methods is to find factors which can predict asset returns. In this study we improve the Fama and French Five-Factor model with regime switches, Student's t distributions and copula dependencies. We also add price momentum as a sixth factor and add a one-day lag to the factors. The regime switches are obtained from a Hidden Markov Model with conditional Student's t distributions. For the return process we use factor data as input, Student's t distributed residuals, and Student's t copula dependencies. To fit the copulas, we develop a novel approach based on the Expectation-Maximisation algorithm. The results are promising, as the quantiles for most of the portfolios show a good fit to the theoretical quantiles. Using a sophisticated Stochastic Programming model, we back-test the predictive power over a 26-year period out-of-sample. Furthermore, we analyse the performance of different factors during different market regimes.
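
A small sketch of the dependence construction used for the return process, with illustrative parameters rather than fitted ones: sampling from a Student's t copula and then applying Student's t marginal quantile functions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])   # copula correlation (invented)
nu_copula, nu_margin = 5.0, 4.0             # degrees of freedom (invented)

# 1. Sample from the multivariate t that underlies the t copula
L = np.linalg.cholesky(corr)
z = rng.standard_normal((10000, 2)) @ L.T
g = rng.chisquare(nu_copula, size=(10000, 1)) / nu_copula
t_samples = z / np.sqrt(g)

# 2. Map to uniforms through the t cdf (this is the copula)
u = stats.t.cdf(t_samples, df=nu_copula)

# 3. Apply Student's t marginal quantile functions to get "returns"
returns = stats.t.ppf(u, df=nu_margin) * 0.01
print("sample corr:", np.corrcoef(returns.T)[0, 1].round(3))
```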
19

Ieria, Alessandro. "Valutazione sperimentale delle prestazioni di un sistema di smart metering a 169 Mhz in ambiente urbano." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8208/.

Abstract:
The performance of three different smart metering links was evaluated experimentally in three environments (urban, suburban and dense urban) at a frequency of 169 MHz. By computing the Building Penetration Loss and the Path Loss values, it was possible to establish the maximum concentrator-to-meter link distances as the coverage probability varies.
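
A back-of-the-envelope sketch of how path loss and building penetration loss bound the link distance; every link-budget number below is an assumption for illustration, not a measurement from the thesis.

```python
import math

f_mhz = 169.0
tx_power_dbm = 27.0        # transmit power (assumption)
sensitivity_dbm = -120.0   # receiver sensitivity (assumption)
bpl_db = 15.0              # building penetration loss (assumption)
n = 3.5                    # path-loss exponent for urban propagation (assumption)

max_pl_db = tx_power_dbm - sensitivity_dbm - bpl_db   # allowed path loss
# Log-distance model: PL(d) = PL(1 m) + 10 n log10(d), free-space PL at 1 m
pl_1m = 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(1e-3)  # d in km
d_max_m = 10 ** ((max_pl_db - pl_1m) / (10 * n))
print(f"PL(1 m) = {pl_1m:.1f} dB, max distance ~ {d_max_m:.0f} m")
```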
20

Nikolaou, Nikolaos. "Cost-sensitive boosting : a unified approach." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/costsensitive-boosting-a-unified-approach(ae9bb7bd-743e-40b8-b50f-eb59461d9d36).html.

Abstract:
In this thesis we provide a unifying framework for two decades of work in an area of Machine Learning known as cost-sensitive Boosting algorithms. This area is concerned with the fact that most real-world prediction problems are asymmetric, in the sense that different types of errors incur different costs. Adaptive Boosting (AdaBoost) is one of the most well-studied and utilised algorithms in the field of Machine Learning, with a rich theoretical depth as well as practical uptake across numerous industries. However, its inability to handle asymmetric tasks has been the subject of much criticism. As a result, numerous cost-sensitive modifications of the original algorithm have been proposed. Each of these has its own motivations, and its own claims to superiority. With a thorough analysis of the literature 1997-2016, we find 15 distinct cost-sensitive Boosting variants - discounting minor variations. We critique the literature using four powerful theoretical frameworks: Bayesian decision theory, the functional gradient descent view, margin theory, and probabilistic modelling. From each framework, we derive a set of properties which must be obeyed by boosting algorithms. We find that only 3 of the published AdaBoost variants are consistent with the rules of all the frameworks - and even they require their outputs to be calibrated to achieve this. Experiments on 18 datasets, across 21 degrees of cost asymmetry, all support the hypothesis - showing that once calibrated, the three variants perform equivalently, outperforming all others. Our final recommendation - based on theoretical soundness, simplicity, flexibility and performance - is to use the original AdaBoost algorithm albeit with a shifted decision threshold and calibrated probability estimates. The conclusion is that novel cost-sensitive boosting algorithms are unnecessary if proper calibration is applied to the original.
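
A condensed sketch of that recommendation using scikit-learn (dataset and costs invented): calibrate AdaBoost's scores, then cut at the cost-derived threshold c_FP / (c_FP + c_FN).

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

c_fp, c_fn = 1.0, 5.0                   # a false negative costs 5x more
threshold = c_fp / (c_fp + c_fn)        # Bayes-optimal cut on calibrated P(y=1|x)

model = CalibratedClassifierCV(AdaBoostClassifier(n_estimators=100),
                               method="sigmoid")
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
y_hat = (proba >= threshold).astype(int)

cost = (c_fp * ((y_hat == 1) & (y_te == 0)).sum()
        + c_fn * ((y_hat == 0) & (y_te == 1)).sum())
print(f"threshold={threshold:.2f}, total cost={cost:.0f}")
```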
21

Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.

Abstract:
This thesis presents work around 3 themes: dark energy, gravitational waves and Bayesian inference. Both dark energy and gravitational wave physics are not yet well constrained. They present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation of state reconstruction analysis finds that the data favours the vacuum dark energy equation of state w = -1 model. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ dark energy regime of w < -1, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift z = 0.2 due to baryonic acoustic oscillation and supernovae data constraints, whilst cosmic microwave background radiation and Lyman-α forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation and with an additional dark energy component are tested and shown to be competitive to the vacuum dark energy model by Bayesian model selection analysis: that they are not ruled out is believed to be largely due to poor data quality for deciding between existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational wave tests of general relativity. An existing test in the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently. Compared to computing evidences, the method presented provides an approximate 100 times reduction in the number of likelihood calculations required to compute evidences at a given accuracy. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem specific: further research is needed.
22

Grabaskas, David. "Efficient Approaches to the Treatment of Uncertainty in Satisfying Regulatory Limits." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345464067.

23

Aradian, Achod André. "Quelques problèmes de dynamique d'interfaces molles." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2001. http://tel.archives-ouvertes.fr/tel-00001386.

Abstract:
This theoretical thesis presents four lines of research on the dynamics of soft interfaces. (1) Drops and liquid films on porous substrates: We studied the deformation of a drop subjected simultaneously to liquid suction and to pinning of its contact line with the substrate. We also considered the problem of the entrainment of a liquid film on a porous surface drawn out of a bath: the film has a finite height, which we computed, and exhibits a non-trivial structure near the contact line. (2) Cross-linking and interdiffusion at the interface between two polymers: Forming a joint between two pieces of polymer requires good interdiffusion of the chains, which can, however, be considerably hindered when a cross-linking agent is introduced (in order to strengthen the final material). We modelled the resulting competition, showing that there exists a simple control parameter allowing the system to be optimized, and giving predictions for the adhesion energy expected in two limiting regimes. (3) Granular flows: We give the complete analytical scenario of the course of an avalanche on a sandpile, taking into account the effect of a linear velocity profile in the rolling layer. Among the predictions, we found that the maximum thickness should vary as the square root of the size of the pile. (4) Dynamics of vertical soap films: Current proposals concerning the drainage mechanism of these films invoke hydrodynamic instabilities developing at the edge of the film ("marginal regeneration"). We sought to determine precisely their precursor state: generically, the profile forms a pinched region, whose typical dimensions we computed analytically; this region could gather the characteristics required for the instabilities to emerge.
24

Laborde, Maxime. "Systèmes de particules en interaction, approche par flot de gradient dans l'espace de Wasserstein." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED014/document.

Abstract:
Since 1998 and the seminal work of Jordan, Kinderlehrer and Otto, it is well known that a large class of parabolic equations can be seen as gradient flows in the Wasserstein space. This thesis is devoted to extensions of this theory to equations and systems which do not have exactly a gradient flow structure. We study different kinds of couplings. First, we treat the case of nonlocal interactions in the drift. Then, we study cross-diffusion systems which model congestion for several species. We are also interested in reaction-diffusion systems, such as diffusive prey-predator systems or tumor growth models. Finally, we introduce a new class of systems where the interaction is given by a multi-marginal transport problem. In many cases, we give numerical simulations to illustrate our theoretical results.
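
For context, the minimizing-movement (JKO) step underlying this gradient-flow view can be stated as follows (standard background, not a formula specific to the thesis): each time step solves a variational problem in the Wasserstein space.

```latex
% One JKO step of size \tau for an energy \mathcal{F} on probability measures:
\rho_{k+1}^{\tau} \in \operatorname*{argmin}_{\rho}
\; \mathcal{F}(\rho) + \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho_k^{\tau}\right)
```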
25

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
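
A minimal sketch of the filtering-versus-smoothing contrast at the core of the thesis: a 1-D constant-velocity Kalman filter followed by a Rauch-Tung-Striebel smoother, whose non-causal estimates also use future measurements (all model parameters below are invented).

```python
import numpy as np

rng = np.random.default_rng(10)
dt, T = 0.1, 100
F = np.array([[1, dt], [0, 1]])          # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # position-only measurements
Q = 0.01 * np.eye(2); R = np.array([[0.5]])

truth = np.zeros((T, 2)); truth[0] = [0, 1]
for k in range(1, T):
    truth[k] = F @ truth[k - 1]
z = truth[:, 0] + rng.normal(0, np.sqrt(R[0, 0]), T)

# Forward (causal) Kalman filter
xs, Ps, xps, Pps = [], [], [], []
x, P = np.zeros(2), np.eye(2)
for k in range(T):
    xp, Pp = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ Pp @ H.T + R
    K = Pp @ H.T @ np.linalg.inv(S)
    x = xp + (K @ (z[k] - H @ xp)).ravel()          # update
    P = (np.eye(2) - K @ H) @ Pp
    xps.append(xp); Pps.append(Pp); xs.append(x); Ps.append(P)

# Backward (non-causal) RTS smoothing pass
xs_s = xs.copy()
for k in range(T - 2, -1, -1):
    G = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
    xs_s[k] = xs[k] + G @ (xs_s[k + 1] - xps[k + 1])

rmse = lambda e: np.sqrt(np.mean(np.square(e)))
print("filter RMSE  :", rmse(np.array(xs)[:, 0] - truth[:, 0]))
print("smoother RMSE:", rmse(np.array(xs_s)[:, 0] - truth[:, 0]))
```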
26

Davis, Brett Andrew. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data." Phd thesis, 2003. http://hdl.handle.net/1885/46631.

Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002). ...