Theses on the topic « Non-parametric and semiparametric model »

Consult the 50 best theses for your research on the topic « Non-parametric and semiparametric model ».

You can also download the full text of each publication in PDF format and read its abstract online when this information is included in the metadata.

1

Yan, Boping. « Double kernel non-parametric estimation in semiparametric econometric models ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35817.pdf.

2

Mays, James Edward. « Model robust regression : combining parametric, nonparametric, and semiparametric methods ». Diss., Virginia Polytechnic Institute and State University, 1995. http://hdl.handle.net/10919/49937.

Abstract:
In obtaining a regression fit to a set of data, ordinary least squares regression depends directly on the parametric model formulated by the researcher. If this model is incorrect, a least squares analysis may be misleading. Alternatively, nonparametric regression (kernel or local polynomial regression, for example) has no dependence on an underlying parametric model, but instead depends entirely on the distances between regressor coordinates and the prediction point of interest. This procedure avoids the necessity of a reliable model, but in using no information from the researcher, may fit to irregular patterns in the data. The proper combination of these two regression procedures can overcome their respective problems. Considered is the situation where the researcher has an idea of which model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two methods of model robust regression are developed and compared in this context. These methods involve parametric fits to the data and nonparametric fits to either the data or residuals. The two fits are then combined in the most efficient proportions via a mixing parameter. Performance is based on bias and variance considerations.
Ph. D.
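As a rough sketch of the mixing idea described in this abstract (the notation below is ours, not taken from the thesis): a parametric fit and a nonparametric fit, applied to the data or to the residuals, are blended through a mixing parameter,

    \hat{y}(x) = (1 - \lambda)\,\hat{f}(x;\hat{\beta}) + \lambda\,\hat{g}(x), \qquad 0 \le \lambda \le 1,

with \lambda chosen from the data to balance the bias of a misspecified parametric model against the variance of the purely nonparametric fit.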
3

Zhang, Tianyang. « Partly parametric generalized additive model ». Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/913.

Abstract:
In many scientific studies, the response variable bears a generalized nonlinear regression relationship with a certain covariate of interest, which may, however, be confounded by other covariates with unknown functional form. We propose a new class of models, the partly parametric generalized additive model (PPGAM) for doing generalized nonlinear regression with the confounding covariate effects adjusted nonparametrically. To avoid the curse of dimensionality, the PPGAM specifies that, conditional on the covariates, the response distribution belongs to the exponential family with the mean linked to an additive predictor comprising a nonlinear parametric function that is of main interest, plus additive, smooth functions of other covariates. The PPGAM extends both the generalized additive model (GAM) and the generalized nonlinear regression model. We propose to estimate a PPGAM by the method of penalized likelihood. We derive some asymptotic properties of the penalized likelihood estimator, including consistency and asymptotic normality of the parametric estimator of the nonlinear regression component. We propose a model selection criterion for the PPGAM, which resembles the BIC. We illustrate the new methodologies by simulations and real applications. We have developed an R package PPGAM that implements the methodologies expounded herein.
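As a rough illustration of the model structure described above (our notation, not the author's), the PPGAM links the conditional mean to a parametric nonlinear term of main interest plus smooth additive terms for the confounding covariates,

    g\{E(Y \mid x, z_1, \dots, z_q)\} = f(x;\beta) + \sum_{j=1}^{q} s_j(z_j),

and estimation proceeds by maximizing a roughness-penalized log-likelihood, e.g. \ell(\beta, s_1, \dots, s_q) - \tfrac{1}{2}\sum_{j} \lambda_j \int \{s_j''(t)\}^2\,dt, where the \lambda_j are smoothing parameters.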
4

Song, Rui, Shikai Luo, Donglin Zeng, Hao Helen Zhang, Wenbin Lu and Zhiguo Li. « Semiparametric single-index model for estimating optimal individualized treatment strategy ». Institute of Mathematical Statistics, 2017. http://hdl.handle.net/10150/625783.

Abstract:
Different from the standard treatment discovery framework, which is used for finding single treatments for a homogeneous group of patients, personalized medicine involves finding therapies that are tailored to each individual in a heterogeneous group. In this paper, we propose a new semiparametric additive single-index model for estimating an individualized treatment strategy. The model assumes a flexible and nonparametric link function for the interaction between treatment and predictive covariates. We estimate the rule via monotone B-splines and establish the asymptotic properties of the estimators. Both simulations and a real data application demonstrate that the proposed method has competitive performance.
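One common way to formalize the kind of model described in this abstract (a sketch in our own notation, not necessarily the authors' exact specification) is

    E(Y \mid X = x, A = a) = \mu(x) + a\, g(x^{\top}\beta),

where A is the treatment indicator, g is an unspecified link for the treatment-covariate interaction (here estimated with monotone B-splines), and the estimated individualized rule assigns treatment when \hat{g}(x^{\top}\hat{\beta}) > 0.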
5

Starnes, Brett Alden. « Asymptotic Results for Model Robust Regression ». Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30244.

Abstract:
Since the mid 1980's many statisticians have studied methods for combining parametric and nonparametric estimates to improve the quality of fits in a regression problem. Notably in 1987, Einsporn and Birch proposed the Model Robust Regression estimate (MRR1), in which estimates of the parametric function, f, and the nonparametric function, g, were combined in a straightforward fashion via the use of a mixing parameter, λ. This technique was studied extensively at small samples and was shown to be quite effective at modeling various unusual functions. In 1995, Mays and Birch developed the MRR2 estimate as an alternative to MRR1. This model involved first forming the parametric fit to the data, and then adding in an estimate of g according to the lack of fit demonstrated by the error terms. Using small samples, they illustrated the superiority of MRR2 to MRR1 in most situations. In this dissertation we have developed asymptotic convergence rates for both MRR1 and MRR2 in OLS and GLS (maximum likelihood) settings. In many of these settings, it is demonstrated that the user of MRR1 or MRR2 achieves the best convergence rates available regardless of whether or not the model is properly specified. This is the "Golden Result of Model Robust Regression". It turns out that the selection of the mixing parameter is paramount in determining whether or not this result is attained.
Ph. D.
6

Abdel-Salam, Abdel-Salam Gomaa. « Profile Monitoring with Fixed and Random Effects using Nonparametric and Semiparametric Methods ». Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29387.

Abstract:
Profile monitoring is a relatively new approach in quality control, best used where the process data follow a profile (or curve) at each time period. The essential idea of profile monitoring is to model the profile via some parametric, nonparametric, or semiparametric method and then monitor the fitted profiles or the estimated random effects over time to determine if there have been changes in the profiles. The majority of previous studies in profile monitoring focused on the parametric modeling of either linear or nonlinear profiles, with both fixed and random effects, under the assumption of correct model specification. Our work considers those cases where the parametric model for the family of profiles is unknown or at least uncertain. Consequently, we consider monitoring profiles via two techniques: a nonparametric technique and a semiparametric procedure that combines both parametric and nonparametric profile fits, a procedure we refer to as model robust profile monitoring (MRPM). Also, we incorporate a mixed model approach to both the parametric and nonparametric model fits. For the mixed effects models, the MMRPM method is an extension of the MRPM method which incorporates a mixed model approach to both parametric and nonparametric model fits to account for the correlation within profiles and to deal with the collection of profiles as a random sample from a common population. For each case, we formulated two Hotelling's T² statistics, one based on the estimated random effects and one based on the fitted values, and obtained the corresponding control limits. In addition, we used two different formulas for the estimated variance-covariance matrix: one based on the pooled sample variance-covariance matrix estimator and a second based on the estimated variance-covariance matrix obtained from successive differences. A Monte Carlo study was performed to compare the integrated mean square errors (IMSE) and the probability of signal of the parametric, nonparametric, and semiparametric approaches. Both correlated and uncorrelated error structure scenarios were evaluated for varying amounts of model misspecification, number of profiles, number of observations per profile, shift location, and in- and out-of-control situations. The semiparametric (MMRPM) method was competitive with, and often clearly superior to, the parametric and nonparametric methods over all levels of misspecification in both the uncorrelated and correlated scenarios. For a correctly specified model, the IMSE and the simulated probability of signal for the parametric and the MMRPM methods were identical (or nearly so). For the severe model misspecification case, the nonparametric and MMRPM methods were identical (or nearly so). For the mild model misspecification case, the MMRPM method was superior to the parametric and nonparametric methods. Therefore, this simulation supports the claim that the MMRPM method is robust to model misspecification. In addition, the MMRPM method performed better for data sets with correlated error structure. Also, the performance of the nonparametric and MMRPM methods improved as the number of observations per profile increased, since more observations over the same range of X generally enable more knots to be used by the penalized spline method, resulting in greater flexibility and improved fits in the nonparametric curves and, consequently, the semiparametric curves.
The parametric, nonparametric and semiparametric approaches were utilized for fitting the relationship between torque produced by an engine and engine speed in the automotive industry. Then, we used a Hotelling's T² statistic based on the estimated random effects to conduct Phase I studies to determine the outlying profiles. The parametric, nonparametric and semiparametric methods showed that the process was stable. Despite the fact that all three methods reach the same conclusion regarding the in-control status of each profile, the nonparametric and MMRPM results provide a better description of the actual behavior of each profile. Thus, the nonparametric and MMRPM methods give the user greater ability to properly interpret the true relationship between engine speed and torque for this type of engine and an increased likelihood of detecting unusual engines in future production. Finally, we conclude that the nonparametric and semiparametric approaches performed better than the parametric approach when the user's model is misspecified. The case study demonstrates that the proposed nonparametric and semiparametric methods are more efficient, flexible and robust to model misspecification for Phase I profile monitoring in a practical application. Thus, our methods are robust to the common problem of model misspecification. We also found that both the nonparametric and the semiparametric methods result in charts with good ability to detect changes in Phase I data, and in charts with easily calculated control limits. The proposed methods provide greater flexibility and efficiency than current parametric methods used in Phase I profile monitoring that rely on correct model specification, an unrealistic situation in many practical problems in industrial applications.
Ph. D.
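A minimal sketch of the kind of statistic described above, assuming a matrix with one row of fitted values (or estimated random effects) per profile; the control limits and the successive-differences variance estimator mentioned in the abstract are not reproduced here.

    import numpy as np

    def hotelling_t2(profiles):
        # profiles: (n, p) array, one row of fitted values (or estimated
        # random effects) per profile; returns one T^2 value per profile,
        # measured against the overall mean profile.
        xbar = profiles.mean(axis=0)
        S = np.cov(profiles, rowvar=False)      # pooled sample variance-covariance estimator
        S_inv = np.linalg.pinv(S)
        d = profiles - xbar
        return np.einsum('ij,jk,ik->i', d, S_inv, d)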
7

Margevicius, Seunghee P. « Modeling of High-Dimensional Clinical Longitudinal Oxygenation Data from Retinopathy of Prematurity ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523022165691473.

8

Bijou, Mohammed. « Qualité de l'éducation, taille des classes et mixité sociale : Un réexamen à partir des méthodes à variables instrumentales et semi-paramétriques sur données multiniveaux - Cas du Maroc - ». Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL2004.

Abstract:
The objective of this thesis is to examine the quality of the Moroccan education system using data from the 2011 TIMSS and PIRLS programmes. The thesis is structured around three chapters. The first chapter examines the influence of individual student and school characteristics on school performance, as well as the important role of the school environment (effects of class size and social composition). In the second chapter, we seek to estimate the optimal class size that ensures widespread success for all students at both levels, namely the fourth year of primary school and the second year of lower secondary school. The third chapter studies the relationship between the social and economic composition of the school and academic performance, while demonstrating the role of social mix in student success. In order to study this relationship, we mobilize different econometric approaches, applying a multilevel model with correction for the problem of endogeneity (chapter 1), a hierarchical semi-parametric model (chapter 2) and a contextual hierarchical semi-parametric model (chapter 3). The results show that academic performance is determined by several factors that are intrinsic to the student and also contextual. Indeed, a smaller class size and a school with a mixed social composition are the two essential elements for a favourable environment and assured learning for all students. According to our results, governments should give priority to reducing class size by limiting it to a maximum of 27 students. In addition, it is necessary to consider making the school map more flexible in order to promote social mixing at school. The results obtained allow a better understanding of the Moroccan school system in its qualitative aspect and the justification of relevant educational policies to improve the quality of the Moroccan education system.
9

Avramidis, Panagiotis. « Estimation of the volatility function : non-parametric and semiparametric approaches ». Thesis, London School of Economics and Political Science (University of London), 2004. http://etheses.lse.ac.uk/1793/.

Abstract:
We investigate two problems in modelling time series data that exhibit conditional heteroscedasticity. The first part deals with the local maximum likelihood estimation of volatility functions which are in the form of conditional variance functions. The existing estimation procedures yield plausible results. Yet, they often fail to take into account special features of the data at the cost of reduced accuracy of prediction. More precisely, many of the parametric and nonparametric conditional variance models ignore the fact that the error distribution departs significantly from the Gaussian distribution. We propose a novel nonparametric estimation procedure that replaces the popular local least squares method with local maximum likelihood estimation. Intuitively, using information from the error distribution improves the estimators and therefore increases the accuracy in prediction. This conclusion is proved theoretically and illustrated by numerical examples. In addition, we show that the proposed estimator adapts asymptotically to the error distribution as well as to the mean regression function. Applications with real data examples demonstrate the potential use of the adaptive maximum likelihood estimator in financial risk management. The second part deals with variable selection for a particular class of semiparametric models known as partial linear models. The existing selection methods are computationally demanding. The proposed selection procedure is computationally more efficient. In particular, if P and Q are the numbers of linear and nonparametric candidate regressors, respectively, then the proposed procedure reduces the order of the number of variable subsets to be investigated from 2^(Q+P) to 2^Q + 2^P. At the same time, it maintains all the good properties of existing methods, such as consistency. The latter is proven theoretically and confirmed numerically by simulated examples. The results are presented for the mean regression function, while the generalization to the conditional variance function is discussed separately.
10

Tachet des Combes, Rémi. « Non-parametric model calibration in finance ». PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658766.

Abstract:
Consistently fitting vanilla option surfaces is an important issue when it comes to modelling in finance. In three different models (local and stochastic volatility, local correlation, and hybrid local volatility with stochastic rates), this calibration boils down to the resolution of a nonlinear partial integro-differential equation. In a first part, we give existence results for solutions of the calibration equation. They are based upon fixed point methods in Hölder spaces and short-time a priori estimates. We then apply those existence results to the three models previously mentioned and give the calibration obtained when solving the PDE numerically. At last, we focus on the algorithm used for the resolution: an ADI predictor/corrector scheme that needs to be modified to take into account the nonlinear term. We also study an instability phenomenon that occurs in certain cases for the local and stochastic volatility model. Using Hadamard's theory, we try to offer a theoretical explanation of the instability.
11

Martinez-Sanchis, Elena. « Essays on identification and estimation of structural parametric and semiparametric models in microeconomics ». Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1444804/.

Abstract:
This thesis focuses on identification and estimation of structural parametric and semi-parametric models in microeconometrics. The analysis of the conditions under which, in the context of an econometric model, data can be informative about the parameters of interest of an economic process is essential and must be of high priority in any econometric work. When considering models with which to identify interesting features, emphasis should be placed on imposing the minimum set of restrictions in order to achieve identification, since inappropriate restrictions may lead to inconsistent estimates of the parameters of interest. For this reason, some attention has been paid in the literature to relaxing parametric distributional assumptions on the unobservables or functional forms of the relationships between observables and unobservables. To begin with, I examine how the parameters of interest of a general class of models can be identified and then estimated when not all of the relevant variables are jointly observed in the same dataset. To do so, the existence of an additional data set with information on both the missing variables and on some common variables in the original data set is necessary. I then move on to an analysis of the identification of the preference parameters in a discrete choice demand model in which individuals only derive utility from the characteristics of the goods they consume. I discuss how this particular model makes the estimation of these parameters feasible without imposing distributional assumptions on the errors, even if the number of goods in the choice set is very large. Finally, I consider the comparison of nonparametric regression curves between different samples. I propose to estimate the parameters that explain the differences between the conditional mean functions by using an estimator developed in the semiparametric literature which avoids the computational problems faced by previously proposed estimators.
12

McNeney, William Bradley. « Asymptotic efficiency in semiparametric models with non-i.i.d. data ». Thesis, University of Washington (UW restricted), 1998. http://hdl.handle.net/1773/9604.

13

Bartcus, Marius. « Bayesian non-parametric parsimonious mixtures for model-based clustering ». Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0010/document.

Abstract:
This thesis focuses on statistical learning and multi-dimensional data analysis. It particularly focuses on unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, in the context of maximum likelihood estimation via the EM algorithm, as well as in the Bayesian context of maximum a posteriori estimation via Markov Chain Monte Carlo (MCMC) sampling techniques. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework, particularly for the analysis of high-dimensional data. Then, we investigate non-parametric Bayesian mixtures, which are based on general flexible processes such as the Dirichlet process and the Chinese Restaurant Process. This non-parametric model formulation is relevant both for learning the model and for dealing with the issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and the number of mixture components are simultaneously learned from the data. The selection of the model structure is performed using Bayes factors. These models, by their non-parametric and sparse formulation, are useful for the analysis of large data sets when the number of classes is undetermined and increases with the data, and when the dimension is high. The models are validated on simulated data and standard real data sets. Then, they are applied to a difficult real problem of automatic structuring of complex bioacoustic data from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
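As a small illustration of the Chinese Restaurant Process mentioned in this abstract (a generic sketch, not the authors' sampler), the following draws cluster assignments whose number of distinct clusters grows with the data, which is the property exploited when the number of mixture components is learned jointly with the model.

    import numpy as np

    def crp_assignments(n, alpha, seed=0):
        # Chinese Restaurant Process: item i joins an existing cluster with
        # probability proportional to its size, or a new cluster with
        # probability proportional to the concentration parameter alpha.
        rng = np.random.default_rng(seed)
        labels, counts = [], []
        for _ in range(n):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()
            k = rng.choice(len(probs), p=probs)
            if k == len(counts):
                counts.append(1)      # open a new cluster (mixture component)
            else:
                counts[k] += 1
            labels.append(k)
        return labels

    print(crp_assignments(10, alpha=1.0))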
14

Pinto, Mariana da Paixao. « A mixed parametric and non parametric internal model to underwriting risk for life insurance ». Pontifícia Universidade Católica do Rio de Janeiro, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=31951@1.

Abstract:
Following the bankruptcies that occurred in the insurance sector in recent decades, a movement arose to develop mathematical models capable of assisting in the management of risk, the so-called internal models. In Brazil, SUSEP, following the worldwide trend, demanded that companies interested in operating in the country use an internal model for underwriting risk. As a result, developing an internal model has become vital for insurance companies in the country. The model proposed in this work, illustrated for the underwriting risk of life insurance, is based on Markov chains and the Central Limit Theorem for the parametric part, and on Monte Carlo simulation for the non-parametric part. In its structure, the dependence between the policyholder and dependants was considered. An application to masked real data was made to analyse the model. The minimum required capital calculated using the hybrid method was compared with the value obtained using only the parametric method. Then the sensitivity of the model was investigated.
15

Hoare, Armando. « Parametric, non-parametric and statistical modeling of stony coral reef data ». [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002470.

16

Hidalgo Moreno, Francisco Javier. « Estimation of semiparametric econometric time-series models with non-linear or heteroscedastic disturbances ». Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/2581/.

Abstract:
This thesis proposes and justifies parameter estimates in two semiparametric models for economic time series. In both models the parametric component consists of a linear regression model. The nonparametric aspect consists of relevant features of the distribution function of the disturbances. In the first model the disturbances follow a possibly non-linear autoregressive model, with autoregression function of unknown form. In the second model the disturbances are both linearly serially correlated and heteroscedastic, the serial correlation and heteroscedasticity being of unknown form. For both models estimates of the regression coefficients of generalized least squares type are proposed, and shown to have the same limiting distribution as estimates based on correct parameterization of the relevant features of the disturbances. Monte-Carlo simulation evidence of the finite sample performance of both estimates is reported.
17

Heinz, Daniel. « Hyper Markov Non-Parametric Processes for Mixture Modeling and Model Selection ». Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/11.

Abstract:
Markov distributions describe multivariate data with conditional independence structures. Dawid and Lauritzen (1993) extended this idea to hyper Markov laws for prior distributions. A hyper Markov law is a distribution over Markov distributions whose marginals satisfy the same conditional independence constraints. These laws have been used for Gaussian mixtures (Escobar, 1994; Escobar and West, 1995) and contingency tables (Liu and Massam, 2006; Dobra and Massam, 2009). In this paper, we develop a family of non-parametric hyper Markov laws that we call hyper Dirichlet processes, combining the ideas of hyper Markov laws and non-parametric processes. Hyper Dirichlet processes are joint laws with Dirichlet process laws for particular marginals. We also describe a more general class of Dirichlet processes that are not hyper Markov, but still contain useful properties for describing graphical data. The graphical Dirichlet processes are simple Dirichlet processes with a hyper Markov base measure. This class allows an extremely straightforward application of existing Dirichlet knowledge and technology to graphical settings. Given the widespread use of Dirichlet processes, there are many applications of this framework waiting to be explored. One broad class of applications, known as Dirichlet process mixtures, has been used for constructing mixture densities such that the underlying number of components may be determined by the data (Lo, 1984; Escobar, 1994; Escobar and West, 1995). I consider the use of the new graphical Dirichlet process in this setting, which imparts a conditional independence structure inside each component. In other words, given the component or cluster membership, the data exhibit the desired independence structure. We discuss two applications. Expanding on the work of Escobar and West (1995), we estimate a non-parametric mixture of Markov Gaussians using a Gibbs sampler. Secondly, we employ the Mode-Oriented Stochastic Search of Dobra and Massam (2009) for determining a suitable conditional independence model, focusing on contingency tables. In general, the mixing induced by a Dirichlet process does not drastically increase the complexity beyond that of a simpler Bayesian hierarchical model sans mixture components. We provide a specific representation for decomposable graphs with useful algorithms for local updates.
18

Schildcrout, Jonathan Scott. « Marginal modeling of longitudinal, binary response data : semiparametric and parametric estimation with long response series and an efficient outcome dependent sampling design ». Thesis, University of Washington (UW restricted), 2004. http://hdl.handle.net/1773/9540.

19

Joshi, Niranjan Bhaskar. « Non-parametric probability density function estimation for medical images ». Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:ebc6af07-770b-4fee-9dc9-5ebbe452a0c1.

Abstract:
The estimation of probability density functions (PDF) of intensity values plays an important role in medical image analysis. Non-parametric PDF estimation methods have the advantage of generality in their application. The two most popular estimators in image analysis methods to perform the non-parametric PDF estimation task are the histogram and the kernel density estimator. But these popular estimators crucially need to be ‘tuned’ by setting a number of parameters and may be either computationally inefficient or need a large amount of training data. In this thesis, we critically analyse and further develop a recently proposed non-parametric PDF estimation method for signals, called the NP windows method. We propose three new algorithms to compute PDF estimates using the NP windows method. One of these algorithms, called the log-basis algorithm, provides an easier and faster way to compute the NP windows estimate, and allows us to compare the NP windows method with the two existing popular estimators. Results show that the NP windows method is fast and can estimate PDFs with a significantly smaller amount of training data. Moreover, it does not require any additional parameter settings. To demonstrate utility of the NP windows method in image analysis we consider its application to image segmentation. To do this, we first describe the distribution of intensity values in the image with a mixture of non-parametric distributions. We estimate these distributions using the NP windows method. We then use this novel mixture model to evolve curves with the well-known level set framework for image segmentation. We also take into account the partial volume effect that assumes importance in medical image analysis methods. In the final part of the thesis, we apply our non-parametric mixture model (NPMM) based level set segmentation framework to segment colorectal MR images. The segmentation of colorectal MR images is made challenging due to sparsity and ambiguity of features, presence of various artifacts, and complex anatomy of the region. We propose to use the monogenic signal (local energy, phase, and orientation) to overcome the first difficulty, and the NPMM to overcome the remaining two. Results are improved substantially on those that have been reported previously. We also present various ways to visualise clinically useful information obtained with our segmentations in a 3-dimensional manner.
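For context, the kernel density estimator that this thesis takes as one of its baselines can be sketched in a few lines (illustrative only; the NP windows method itself is not reproduced here, and the data below are a synthetic stand-in for image intensities).

    import numpy as np
    from scipy.stats import gaussian_kde

    intensities = np.random.default_rng(1).normal(100, 15, size=500)  # synthetic stand-in for voxel intensities
    kde = gaussian_kde(intensities)      # bandwidth set by Scott's rule: the kind of tuning the thesis criticizes
    grid = np.linspace(intensities.min(), intensities.max(), 200)
    pdf_hat = kde(grid)                  # estimated PDF evaluated on a grid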
20

Ren, Yan. « A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data ». University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337085531.

21

Monteiro, Andre Monteiro Dalmeida. « Non-parametric estimations of interest rate curves : model selection criterion, performance determinant factors and bid-ask spreads ». Pontifícia Universidade Católica do Rio de Janeiro, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2684@1.

Abstract:
This thesis investigates interest rate curve estimation under a non-parametric approach. The text is divided into two parts. The first one focuses on which criterion to use to select the best-performing method for interpolating the Brazilian interest rate curve in a given sample. A selection criterion is proposed to measure out-of-sample performance by combining leave-k-out cross-validation resampling strategies applied to the whole sample of curves, where 1 ≤ k ≤ K and K is a function of the number of contracts observed in each curve. Some particularities of the problem substantially reduce the required computational effort, making the proposed criterion feasible. The data sample is daily, from January 1997 to February 2001. The proposed criterion selected the natural cubic spline, used as a perfect-fitting estimation method. Considering the trading rate precision, the spline is unbiased. However, quantitative analysis of performance determinant factors showed the existence of out-of-sample error heteroskedasticity. From a conditional variance specification of these errors, a security interval scheme is proposed for interest rates generated by the perfect-fitting natural cubic spline. A backtest showed that the proposed security interval is consistent, accommodating the assumptions and approximations involved. The second part estimates the US dollar-Libor interest rate swap curve using the Support Vector Machine (SVM), a method derived from Statistical Learning Theory. SVM research has produced important theoretical results, but implementations on real regression problems are still scarce. SVM has attractive characteristics for interest rate curve modelling: it is able to introduce, already in the estimation process, a priori information about the curve shape and about liquidity and price formation aspects of the contracts from which the curve is built. The latter information is quantified by the bid-ask spread (BAS) of each contract. The basic SVM formulation is modified to incorporate different values of the bid-ask spread without losing its properties. Particular attention is given to how to extract a priori information from the typical swap curve shape for SVM parameter selection. The data sample is daily, from March 1997 to April 2001. The out-of-sample performances of several SVM specifications are compared with those of other estimation methods. SVM achieved the best control of the trade-off between bias and variance of out-of-sample errors.
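A minimal sketch of the perfect-fitting natural cubic spline interpolation selected in the first part of the thesis; the tenors and rates below are hypothetical, and the leave-k-out selection criterion and security-interval scheme are not reproduced here.

    import numpy as np
    from scipy.interpolate import CubicSpline

    maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0])   # hypothetical tenors, in years
    rates = np.array([0.18, 0.19, 0.20, 0.21, 0.22])    # hypothetical observed rates for those tenors
    curve = CubicSpline(maturities, rates, bc_type='natural')  # natural cubic spline passes through every point
    print(curve(3.0))                                   # interpolated rate at a 3-year maturity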
22

Remella, Siva Rama Karthik. « Steady State Mathematical Modeling of Non-Conventional Loop Heat Pipes : A Parametric and a Design Approach ». University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353154991.

23

Brandt, James M. « A parametric cost model for estimating operating and support costs of US Navy (Non-Nuclear) surface ships ». Thesis, Naval Postgraduate School, Monterey, Calif., 1999. Available from National Technical Information Service, Springfield, Va. http://handle.dtic.mil/100.2/ADA363539.

Abstract:
Thesis (M.S. in Operations Research), Naval Postgraduate School, June 1999. Thesis advisor(s): Timothy P. Anderson, Samuel E. Buttrey. Includes bibliographical references (p. 171). Also available online.
24

Rydén, Patrik. « Estimation of the reliability of systems described by the Daniels Load-Sharing Model ». Licentiate thesis, Umeå universitet, Matematisk statistik, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46724.

Abstract:
We consider the problem of estimating the failure stresses of bundles (i.e. the tensile forces that destroy the bundles), constructed of several statistically similar fibres, given a particular kind of censored data. Each bundle consists of several fibres which have their own independent identically distributed failure stresses, and where the force applied on a bundle at any moment is distributed equally between the unbroken fibres in the bundle. A bundle with these properties is an example of an equal load-sharing system, often referred to as the Daniels failure model. The testing of several bundles generates a special kind of censored data, which is complexly structured. Strongly consistent non-parametric estimators of the distribution laws of bundles are obtained by applying the theory of martingales, and by using the observed data. It is proved that random sampling, with replacement from the statistical data related to each tested bundle, can be used to obtain asymptotically correct estimators for the distribution functions of deviations of non-parametric estimators from true values. In the case when the failure stresses of the fibres are described by a Weibull distribution, we obtain strongly consistent parametric maximum likelihood estimators of the distribution functions of failure stresses of bundles, by using the complexly structured data. Numerical examples illustrate the behavior of the obtained estimators.
25

Lanhede, Daniel. « Non-parametric Statistical Process Control : Evaluation and Implementation of Methods for Statistical Process Control at GE Healthcare, Umeå ». Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-104512.

Abstract:
Statistical process control (SPC) is a toolbox to detect changes in the output of a process distribution. It can serve as a valuable resource to maintain high quality in a manufacturing process. This report is based on the work of evaluating and implementing methods for SPC in the process of chromatography instrument manufacturing at GE Healthcare, Umeå. To handle low-volume and non-normally distributed process output data, non-parametric methods are considered. Eight control charts, three for Phase I analysis and five for Phase II analysis, are evaluated in this study. The usability of the charts is assessed based on ease of interpretation and the performance in detecting distributional changes. The latter is evaluated with simulations. The result of the project is the implementation of the RS/P-chart, suggested by Capizzi et al (2013), for Phase I analysis. Of the considered Phase I methods (and simulation scenarios), the RS/P-chart has the highest overall probability of detecting a variety of distributional changes. Further, the RS/P-chart is easily interpreted, facilitating the analysis. For Phase II analysis, two control charts have been implemented: one based on the Mann-Whitney U statistic, suggested by Chakraborti et al (2008), and one based on the Mood test statistic for dispersion, suggested by Ghute et al (2014). These are chosen mainly for their ease of interpretation. To reduce the detection time for changes in the process distribution, the change-point chart based on the Cramer von Mises statistic, suggested by Ross et al (2012), could be used instead. Using single observations, instead of larger samples, this chart is updated more frequently. However, this effectively increases the false alarm rate, and the chart is also considered much more difficult for the SPC practitioner to interpret.
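As an illustration of the kind of Phase II monitoring statistic referred to above (a generic sketch; the actual charts use control limits calibrated to a target in-control run length, and the data here are hypothetical):

    import numpy as np
    from scipy.stats import mannwhitneyu

    reference = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 10.1])  # hypothetical Phase I (in-control) data
    subgroup = np.array([10.4, 10.6, 10.3, 10.5])                          # hypothetical new Phase II subgroup
    u_stat, p_value = mannwhitneyu(subgroup, reference, alternative='two-sided')
    signal = p_value < 0.0027    # illustrative alarm threshold, analogous to 3-sigma limits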
26

Deng, Chunqin. « Statistical Approach to Detect and Estimate Hormesis ». University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1004369636.

27

Dellagi, Hatem. « Estimations paramétrique et non paramétrique des données manquantes : application à l'agro-climatologie ». Paris 6, 1994. http://www.theses.fr/1994PA066546.

Abstract:
In this work we propose two methods for estimating missing data. For parametric estimation, and in order to solve the problem through prediction, we exploit the shifted estimator (E. D) of the autoregressive part of a scalar ARMA model to estimate the covariance matrix, whose strong consistency is proved under conditions that have the advantage of being expressed in terms of the trajectories, and to identify the coefficients of the moving-average part and the variance of the white noise. In correspondence analysis, in order to estimate the missing entries of a correspondence table, the problem is solved completely when a single value is missing. Existence is proved when several values are missing; since uniqueness is delicate, a linear combination of the missing values is obtained from the trace formula, whose minimization ensures the homogeneity of the correspondence table. Under the same criterion, we establish the reconstruction of an original value from piecewise linear coding.
28

Pavão, André Luis. « Modelos de duração aplicados à sobrevivência das empresas paulistas entre 2003 e 2007 ». Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/12/12140/tde-24072013-154206/.

Abstract:
This thesis presents the main factors behind the failure of enterprises in the State of São Paulo created between 2003 and 2007, based on a database provided by SEBRAE-SP (the small-business support service of the State of São Paulo) for this research. The final sample, built from data made available for the first time for studies of this kind, comprised 662 enterprises and 33 variables collected through a questionnaire answered directly by the firms. The analysis tested econometric models, drawn from the duration-model literature, to identify the factors most critical for firm survival and to distinguish two groups: enterprises whose longevity rests on actions that promote productivity and efficiency gains, and those lacking such actions, which will most likely leave the market. The three types of models used in this work (non-parametric, semi-parametric proportional hazards, and parametric) produced similar results; in the proportional hazards approach, the results were segmented by firm size and sector. For micro enterprises, the entrepreneur's age and investment in employee qualification proved to be important mitigators of the risk of failure, whereas for small enterprises, process innovation and the preparation of a business plan stood out among the set of variables. Among firms in the commerce and service sectors, those in the first group that monitored their finances (cash flow) showed a lower risk of failure; for those in the service sector, the entrepreneur's age, investment in employee qualification and the firm's size at birth were important in reducing the risk of failure over time. Another result, obtained with the parametric model under a Weibull distribution, is that the risk of a firm leaving the market is increasing, at least over the firm's first five years. However, this result should not be generalized to periods longer than five years.
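The statement about increasing exit risk can be read through the standard Weibull hazard used in parametric duration models (standard notation, not taken from the thesis):

    h(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1},

which is increasing in t whenever the shape parameter satisfies k > 1; an estimated shape above one is therefore consistent with the finding that the risk of leaving the market rises over the firms' first five years.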
29

Gebremeskel, Haftu Gebrehiwot. « Implementing hierarchical bayesian model to fertility data : the case of Ethiopia ». Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424458.

Abstract:
Background: Ethiopia is a country with 9 ethnically-based administrative regions and 2 city administrations, often cited, among other things, with high fertility rates and rapid population growth rate. Despite the country’s effort in their reduction, they still remain high, especially at regional-level. To this end, the study of fertility in Ethiopia, particularly on its regions, where fertility variation and its repercussion are at boiling point, is paramount important. An easy way of finding different characteristics of a fertility distribution is to build a suitable model of fertility pattern through different mathematical curves. ASFR is worthwhile in this regard. In general, the age-specific fertility pattern is said to have a typical shape common to all human populations through years though many countries some from Africa has already started showing a deviation from this classical bell shaped curve. Some of existing models are therefore inadequate to describe patterns of many of the African countries including Ethiopia. In order to describe this shape (ASF curve), a number of parametric and non-parametric functions have been exploited in the developed world though fitting these models to curves of Africa in general and that of Ethiopian in particular data has not been undertaken yet. To accurately model fertility patterns in Ethiopia, a new mathematical model that is both easily used, and provides good fit for the data is required. Objective: The principal goals of this thesis are therefore fourfold: (1). to examine the pattern of ASFRs at country and regional level,in Ethiopia; (2). to propose a model that best captures various shapes of ASFRs at both country and regional level, and then compare the performance of the model with some existing ones; (3). to fit the proposed model using Hierarchical Bayesian techniques and show that this method is flexible enough for local estimates vis-´a-vis traditional formula, where the estimates might be very imprecise, due to low sample size; and (4). to compare the resulting estimates obtained with the non-hierarchical procedures, such as Bayesian and Maximum likelihood counterparts. Methodology: In this study, we proposed a four parametric parametric model, Skew Normal model, to fit the fertility schedules, and showed that it is flexible enough in capturing fertility patterns shown at country level and most regions of Ethiopia. In order to determine the performance of this proposed model, we conducted a preliminary analysis along with ten other commonly used parametric and non-parametric models in demographic literature, namely: Quadratic Spline function, Cubic Splines, Coale-Trussell function, Beta, Gamma, Hadwiger distribution, Polynomial models, the Adjusted Error Model, Gompertz curve, Skew Normal, and Peristera & Kostaki Model. The criterion followed in fitting these models was Nonlinear Regression with nonlinear least squares (nls) estimation. We used Akaike Information Criterion (AIC) as model selecction criterion. For many demographers, however, estimating regional-specific ASFR model and the associated uncertainty introduced due those factors can be difficult, especially in a situation where we have extremely varying sample size among different regions. Recently, it has been proposed that Hierarchical procedures might provide more reliable parameter estimates than Non-Hierarchical procedures, such as complete pooling and independence to make local/regional-level analyses. 
In this study, a Hierarchical Bayesian procedure was therefore formulated to explore the posterior distribution of the model parameters (generating region-specific ASFR point estimates and uncertainty bounds). In addition, non-hierarchical approaches, namely the Bayesian and maximum likelihood methods, were also used to estimate the parameters, and their results were compared with the Hierarchical Bayesian counterparts. Gibbs sampling together with the Metropolis-Hastings algorithm, implemented in R (R Development Core Team, 2005), was applied to draw the posterior samples for each parameter. A data augmentation method was also implemented to ease the sampling process. Sensitivity analysis, convergence diagnostics and model checking were conducted thoroughly to assess the robustness of the results. In all cases, non-informative prior distributions were used for all regional parameter vectors, in order to reflect the lack of knowledge about these random variables. Result: The results of the preliminary analysis showed that the Akaike Information Criterion (AIC) of the proposed Skew Normal (SN) model is lowest in the capital Addis Ababa, in Dire Dawa, Harari, Affar, Gambela and Benshangul-Gumuz, and for the country-level data as well. In the remaining regions, namely Tigray, Oromiya, Amhara, Somali and SNNP, its value was higher than that of some models and lower than that of the others. This indicates that the proposed model captured the pattern of the empirical fertility data of Ethiopia and its regions better than the other existing models considered in 6 of the 11 regions. The results from the HBA indicate that most of the posterior means were much closer to the true fertility values; they were also more precise, with lower uncertainty and narrower credible intervals, than their ML and simple Bayesian analogues. Conclusion: From the preliminary analysis, it can be concluded that the proposed model captured the ASFR pattern at national and regional level better than the other common models considered. Following this result, we conducted inference and prediction on the model parameters using the three approaches: HBA, BA and ML. The overall results suggest several points. One is that HBA was the best approach for such data, as it gave more consistent and more precise estimates (lower uncertainty) than the other approaches. Generally, both the ML and the Bayesian method can be used to analyse our model, but they apply under different conditions: the ML method can be applied when precise values of the model parameters are identifiable and a large sample size can be obtained, whereas the Bayesian method can be applied when there is uncertainty about the model parameters, prior knowledge about them is available, and only few data are available in the study.
Background: L’Etiopia è una nazione divisa in 9 regioni amministrative (definite su base etnica) e due città. Si tratta di una nazione citata spesso come esempio di alta fecondità e rapida crescita demografica. Nonostante gli sforzi del governo, fecondità e cresita della popolazione rimangono elevati, specialmente a livello regionale. Pertanto, lo studio della fecondità in Etiopia e nelle sue regioni – caraterizzate da un’alta variabilità – è di vitale importanza. Un modo semplice di rilevare le diverse caratteristiche della distribuzione della feconditàè quello di costruire in modello adatto, specificando diverse funzioni matematiche. In questo senso, vale la pena concentrarsi sui tassi specifici di fecondità, i quali mostrano una precisa forma comune a tutte le popolazioni. Tuttavia, molti paesi mostrano una “simmetrizzazione” che molti modelli non riescono a cogliere adeguatamente. Pertanto, per cogliere questa la forma dei tassi specifici, sono stati utilizzati alcuni modelli parametrici ma l’uso di tali modelliè ancora molto limitato in Africa ed in Etiopia in particolare. Obiettivo: In questo lavoro si utilizza un nuovo modello per modellare la fecondità in Etiopia con quattro obiettivi specifici: (1). esaminare la forma dei tassi specifici per età dell’Etiopia a livello nazionale e regionale; (2). proporre un modello che colga al meglio le varie forme dei tassi specifici sia a livello nazionale che regionale. La performance del modello proposto verrà confrontata con quella di altri modelli esistenti; (3). adattare la funzione di fecondità proposta attraverso un modello gerarchico Bayesiano e mostrare che tale modelloè sufficientemente flessibile per stimare la fecondità delle singole regioni – dove le stime possono essere imprecise a causa di una bassa numerosità campionaria; (4). confrontare le stime ottenute con quelle fornite da metodi non gerarchici (massima verosimiglianza o Bayesiana semplice) Metodologia: In questo studio, proponiamo un modello a 4 parametri, la Normale Asimmetrica, per modellare i tassi specifici di fecondità. Si mostra che questo modello è sufficientemente flessibile per cogliere adeguatamente le forme dei tassi specifici a livello sia nazionale che regionale. Per valutare la performance del modello, si è condotta un’analisi preliminare confrontandolo con altri dieci modelli parametrici e non parametrici usati nella letteratura demografica: la funzione splie quadratica, la Cubic-Spline, i modelli di Coale e Trussel, Beta, Gamma, Hadwiger, polinomiale, Gompertz, Peristera-Kostaki e l’Adjustment Error Model. I modelli sono stati stimati usando i minimi quadrati non lineari (nls) e il Criterio d’Informazione di Akaike viene usato per determinarne la performance. Tuttavia, la stima per le singole regioni pu‘o risultare difficile in situazioni dove abbiamo un’alta variabilità della numerosità campionaria. Si propone, quindi di usare procedure gerarchiche che permettono di ottenere stime più affidabili rispetto ai modelli non gerarchici (“pooling” completo o “unpooling”) per l’analisi a livello regionale. In questo studia si formula un modello Bayesiano gerarchico ottenendo la distribuzione a posteriori dei parametri per i tassi di fecnodità specifici a livello regionale e relativa stima dell’incertezza. Altri metodi non gerarchici (Bayesiano semplice e massima verosimiglianza) vengono anch’essi usati per confronto. Gli algoritmi Gibbs Sampling e Metropolis-Hastings vengono usati per campionare dalla distribuzione a posteriori di ogni parametro. 
Anche il metodo del “Data Augmentation” viene utilizzato per ottenere le stime. La robustezza dei risultati viene controllata attraverso un’analisi di sensibilità e l’opportuna diagnostica della convergenza degli algoritmi viene riportata nel testo. In tutti i casi, si sono usate distribuzioni a priori non-informative. Risultati: I risutlati ottenuti dall’analisi preliminare mostrano che il modello Skew Normal ha il pi`u basso AIC nelle regioni Addis Ababa, Dire Dawa, Harari, Affar, Gambela, Benshangul-Gumuz e anche per le stime nazionali. Nelle altre regioni (Tigray, Oromiya, Amhara, Somali e SNNP) il modello Skew Normal non risulta il milgiore, ma comunque mostra un buon adattamento ai dati. Dunque, il modello Skew Normal risulta il migliore in 6 regioni su 11 e sui tassi specifici di tutto il paese. Conclusioni: Dunque, il modello Skew Normal risulta globalmente il migliore. Da questo risultato iniziale, siè partiti per costruire i modelli Gerachico Bayesiano, Bayesiano semplice e di massima verosimiglianza. Il risultato del confronto tra questi tre approcci è che il modello gerarchico fornisce stime più preciso rispetto agli altri.
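To make the model-fitting step described above concrete, the following minimal Python sketch fits a four-parameter skew-normal fertility curve to age-specific fertility rates by nonlinear least squares and compares it with a symmetric (normal) curve via AIC. The ASFR values, the parameterization and the starting values are illustrative assumptions, not the thesis's data or exact specification.

import numpy as np
from scipy.stats import skewnorm, norm
from scipy.optimize import curve_fit

# Illustrative (made-up) age-specific fertility rates at 5-year age-group midpoints.
ages = np.array([17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5])
asfr = np.array([0.10, 0.23, 0.26, 0.21, 0.13, 0.05, 0.01])

def skew_normal_curve(x, c, loc, scale, shape):
    """Four-parameter skew-normal fertility curve: level c times a skew-normal density."""
    return c * skewnorm.pdf(x, shape, loc=loc, scale=scale)

def gaussian_curve(x, c, loc, scale):
    """Three-parameter symmetric (normal) curve, one of the competing shapes."""
    return c * norm.pdf(x, loc=loc, scale=scale)

def aic_from_ls(y, yhat, n_params):
    """Least-squares AIC: n*log(RSS/n) + 2k (additive constants dropped)."""
    rss = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_params

sn_popt, _ = curve_fit(skew_normal_curve, ages, asfr, p0=[5.0, 25.0, 8.0, 2.0], maxfev=10000)
g_popt, _ = curve_fit(gaussian_curve, ages, asfr, p0=[5.0, 28.0, 8.0], maxfev=10000)

aic_sn = aic_from_ls(asfr, skew_normal_curve(ages, *sn_popt), 4)
aic_g = aic_from_ls(asfr, gaussian_curve(ages, *g_popt), 3)
print(f"AIC skew-normal: {aic_sn:.2f}, AIC normal: {aic_g:.2f}")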
Styles APA, Harvard, Vancouver, ISO, etc.
30

Le, Corff Sylvain. « Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées ». Phd thesis, Telecom ParisTech, 2012. http://tel.archives-ouvertes.fr/tel-00773405.

Texte intégral
Résumé :
In this thesis, we are interested in parameter estimation for hidden Markov chains, in a parametric framework and in a non-parametric framework. In the parametric case, we impose constraints on the computation of the proposed estimator: the first part of this thesis deals with online estimation of a parameter in the maximum likelihood sense. Estimating online means that the estimates must be produced without storing the observations. We propose a new online estimation method for hidden Markov chains based on the Expectation Maximization algorithm, called Block Online Expectation Maximization (BOEM). This algorithm is defined for hidden Markov chains with general state and observation spaces. The consistency of the algorithm as well as rates of convergence in probability are proved. For general state spaces, the numerical implementation of the BOEM algorithm requires sequential Monte Carlo methods - also called particle methods - to approximate conditional expectations under smoothing distributions that cannot be computed explicitly. We therefore propose a Monte Carlo approximation of the BOEM algorithm, called Monte Carlo BOEM. Among the assumptions needed for the convergence of the Monte Carlo BOEM algorithm, a control of the Lp norm of the Monte Carlo approximation error, explicit in the number of observations T and the number of particles N, is required. Consequently, a second part of this thesis is devoted to obtaining such controls for several sequential Monte Carlo methods: the Forward Filtering Backward Smoothing algorithm and the Forward Filtering Backward Simulation algorithm. We then consider applications of the Monte Carlo BOEM algorithm to simultaneous localization and mapping problems, which arise when a mobile moves in an unknown environment: the mobile has to be localized while a map of its environment is built. Finally, the last part of this thesis concerns non-parametric estimation in hidden Markov chains. The problem considered has received very little attention, and we therefore chose to address it in a specific setting. We assume that the chain (Xk) is a random walk on a compact subset of Rm whose increment distribution is known up to a scale factor a. We also assume that, for every k, Yk is an observation, in additive Gaussian noise, of f(Xk), where f is a function with values in Rl that we wish to estimate. The first result we establish is the identifiability of the statistical model considered. We also propose estimators of the function f and of the parameter a based on the pairwise log-likelihood of the observations, and we prove the convergence in probability of these estimators as the number of observations tends to infinity.
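The smoothing and filtering expectations needed by the Monte Carlo BOEM algorithm are approximated with sequential Monte Carlo. The Python sketch below is not the BOEM algorithm itself; it only illustrates, under an assumed linear-Gaussian state-space model, the kind of bootstrap particle filter on which such approximations rest.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian state-space model (an assumption, not the thesis's model):
# X_k = 0.9 X_{k-1} + U_k, U_k ~ N(0, 1);  Y_k = X_k + V_k, V_k ~ N(0, 0.5^2)
phi, sigma_u, sigma_v, T, N = 0.9, 1.0, 0.5, 200, 500

# Simulate a trajectory and its noisy observations.
x = np.zeros(T)
for k in range(1, T):
    x[k] = phi * x[k - 1] + sigma_u * rng.normal()
y = x + sigma_v * rng.normal(size=T)

# Bootstrap particle filter: propagate with the prior kernel, weight by the likelihood,
# resample multinomially, and estimate E[X_k | Y_{1:k}] at each step.
particles = rng.normal(size=N)
filter_means = np.zeros(T)
for k in range(T):
    particles = phi * particles + sigma_u * rng.normal(size=N)
    logw = -0.5 * ((y[k] - particles) / sigma_v) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means[k] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]

print("RMSE of the filtered mean:", np.sqrt(np.mean((filter_means - x) ** 2)))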
Styles APA, Harvard, Vancouver, ISO, etc.
31

ABUABIAH, MOHAMMAD IBRAHIM FAREED. « A set-membership approach to direct data-driven control design ». Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2737672.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Belhedi, Amira. « Modélisation du bruit et étalonnage de la mesure de profondeur des caméras Temps-de-Vol ». Thesis, Clermont-Ferrand 1, 2013. http://www.theses.fr/2013CLF1MM08/document.

Texte intégral
Résumé :
Avec l'apparition récente des caméras 3D, des perspectives nouvelles pour différentes applications de l'interprétation de scène se sont ouvertes. Cependant, ces caméras ont des limites qui affectent la précision de leurs mesures. En particulier pour les caméras Temps-de-Vol, deux types d'erreur peuvent être distingués : le bruit statistique de la caméra et la distorsion de la mesure de profondeur. Dans les travaux de la littérature des caméras Temps-de-Vol, le bruit est peu étudié et les modèles de distorsion de la mesure de profondeur sont généralement difficiles à mettre en œuvre et ne garantissent pas la précision requise pour certaines applications. L'objectif de cette thèse est donc d'étudier, modéliser et proposer un étalonnage précis et facile à mettre en œuvre de ces 2 types d'erreur des caméras Temps-de-Vol. Pour la modélisation du bruit comme pour la distorsion de la mesure de profondeur, deux solutions sont proposées présentant chacune une solution à un problème différent. La première vise à fournir un modèle précis alors que le second favorise la simplicité de la mise en œuvre. Ainsi, pour le bruit, alors que la majorité des modèles reposent uniquement sur l'information d'amplitude, nous proposons un premier modèle qui intègre aussi la position du pixel dans l'image. Pour encore une meilleure précision, nous proposons un modèle où l'amplitude est remplacée par la profondeur de l'objet et le temps d'intégration. S'agissant de la distorsion de la mesure de profondeur, nous proposons une première solution basée sur un modèle non-paramétrique garantissant une meilleure précision. Ensuite, pour fournir une solution plus facile à mettre en œuvre que la précédente et que celles de l'état de l'art, nous nous basons sur la connaissance à priori de la géométrie planaire de la scène observée
3D cameras open new possibilities in fields such as 3D reconstruction, augmented reality and video surveillance, since they provide depth information at high frame rates. However, they have limitations that affect the accuracy of their measurements. For TOF cameras in particular, two types of error can be distinguished: the stochastic camera noise and the depth distortion. In the state of the art on TOF cameras, the noise is not well studied, and the depth distortion models are difficult to use and do not guarantee the accuracy required for some applications. The objective of this thesis is to study, model and calibrate these two errors of TOF cameras in a way that is both accurate and easy to set up. For the noise as for the depth distortion, two solutions are proposed, each addressing a different problem: the former aims at an accurate model, while the latter promotes simplicity of set-up. Thus, for the noise, whereas most existing models are based only on the amplitude information, we propose a first model that also integrates the pixel position in the image. For better accuracy, we propose a second model in which the amplitude is replaced by the depth of the object and the integration time. Regarding the depth distortion, we propose a first solution based on a non-parametric model, which guarantees better accuracy. We then use prior knowledge of the planar geometry of the observed scene to provide a solution that is easier to use than the previous one and than those of the literature.
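As a rough illustration of an amplitude-based noise model of the kind discussed above, the following Python sketch fits a simple parametric law sigma(A) = a/sqrt(A) + b to per-amplitude empirical standard deviations. The calibration data are simulated and the functional form is an assumption for illustration, not the model proposed in the thesis.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Simulated calibration data: repeated depth measurements of a static scene at several
# amplitude levels; the "true" law 8/sqrt(A) + 0.5 mm only serves to generate the example.
amplitudes = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
true_sigma = 8.0 / np.sqrt(amplitudes) + 0.5
empirical_sigma = np.array([np.std(rng.normal(0.0, s, size=2000)) for s in true_sigma])

def noise_model(amplitude, a, b):
    """Amplitude-based noise model: the depth-error std decreases with the amplitude."""
    return a / np.sqrt(amplitude) + b

params, _ = curve_fit(noise_model, amplitudes, empirical_sigma, p0=[1.0, 0.1])
print("fitted a = %.3f mm, b = %.3f mm" % tuple(params))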
Styles APA, Harvard, Vancouver, ISO, etc.
33

GITTO, SIMONE. « The measurement of productivity and efficiency : theory and applications ». Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/835.

Texte intégral
Résumé :
Questa tesi discute diversi metodi per la stima della produttività ed efficienza. Sono discussi i recenti miglioramenti nelle tecniche di stima con tre diverse applicazioni. Nella prima applicazione, l’indice di Tornqvist è usato per stimare la produttività totale dei fattori di Alitalia, la principale compagnia aerea italiana. Nella seconda applicazione, la DEA bootstrap è impiegata per investigare l’efficienza e le caratteristiche manageriali degli aeroporti italiani. Nella terza applicazione, uno stimatore iperbolico alpha-quantile è utilizzato nello studio dell’efficienza e della produttività degli ospedali italiani.
This thesis presents several methods to measure productivity and efficiency. Recent improvements in these methods are discussed and three applications are reported. In particular, I present an application of the Tornqvist index numbers to measure the total factor productivity of Alitalia, the main Italian airline; a study of the Italian airport sector with the use of bootstrapped DEA; and an investigation of the efficiency of public Italian hospitals using a hyperbolic alpha-quantile estimator.
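A minimal Python sketch of the Törnqvist total factor productivity index used in the first application, computed between two periods from output revenue shares and input cost shares; the airline-style figures are illustrative assumptions, not Alitalia data.

import numpy as np

def tornqvist_tfp_growth(y0, y1, rev_share0, rev_share1, x0, x1, cost_share0, cost_share1):
    """Log TFP growth between two periods: Tornqvist output index minus Tornqvist input
    index, with revenue/cost shares averaged across the two periods."""
    out = np.sum(0.5 * (rev_share0 + rev_share1) * np.log(y1 / y0))
    inp = np.sum(0.5 * (cost_share0 + cost_share1) * np.log(x1 / x0))
    return out - inp

# Illustrative data: outputs = passenger-km, cargo tonne-km; inputs = labour, fuel, capital.
y0, y1 = np.array([100.0, 20.0]), np.array([108.0, 22.0])
r0, r1 = np.array([0.8, 0.2]), np.array([0.78, 0.22])
x0, x1 = np.array([50.0, 30.0, 40.0]), np.array([51.0, 32.0, 40.5])
s0, s1 = np.array([0.4, 0.3, 0.3]), np.array([0.39, 0.32, 0.29])

g = tornqvist_tfp_growth(y0, y1, r0, r1, x0, x1, s0, s1)
print(f"TFP index (period 1 / period 0): {np.exp(g):.4f}")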
Styles APA, Harvard, Vancouver, ISO, etc.
34

Eamrurksiri, Araya. « Applying Machine Learning to LTE/5G Performance Trend Analysis ». Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139126.

Texte intégral
Résumé :
The core idea of this thesis is to reduce the workload of manual inspection when the performance analysis of an updated software is required. The Central Processing Unit (CPU) utilization, which is one of the essential factors for evaluating the performance, is analyzed. The purpose of this work is to apply machine learning techniques that are suitable for detecting the state of the CPU utilization and any changes in the test environment that affect the CPU utilization. The detection relies on a Markov switching model to identify structural changes, which are assumed to follow an unobserved Markov chain, in the time series data. The historical behavior of the data is described by a first-order autoregression, so the Markov switching model becomes a Markov switching autoregressive model. Another approach based on a non-parametric analysis, a distribution-free method that requires fewer assumptions, called the E-divisive method, is also proposed. This method uses a hierarchical clustering algorithm to detect multiple change point locations in the time series data. As the data used in this analysis does not contain any ground truth, the methods are evaluated on simulated datasets with known states. These simulated datasets are also used for studying and comparing the Markov switching autoregressive model and the E-divisive method. Results show that the former method is preferable because of its better performance in detecting changes. Some information about the state of the CPU utilization is also obtained from the Markov switching model. The E-divisive method is shown to have less power in detecting changes and a higher rate of missed detections. The results from applying the Markov switching autoregressive model to the real data are presented with interpretations and discussions.
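Since the methods are evaluated on simulated series with known states, the following Python sketch generates such a series from a two-regime Markov switching autoregressive model; the regime parameters are illustrative assumptions, not those estimated in the thesis.

import numpy as np

rng = np.random.default_rng(42)

# Two-regime Markov switching AR(1): the regime follows an unobserved Markov chain,
# and within each regime the series is a first-order autoregression.
P = np.array([[0.98, 0.02],      # transition matrix of the hidden chain
              [0.05, 0.95]])
mu = np.array([30.0, 60.0])      # regime-specific mean CPU utilisation (%)
phi = np.array([0.6, 0.8])       # regime-specific AR(1) coefficients
sigma = np.array([2.0, 5.0])     # regime-specific innovation std

T = 1000
states = np.zeros(T, dtype=int)
y = np.zeros(T)
y[0] = mu[0]
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
    s = states[t]
    y[t] = mu[s] + phi[s] * (y[t - 1] - mu[s]) + sigma[s] * rng.normal()

# The known `states` vector is the ground truth against which a fitted Markov switching
# AR model or the E-divisive change points can be scored.
print("time spent in regime 1: %.1f%%" % (100 * states.mean()))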
Styles APA, Harvard, Vancouver, ISO, etc.
35

Balzotti, Christopher Stephen. « Multidisciplinary Assessment and Documentation of Past and Present Human Impacts on the Neotropical Forests of Petén, Guatemala ». BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2129.

Texte intégral
Résumé :
Tropical forests provide important habitat for a tremendous diversity of plant and animal species. However, limitations in measuring and monitoring the structure and function of tropical forests have caused these systems to remain poorly understood. Remote-sensing technology has provided a powerful tool for quantifying structural patterns and associating them with resource use. Satellite and aerial platforms can be used to collect remotely sensed images of tropical forests that can be applied to ecological research and management. Chapter 1 highlights the resources available for tropical forest remote sensing and presents a case study that demonstrates its application to a neotropical forest located in the Petén region of northern Guatemala. The ancient polity of Tikal has been extensively studied by archaeologists and soil scientists, but little is known about the subsistence and ancient farming techniques that sustained its inhabitants. The objective of chapter 2 was to create predictive models for ancient maize (Zea mays L.) agriculture in the Tikal National Park, Petén, Guatemala, improving our understanding of settlement patterns and the ecological potentials surrounding the site in a cost-effective manner. Ancient maize agriculture was described in this study by the carbon (C) isotopic signatures left in the soil humin fraction. Probability models predicting C isotopic enrichment and carbonate C were used to outline areas of potential long-term maize agriculture. It was found that the Tikal area supports a great variety of potential food production systems, and the models suggest that multiple maize agricultural practices were used.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Chikhaoui, Khaoula. « Conception robuste de structures périodiques à non-linéarités fonctionnelles ». Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD029/document.

Texte intégral
Résumé :
L’analyse dynamique des structures de grandes dimensions incluant de nombreux paramètres incertains et des non-linéarités localisées ou réparties peut être numériquement prohibitive. Afin de surmonter ce problème, des modèles d’approximation peuvent être développés pour reproduire avec précision et à faible coût de calcul la réponse de la structure.L’objectif de la première partie de ce mémoire est de développer des modèles numériques robustes vis-à-vis des modifications structurales (non-linéarités localisées, perturbations ou incertitudes paramétriques) et « légers » au sens de la réduction de la taille. Ces modèles sont construits, selon les approches de condensation directe et par synthèse modale, en enrichissant des bases de réduction tronquées, modale et de Craig-Bampton respectivement, avec des résidus statiques prenant compte des modifications structurales. Pour propager les incertitudes, l’accent est mis particulièrement sur la méthode du chaos polynomial généralisé. Sa combinaison avec les modèles réduits ainsi obtenus permet de créer des métamodèles mono et bi-niveaux, respectivement. Les deux métamodèles proposés sont comparés à d’autres métamodèles basés sur les méthodes du chaos polynomial généralisé et du Latin Hypercube appliquées sur des modèles complets et réduits. Les métamodèles proposés permettent d’approximer les comportements structuraux avec un coût de calcul raisonnable et sans perte significative de précision.La deuxième partie de ce mémoire est consacrée à l’analyse dynamique des structures périodiques non-linéaires en présence des imperfections : perturbations des paramètres structuraux ou incertitudes paramétriques. Deux études : déterministe ou stochastique, respectivement, sont donc menées. Pour ces deux configurations, un modèle analytique discret générique est proposé. Il consiste à appliquer la méthode des échelles multiples et la méthode de perturbation pour résoudre l’équation de mouvement et de projecter la solution obtenue sur des modes d’ondes stationnaires. Le modèle proposé conduit à un ensemble d’équations algébriques complexes couplées, fonctions du nombre et des positions des imperfections dans la structure. La propagation des incertitudes à travers le modèle ainsi construit est finalement assurée par les méthodes du Latin Hypercube et du chaos polynomial généralisé. La robustesse de la dynamique collective vis-à-vis des imperfections est étudiée à travers l’analyse statistique de la dispersion des réponses fréquentielles et des bassins d’attraction dans le domaine de multistabilité. L’étude numérique montre que la présence des imperfections dans une structure périodique renforce sa non-linéarité, élargit son domaine de multistabilité et génère une multiplicité de branches multimodale
Dynamic analysis of large-scale structures including several uncertain parameters and localized or distributed nonlinearities may be computationally unaffordable. In order to overcome this issue, approximation models can be developed to reproduce the structural response accurately at a low computational cost. The purpose of the first part of this thesis is to develop numerical models which are robust against structural modifications (localized nonlinearities, parametric uncertainties or perturbations) and which reduce the size of the initial problem. These models are created, following the direct condensation and component mode synthesis approaches, by enriching truncated modal reduction bases and Craig-Bampton transformations, respectively, with static residual vectors accounting for the structural modifications. To propagate uncertainties through these first-level and second-level reduced order models, we focus in particular on the generalized polynomial chaos method. Combining these methods yields first-level and second-level metamodels, respectively. The two proposed metamodels are compared with other metamodels based on the polynomial chaos and Latin Hypercube methods applied to reduced and full models. The proposed metamodels approximate the structural behavior at a low computational cost without a significant loss of accuracy. The second part of this thesis is devoted to the dynamic analysis of nonlinear periodic structures in the presence of imperfections: parametric perturbations or uncertainties. Deterministic and stochastic analyses, respectively, are therefore carried out. For both configurations, a generic discrete analytical model is proposed. It consists in applying the multiple scales method and perturbation theory to solve the equation of motion, and then projecting the resulting solution on standing wave modes. The proposed model leads to a set of coupled complex algebraic equations, depending on the number and positions of the imperfections in the structure. Uncertainty propagation through the proposed model is finally performed using the Latin Hypercube method and the generalized polynomial chaos expansion. The robustness of the collective dynamics against imperfections is studied through statistical analysis of the dispersion of the frequency responses and of the basins of attraction in the multistability domain. Numerical results show that the presence of imperfections in a periodic structure strengthens its nonlinearity, expands its multistability domain and generates a multiplicity of multimodal branches.
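The following Python sketch illustrates the principle of a non-intrusive generalized polynomial chaos metamodel on a toy one-dimensional problem: Legendre coefficients are fitted by least-squares regression on model evaluations. The toy response and the expansion degree are assumptions; the thesis applies gPC to reduced structural models, not to this function.

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)

# Toy model response: a scalar function of one uniform input on [-1, 1].
def model(xi):
    return np.exp(0.5 * xi) + 0.3 * np.sin(3 * xi)

# Non-intrusive gPC by regression: evaluate the model on input samples and fit the
# coefficients of a degree-6 Legendre expansion by least squares.
degree = 6
xi_samples = rng.uniform(-1.0, 1.0, size=200)
y_samples = model(xi_samples)
vandermonde = legendre.legvander(xi_samples, degree)   # shape (200, degree + 1)
coeffs, *_ = np.linalg.lstsq(vandermonde, y_samples, rcond=None)

# The metamodel is now cheap to evaluate; check its accuracy on fresh points.
xi_test = rng.uniform(-1.0, 1.0, size=1000)
err = model(xi_test) - legendre.legval(xi_test, coeffs)
print("RMS metamodel error: %.2e" % np.sqrt(np.mean(err ** 2)))
print("gPC mean estimate  : %.4f" % coeffs[0])   # P0 coefficient = mean under the uniform input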
Styles APA, Harvard, Vancouver, ISO, etc.
37

Channarond, Antoine. « Recherche de structure dans un graphe aléatoire : modèles à espace latent ». Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112338/document.

Texte intégral
Résumé :
Cette thèse aborde le problème de la recherche d'une structure (ou clustering) dans lesnoeuds d'un graphe. Dans le cadre des modèles aléatoires à variables latentes, on attribue à chaque noeud i une variable aléatoire non observée (latente) Zi, et la probabilité de connexion des noeuds i et j dépend conditionnellement de Zi et Zj . Contrairement au modèle d'Erdos-Rényi, les connexions ne sont pas indépendantes identiquement distribuées; les variables latentes régissent la loi des connexions des noeuds. Ces modèles sont donc hétérogènes, et leur structure est décrite par les variables latentes et leur loi; ce pourquoi on s'attache à en faire l'inférence à partir du graphe, seule variable observée.La volonté commune des deux travaux originaux de cette thèse est de proposer des méthodes d'inférence de ces modèles, consistentes et de complexité algorithmique au plus linéaire en le nombre de noeuds ou d'arêtes, de sorte à pouvoir traiter de grands graphes en temps raisonnable. Ils sont aussi tous deux fondés sur une étude fine de la distribution des degrés, normalisés de façon convenable selon le modèle.Le premier travail concerne le Stochastic Blockmodel. Nous y montrons la consistence d'un algorithme de classiffcation non supervisée à l'aide d'inégalités de concentration. Nous en déduisons une méthode d'estimation des paramètres, de sélection de modèles pour le nombre de classes latentes, et un test de la présence d'une ou plusieurs classes latentes (absence ou présence de clustering), et nous montrons leur consistence.Dans le deuxième travail, les variables latentes sont des positions dans l'espace ℝd, admettant une densité f, et la probabilité de connexion dépend de la distance entre les positions des noeuds. Les clusters sont définis comme les composantes connexes de l'ensemble de niveau t > 0 fixé de f, et l'objectif est d'en estimer le nombre à partir du graphe. Nous estimons la densité en les positions latentes des noeuds grâce à leur degré, ce qui permet d'établir une correspondance entre les clusters et les composantes connexes de certains sous-graphes du graphe observé, obtenus en retirant les nœuds de faible degré. En particulier, nous en déduisons un estimateur du nombre de clusters et montrons saconsistence en un certain sens
This thesis addresses the clustering of the nodes of a graph, in the framework of random models with latent variables. To each node i is allocated an unobserved (latent) variable Zi, and the probability of nodes i and j being connected depends conditionally on Zi and Zj. Unlike the Erdos-Renyi model, connections are not independent and identically distributed; the latent variables rule the connection distribution of the nodes. These models are thus heterogeneous, and their structure is fully described by the latent variables and their distribution. Hence we aim at inferring them from the graph, which is the only observed data. In both original works of this thesis, we propose consistent inference methods with a computational cost that is at most linear in the number of nodes or edges, so that large graphs can be processed in a reasonable time. Both are based on a study of the distribution of the degrees, normalized in a way suited to the model. The first work deals with the Stochastic Blockmodel. We show the consistency of an unsupervised classification algorithm using concentration inequalities. We deduce from it a parametric estimation method, a model selection method for the number of latent classes, and a clustering test (testing whether there is one cluster or more), which are all proved to be consistent. In the second work, the latent variables are positions in the space ℝd, having a density f. The connection probability depends on the distance between the node positions. The clusters are defined as the connected components of some level set of f. The goal is to estimate the number of such clusters from the observed graph only. We estimate the density at the latent positions of the nodes from their degrees, which allows us to establish a link between the clusters and the connected components of certain subgraphs of the observed graph, obtained by removing low-degree nodes. In particular, we derive an estimator of the number of clusters and show its consistency in a certain sense.
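A minimal Python sketch of the degree-based idea behind the first work: simulate a two-class Stochastic Blockmodel and recover the latent classes from the normalized degrees alone (here with a tiny one-dimensional two-means step). The block probabilities and sizes are illustrative assumptions, and this is not the thesis's exact algorithm or proof setting.

import numpy as np

rng = np.random.default_rng(7)

# Two-class Stochastic Blockmodel: the connection probability depends only on the
# pair of latent classes.  Draw each edge independently, keep the upper triangle,
# then symmetrize (no self-loops).
n = 2000
z = rng.choice(2, size=n, p=[0.5, 0.5])                 # latent classes
pi = np.array([[0.10, 0.03],
               [0.03, 0.06]])                           # connection probabilities
upper = rng.random((n, n)) < pi[z[:, None], z[None, :]]
adj = np.triu(upper, 1)
adj = adj | adj.T

# Degree-based clustering: normalized degrees concentrate around class-dependent values,
# so a simple one-dimensional split recovers the classes.
deg = adj.sum(axis=1) / (n - 1)
centers = np.array([deg.min(), deg.max()])
for _ in range(20):                                      # tiny 1-D two-means
    labels = np.abs(deg[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([deg[labels == k].mean() for k in range(2)])

accuracy = max(np.mean(labels == z), np.mean(labels != z))  # up to label switching
print("degree-clustering accuracy: %.3f" % accuracy)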
Styles APA, Harvard, Vancouver, ISO, etc.
38

Nacanabo, Amade. « Impact des chocs climatiques sur la sécurité alimentaire dans les pays sahéliens : approches macroéconomiques et microéconomiques ». Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL2007.

Texte intégral
Résumé :
Utilisé très souvent de façon métaphorique pour désigner les franges méridionales du Sahara, le Sahel du fait de sa position géographique est une région vulnérable au changement climatique. L’agriculture est fortement pluviale et largement dépendante des conditions climatiques. La prise en compte du changement climatique est indispensable dans la réalisation de la sécurité alimentaire au Sahel. En alliant travaux empiriques et théoriques, cette thèse se propose de contribuer à une meilleure compréhension de l’incidence du changement climatique sur la sécurité alimentaire au Sahel au niveau microéconomique et macroéconomique. Le premier chapitre examine au niveau macroéconomique, la situation de la sécurité alimentaire au Sahel, après avoir analysé son dynamisme démographique. Les résultats de ce chapitre montrent que le Sahel n’a pas encore entamé sa transition démographique. Le taux de croissance démographique est élevé par rapport à la moyenne de l’Afrique subsaharienne. La sous-alimentation est en baisse mais reste prégnante dans cette région. Réduire la sous-alimentation passe nécessairement par la production agricole, qui est tributaire des aléas climatiques. Le deuxième chapitre s’intéresse donc aux effets du changement climatique sur les rendements de certaines cultures (mil, sorgho et maïs) au Sahel. Les résultats indiquent que le changement climatique a un impact globalement négatif sur les rendements agricoles au Sahel. Cette analyse au niveau macroéconomique est ensuite complétée par deux chapitres qui, à un niveau microéconomique, se focalisent sur le comportement des agriculteurs au Sahel. Le troisième chapitre cherche ainsi à analyser l’impact des chocs climatiques mesurés par la perception des agriculteurs sur l’inefficience des parcelles agricoles. Il ressort de cette étude que les chocs climatiques augmentent l’inefficience des parcelles agricoles. A travers la baisse des rendements et l’inefficience des parcelles, le changement climatique peut affecter la pauvreté et la vulnérabilité alimentaire des ménages agricoles burkinabés. A cet effet, le quatrième chapitre identifie les déterminants individuels et contextuels de la pauvreté et la vulnérabilité alimentaire des ménages agricoles burkinabés. Les résultats relèvent qu’en plus des caractéristiques individuelles du ménage agricole comme sa taille ou le niveau d’éducation du chef du ménage, le contexte climatique de résidence permet d’expliquer sa pauvreté et vulnérabilité alimentaire
Often used metaphorically to refer to the southern fringes of the Sahara, the Sahel's geographical position makes it a region vulnerable to climate change. Agriculture is highly rain-fed and largely dependent on climatic conditions. If food security is to be achieved in the Sahel, climate change must be taken into account. By combining empirical and theoretical work, this thesis aims to contribute to a better understanding of the impact of climate change on food security in the Sahel at the microeconomic and macroeconomic levels. The first chapter examines the food security situation in the Sahel at the macroeconomic level, after analysing its demographic dynamism. The results of this chapter show that the Sahel has not yet begun its demographic transition. The demographic growth rate is high compared with the average for sub-Saharan Africa. Undernourishment is on the decline, but remains prevalent in the region. Reducing undernourishment necessarily involves agricultural production, which is dependent on the vagaries of the climate. The second chapter therefore looks at the effects of climate change on the yields of certain crops (millet, sorghum and maize) in the Sahel. The results indicate that climate change is having an overall negative impact on agricultural yields in the Sahel. This analysis at the macroeconomic level is then supplemented by two chapters which, at the microeconomic level, focus on the behaviour of farmers in the Sahel. The third chapter seeks to analyse the impact of climatic shocks, as measured by farmers' perceptions, on the inefficiency of agricultural plots. This study shows that climatic shocks increase the inefficiency of agricultural plots. Through lower yields and plot inefficiency, climate change may affect the poverty and food vulnerability of Burkinabé farming households. To this end, the fourth chapter identifies the individual and contextual determinants of poverty and food vulnerability among farming households in Burkina Faso. The results show that, in addition to the individual characteristics of farm households, such as their size or the level of education of the head of household, the climatic context in which they live helps to explain their poverty and food vulnerability
Styles APA, Harvard, Vancouver, ISO, etc.
39

Iuga, Relu Adrian. « Modélisation et analyse statistique de la formation des prix à travers les échelles, Market impact ». Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1090/document.

Texte intégral
Résumé :
Le développement des marchés électroniques organisés induit une pression constante sur la recherche académique en finance. L'impact sur le prix d'une transaction boursière portant sur une grande quantité d'actions sur une période courte est un sujet central. Contrôler et surveiller l'impact sur le prix est d'un grand intérêt pour les praticiens, sa modélisation est ainsi devenue un point central de la recherche quantitative de la finance. Historiquement, le calcul stochastique s'est progressivement imposé en finance, sous l'hypothèse implicite que les prix des actifs satisfont à des dynamiques diffusives. Mais ces hypothèses ne tiennent pas au niveau de la ``formation des prix'', c'est-à-dire lorsque l'on se place dans les échelles fines des participants de marché. Des nouvelles techniques mathématiques issues de la statistique des processus ponctuels s'imposent donc progressivement. Les observables (prix traité, prix milieu) apparaissent comme des événements se réalisant sur un réseau discret, le carnet d'ordre, et ceci à des échelles de temps très courtes (quelques dizaines de millisecondes). L'approche des prix vus comme des diffusions browniennes satisfaisant à des conditions d'équilibre devient plutôt une description macroscopique de phénomènes complexes issus de la formation des prix. Dans un premier chapitre, nous passons en revue les propriétés des marchés électroniques. Nous rappelons la limite des modèles diffusifs et introduisons les processus de Hawkes. En particulier, nous faisons un compte rendu de la recherche concernant le maket impact et nous présentons les avancées de cette thèse. Dans une seconde partie, nous introduisons un nouveau modèle d'impact à temps continu et espace discret en utilisant les processus de Hawkes. Nous montrons que ce modèle tient compte de la microstructure des marchés et est capable de reproduire des résultats empiriques récents comme la concavité de l'impact temporaire. Dans le troisième chapitre, nous étudions l'impact d'un grand volume d'action sur le processus de formation des prix à l'échelle journalière et à une plus grande échelle (plusieurs jours après l'exécution). Par ailleurs, nous utilisons notre modèle pour mettre en avant des nouveaux faits stylisés découverts dans notre base de données. Dans une quatrième partie, nous nous intéressons à une méthode non-paramétrique d'estimation pour un processus de Hawkes unidimensionnel. Cette méthode repose sur le lien entre la fonction d'auto-covariance et le noyau du processus de Hawkes. En particulier, nous étudions les performances de cet estimateur dans le sens de l'erreur quadratique sur les espaces de Sobolev et sur une certaine classe contenant des fonctions « très » lisses
The development of organized electronic markets puts constant pressure on academic research in finance. A central issue is the market impact, i.e. the impact on the price of a transaction involving a large amount of shares over a short period of time. Monitoring and controlling the market impact is of great interest for practitioners; its modeling has thus become a central topic of quantitative finance research. Historically, stochastic calculus gradually became established in finance, under the assumption that prices follow diffusive dynamics. But this assumption is not appropriate at the level of "price formation", i.e. when looking at the fine scales of market participants, and new mathematical techniques such as point processes are needed. The price (last trade, mid-price) appears as a sequence of events on a discrete grid, the order book, at very short time scales (milliseconds), and Brownian motion becomes rather a macroscopic description of the complex price formation process. In the first chapter, we review the properties of electronic markets. We recall the limits of diffusive models and introduce Hawkes processes. In particular, we review the market impact literature and present the contributions of this thesis. In the second part, we introduce a new market impact model, in continuous time and living on a discrete space, based on Hawkes processes. We show that this model takes into account the market microstructure and is able to reproduce recent empirical results such as the concavity of the temporary impact. In the third chapter, we investigate the impact of large orders on the price formation process at the intraday scale and at a larger scale (several days after the meta-order execution). Besides, we use our model to discuss new stylized facts discovered in our database. In the fourth part, we focus on non-parametric estimation for univariate Hawkes processes. Our method relies on the link between the auto-covariance function and the kernel of the process. In particular, we study the performance of the estimator in squared error loss over Sobolev spaces and over a certain class containing "very" smooth functions.
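Since the models above are built on Hawkes processes, the Python sketch below simulates a univariate Hawkes process with an exponential kernel by Ogata's thinning algorithm; the parameter values are illustrative assumptions and the exponential kernel is only one possible choice of excitation function.

import numpy as np

rng = np.random.default_rng(11)

def simulate_hawkes_exp(mu, alpha, beta, horizon):
    """Ogata thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Requires alpha / beta < 1 for stationarity."""
    events, t = [], 0.0
    while t < horizon:
        # The intensity decays between events, so its current value dominates until the next jump.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)            # candidate point
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:            # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

mu, alpha, beta, horizon = 0.5, 0.8, 1.2, 500.0
events = simulate_hawkes_exp(mu, alpha, beta, horizon)
print("simulated events:", len(events))
print("expected count under stationarity:", mu / (1 - alpha / beta) * horizon)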
Styles APA, Harvard, Vancouver, ISO, etc.
40

Kamari, Halaleh. « Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.

Texte intégral
Résumé :
Ce travail porte sur le problème de l'estimation d'un méta-modèle d'un modèle complexe, noté m. Le modèle m dépend de d variables d'entrées X1,...,Xd qui sont indépendantes et ont une loi connue. Le méta-modèle, noté f∗, approche la décomposition de Hoeffding de m et permet d'estimer ses indices de Sobol. Il appartient à un espace de Hilbert à noyau auto-reproduisant (RKHS), noté H, qui est construit comme une somme directe d'espaces de Hilbert (Durrande et al. (2013)). L'estimateur du méta-modèle, noté f^, est calculé en minimisant un critère des moindres carrés pénalisé par la somme de la norme de Hilbert et de la norme empirique L2 (Huet and Taupin (2017)). Cette procédure, appelée RKHS ridge groupe sparse, permet à la fois de sélectionner et d'estimer les termes de la décomposition de Hoeffding, et donc de sélectionner les indices de Sobol non-nuls et de les estimer. Il permet d'estimer les indices de Sobol même d'ordre élevé, un point connu pour être difficile à mettre en pratique.Ce travail se compose d'une partie théorique et d'une partie pratique. Dans la partie théorique, j'ai établi les majorations du risque empirique L2 et du risque quadratique de l'estimateur f^ d'un modèle de régression où l'erreur est non-gaussienne et non-bornée. Il s'agit des bornes supérieures par rapport à la norme empirique L2 et à la norme L2 pour la distance entre le modèle m et son estimation f^ dans le RKHS H. Dans la partie pratique, j'ai développé un package R appelé RKHSMetaMod, pour la mise en œuvre des méthodes d'estimation du méta-modèle f∗ de m. Ce package s'applique indifféremment dans le cas où le modèle m est calculable et le cas du modèle de régression. Afin d'optimiser le temps de calcul et la mémoire de stockage, toutes les fonctions de ce package ont été écrites en utilisant les bibliothèques GSL et Eigen de C++ à l'exception d'une fonction qui est écrite en R. Elles sont ensuite interfacées avec l'environnement R afin de proposer un package facilement exploitable aux utilisateurs. La performance des fonctions du package en termes de qualité prédictive de l'estimateur et de l'estimation des indices de Sobol, est validée par une étude de simulation
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, allows the terms in the Hoeffding decomposition to be both selected and estimated, and therefore the non-zero Sobol indices to be selected and estimated. It makes it possible to estimate even high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f^, that is, upper bounds with respect to the L2-norm and the empirical L2-norm for the distance between the model m and its estimator f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points or to an unknown regression model. In order to optimize the execution time and the storage memory, all of the functions of the RKHSMetaMod package, except for one function written in R, are written using the C++ libraries GSL and Eigen. These functions are then interfaced with the R environment in order to provide a user-friendly package. The performance of the package functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
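The Sobol indices targeted by the meta-model can be illustrated with a simple Monte Carlo pick-freeze estimator on the standard Ishigami test function. This Python sketch only shows what a first-order Sobol index is; it is not the RKHS ridge group sparse procedure implemented in RKHSMetaMod, and the sample size is an arbitrary assumption.

import numpy as np

rng = np.random.default_rng(5)

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function, a standard benchmark for Sobol index estimation."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA = ishigami(A)
var_y = fA.var()

# Pick-freeze estimator of the first-order index S_i: freeze X_i (shared between the
# two designs), resample the other inputs, and estimate Cov(f(A), f(C_i)) / Var(f).
for i in range(d):
    C = B.copy()
    C[:, i] = A[:, i]
    fC = ishigami(C)
    S_i = (np.mean(fA * fC) - np.mean(fA) * np.mean(fC)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
# Analytical values for comparison: S_1 ~ 0.314, S_2 ~ 0.442, S_3 = 0.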
Styles APA, Harvard, Vancouver, ISO, etc.
41

Magnant, Clément. « Approches bayésiennes pour le pistage radar de cibles de surface potentiellement manoeuvrantes ». Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0136/document.

Texte intégral
Résumé :
Dans le cadre de la surveillance maritime ou terrestre par radar aéroporté, l’un des principaux objectifs est de détecter et de poursuivre une grande diversité de cibles au cours du temps.Ces traitements s’appuient généralement sur l’utilisation d’un filtre Bayésien pour estimer récursivement les paramètres cinématiques (position, vitesse et accélération) des cibles. Il repose surla représentation dans l’espace d’état du système et plus particulièrement sur la modélisation a priori de l’évolution des cibles à partir d’un modèle de mouvement (mouvement rectiligne uniforme, mouvement uniformément accéléré, mouvement rotationnel, etc.). Si les cibles pistées sont manoeuvrantes, plusieurs modèles de mouvement, chacun avec une dynamique prédéfinie,sont classiquement combinés au sein d’une structure à modèles multiples. Même si ces approches s’avèrent pertinentes, des améliorations peuvent être apportées à plusieurs niveaux, notamment sur la manière de sélectionner et définir a priori les modèles utilisés.Dans ce contexte d’étude, plusieurs problématiques doivent être traitées.1/ Lors de l’utilisation d’une structure à modèles multiples, on considère en général deux à trois modèles. Ce choix est fait lors de la phase de conception de l’algorithme selon la connaissance du système et l’expertise de l’utilisateur. Cependant, il n’existe pas à notre connaissance d’outils ou de règles permettant de définir les types de mouvement à associer et leurs paramètres.2/ Il est préférable que le choix du ou des modèles de mouvement soit cohérent avec le type de cible pisté.3/ Lorsqu’un type de mouvement est utilisé, ses paramètres sont fixés a priori mais ces valeurs ne sont pas nécessairement adaptées à toutes les phases du mouvement. L’une des difficultés majeures réside dans la manière de définir et de faire évoluer la matrice de covariance du bruit de modèle. Le travail présenté dans ce mémoire vise à proposer des solutions algorithmiques aux problématiques précédentes afin d’améliorer l’estimation des trajectoires des cibles d’intérêt.Dans un premier temps, nous établissons une mesure de dissimilarité fondée sur la divergence de Jeffrey entre deux densités de probabilité associés à deux modèles d’état différents. Celle-ci est appliquée à la comparaison de modèles de mouvement. Elle est ensuite utilisée pour comparer un ensemble de plusieurs modèles d’état. Cette étude est alors mise à profit pour proposer une méthode de sélection a priori des modèles constituant des algorithmes à modèles multiples.Puis, nous présentons des modèles Bayésiens non-paramétriques (BNP) utilisant les processus de Dirichlet pour estimer les statistiques du bruit de modèle. Cette modélisation a l’avantage de pouvoir représenter des bruits multimodaux sans avoir à spécifier a priori le nombre de modes et leurs caractéristiques. Deux cas sont traités. Dans le premier, on estime la matrice de précision du bruit de modèle d’un unique modèle de mouvement sans émettre d’a priori sur sa structure.Dans le second, nous tirons profit des formes structurelles des matrices de précision associées aux modèles de mouvement pour n’estimer qu’un nombre réduit d’hyperparamètres. Pour les deux approches, l’estimation conjointe des paramètres cinématiques de la cible et de la matrice de précision du bruit de modèle est réalisée par filtrage particulaire. 
Les contributions apportées sont notamment le calcul de la distribution d’importance optimale dans chacun des cas.Enfin, nous tirons profit des méthodes dites de classification et pistage conjoints (joint tracking and classification -JTC-) pour mener simultanément la classification de la cible et l’inférence de ses paramètres. Dans ce cas, à chaque classe de cible est associé un ensemble de modèles d’évolution qui lui est propre. [...]
As part of ground or maritime surveillance using airborne radars, one of the main objectives is to detect and track a wide variety of targets over time. These treatments are generally based on Bayesian filtering to estimate recursively the kinematic parameters (position, velocity and acceleration) of the targets. It relies on the state-space representation and more particularly on the prior modeling of the target evolutions (uniform motion, uniformly accelerated motion, rotational movement, etc.). If maneuvering targets are tracked, several motion models, each with a predefined dynamic, are typically combined in a multiple-model structure. Although these approaches are relevant, improvements can be made at several levels, including how to select and define a priori the models to be used. In this framework, several issues must be addressed. 1/ When using a multiple-model structure, two to three models are generally considered. This choice is made at the algorithm design stage according to the system knowledge and the user expertise. However, to our knowledge there are no tools or rules to define the types of motion and their associated parameters. 2/ It is preferable that the choice of the motion model(s) be consistent with the type of target to be tracked. 3/ When a type of motion model is used, its parameters are fixed a priori, but these values are not necessarily appropriate in all phases of the movement. One of the major challenges is how to define the covariance matrix of the model noise and to model its evolution. The work presented in this thesis consists of algorithmic solutions to the previous problems in order to improve the estimation of target trajectories. First, we establish a dissimilarity measure based on the Jeffreys divergence between probability densities associated with two different state models. It is applied to the comparison of motion models and then used to compare a set of several state models. This study is then exploited to provide a method for the a priori selection of the models constituting multiple-model algorithms. Then we present non-parametric Bayesian (BNP) models using the Dirichlet process to estimate the model noise statistics. This modeling has the advantage of representing multimodal noises without specifying a priori the number of modes and their features. Two cases are treated. In the first one, the model noise precision matrix is estimated for a single motion model without any a priori assumption on its structure. In the second one, we take advantage of the structural forms of the precision matrices associated with the motion models to estimate only a small number of hyperparameters. For both approaches, the joint estimation of the kinematic parameters of the target and of the precision matrix of the model noise is carried out by particle filtering. The contributions include the calculation of the optimal importance distribution in each case. Finally, we take advantage of methods known as joint tracking and classification (JTC) to carry out simultaneously the classification of the target and the inference of its parameters. In this case, each target class is associated with a set of evolution models. In order to achieve the classification, we use the target position measurements and the target extent measurements corresponding to the projection of the target length on the radar-target line of sight. Note that this approach is applied in a single-target tracking context and in a multiple-target environment.
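The dissimilarity measure mentioned above can be illustrated, in the common case where the compared state models yield Gaussian predicted densities, by the closed-form Jeffreys (symmetrised Kullback-Leibler) divergence between two multivariate Gaussians. In this Python sketch the means and covariances are illustrative assumptions, not the thesis's motion models.

import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL divergence KL(N(m0, S0) || N(m1, S1)) between multivariate Gaussians."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def jeffreys(m0, S0, m1, S1):
    """Symmetrised (Jeffreys) divergence: KL(p||q) + KL(q||p)."""
    return kl_gauss(m0, S0, m1, S1) + kl_gauss(m1, S1, m0, S0)

# Illustrative predicted densities (position, velocity) under two motion models with the
# same mean but different process-noise covariances (assumed values).
m = np.array([100.0, 10.0])
S_cv = np.array([[4.0, 0.5],
                 [0.5, 1.0]])          # "nearly constant velocity" model
S_manoeuvre = np.array([[9.0, 2.0],
                        [2.0, 4.0]])   # more diffuse model for manoeuvring targets

print("Jeffreys divergence between the two models: %.3f" % jeffreys(m, S_cv, m, S_manoeuvre))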
Styles APA, Harvard, Vancouver, ISO, etc.
42

Cardozo, Sandra Vergara. « Função da probabilidade da seleção do recurso (RSPF) na seleção de habitat usando modelos de escolha discreta ». Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-11032009-143806/.

Texte intégral
Résumé :
Em ecologia, o comportamento dos animais é freqüentemente estudado para entender melhor suas preferências por diferentes tipos de alimento e habitat. O presente trabalho esta relacionado a este tópico, dividindo-se em três capítulos. O primeiro capitulo refere-se à estimação da função da probabilidade da seleção de recurso (RSPF) comparado com um modelo de escolha discreta (DCM) com uma escolha, usando as estatísticas qui-quadrado para obter as estimativas. As melhores estimativas foram obtidas pelo método DCM com uma escolha. No entanto, os animais não fazem a sua seleção baseados apenas em uma escolha. Com RSPF, as estimativas de máxima verossimilhança, usadas pela regressão logística ainda não atingiram os objetivos, já que os animais têm mais de uma escolha. R e o software Minitab e a linguagem de programação Fortran foram usados para obter os resultados deste capítulo. No segundo capítulo discutimos mais a verossimilhança do primeiro capítulo. Uma nova verossimilhança para a RSPF é apresentada, a qual considera as unidades usadas e não usadas, e métodos de bootstrapping paramétrico e não paramétrico são usados para estudar o viés e a variância dos estimadores dos parâmetros, usando o programa FORTRAN para obter os resultados. No terceiro capítulo, a nova verossimilhança apresentada no capítulo 2 é usada com um modelo de escolha discreta, para resolver parte do problema apresentado no primeiro capítulo. A estrutura de encaixe é proposta para modelar a seleção de habitat de 28 corujas manchadas (Strix occidentalis), assim como a uma generalização do modelo logit encaixado, usando a maximização da utilidade aleatória e a RSPF aleatória. Métodos de otimização numérica, e o sistema computacional SAS, são usados para estimar os parâmetros de estrutura de encaixe.
In ecology, the behavior of animals is often studied to better understand their preferences for different types of habitat and food. The present work is concerned with this topic and is divided into three chapters. The first concerns the estimation of a resource selection probability function (RSPF) compared with a single-choice discrete choice model (DCM), using chi-squared statistics to obtain the estimates. The best estimates were obtained by the single-choice DCM method. Nevertheless, animals do not make their selection based on a single choice. With the RSPF, the maximum likelihood estimates used with the logistic regression still did not reach the objectives, since the animals have more than one choice. The R and Minitab software and the FORTRAN programming language were used for the computations in this chapter. The second chapter discusses further the likelihood presented in the first chapter. A new likelihood for an RSPF is presented, which takes into account the used and unused units, and parametric and non-parametric bootstrapping are employed to study the bias and variance of the parameter estimators, using a FORTRAN program for the calculations. In the third chapter, the new likelihood presented in chapter 2 is used together with a discrete choice model to resolve part of the problem presented in the first chapter. A nested structure is proposed for modelling habitat selection by 28 spotted owls (Strix occidentalis), as well as a generalized nested logit model using random utility maximization and a random RSPF. Numerical optimization methods and the SAS system were employed to estimate the nested structural parameters.
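A minimal Python sketch of the logistic form of a resource selection probability function fitted to used versus available units by maximum likelihood; the covariates and coefficients are simulated assumptions, and this does not reproduce the new likelihood or the nested logit model developed in the thesis.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# Illustrative used/available design: each unit has two habitat covariates, and the
# (assumed) true resource selection probability is logistic in those covariates.
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([-1.0, 1.5, -0.8])
used = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))   # 1 = used, 0 = available/unused

def neg_log_lik(beta):
    """Negative Bernoulli log-likelihood of the logistic RSPF."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - used * eta)

fit = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 2))
print("true coefficients     :", true_beta)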
Styles APA, Harvard, Vancouver, ISO, etc.
43

Wu, Seung Kook. « Adaptive traffic control effect on arterial travel time charateristics ». Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31839.

Texte intégral
Résumé :
Thesis (Ph.D)--Civil and Environmental Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Hunter, Michael; Committee Member: Guensler, Randall; Committee Member: Leonard, John; Committee Member: Rodgers, Michael; Committee Member: Roshan J. Vengazhiyil. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Sjöwall, Fredrik. « Alternative Methods for Value-at-Risk Estimation : A Study from a Regulatory Perspective Focused on the Swedish Market ». Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146217.

Texte intégral
Résumé :
The importance of sound financial risk management has become increasingly emphasised in recent years, especially with the financial crisis of 2007-08. The Basel Committee sets the international standards and regulations for banks and financial institutions, and in particular under market risk, it prescribes the internal application of the measure Value-at-Risk. However, the most established non-parametric Value-at-Risk model, historical simulation, has been criticised for some of its unrealistic assumptions. This thesis investigates alternative approaches for estimating non-parametric Value-at-Risk by examining and comparing the capability of three counterbalancing weighting methodologies for historical simulation: an exponentially decreasing time weighting approach, a volatility updating method and, lastly, a more general weighting approach that enables the specification of central moments of a return distribution. With real financial data, the models are evaluated from a performance-based perspective, in terms of accuracy and capital efficiency, but also in terms of their regulatory suitability, with a particular focus on the Swedish market. The empirical study shows that the capability of historical simulation is improved significantly, from both performance perspectives, by the implementation of a weighting methodology. Furthermore, the results predominantly indicate that the volatility updating model with a 500-day historical observation window is the most adequate weighting methodology in all incorporated aspects. The findings of this paper offer significant input both to existing research on Value-at-Risk and to the quality of the internal market risk management of banks and financial institutions.
Betydelsen av sund finansiell riskhantering har blivit alltmer betonad på senare år, i synnerhet i och med finanskrisen 2007-08. Baselkommittén fastställer internationella normer och regler för banker och finansiella institutioner, och särskilt under marknadsrisk föreskriver de intern tillämpning av måttet Value-at-Risk. Däremot har den mest etablerade icke-parametriska Value-at-Risk-modellen, historisk simulering, kritiserats för några av dess orealistiska antaganden. Denna avhandling undersöker alternativa metoder för att beräkna icke-parametrisk Value-at‑Risk, genom att granska och jämföra prestationsförmågan hos tre motverkande viktningsmetoder för historisk simulering: en exponentiellt avtagande tidsviktningsteknik, en volatilitetsuppdateringsmetod, och slutligen ett mer generellt tillvägagångssätt för viktning som möjliggör specifikation av en avkastningsfördelnings centralmoment. Modellerna utvärderas med verklig finansiell data ur ett prestationsbaserat perspektiv, utifrån precision och kapitaleffektivitet, men också med avseende på deras lämplighet i förhållande till existerande regelverk, med särskilt fokus på den svenska marknaden. Den empiriska studien visar att prestandan hos historisk simulering förbättras avsevärt, från båda prestationsperspektiven, genom införandet av en viktningsmetod. Dessutom pekar resultaten i huvudsak på att volatilitetsuppdateringsmodellen med ett 500 dagars observationsfönster är den mest användbara viktningsmetoden i alla berörda aspekter. Slutsatserna i denna uppsats bidrar i väsentlig grad både till befintlig forskning om Value-at-Risk, liksom till kvaliteten på bankers och finansiella institutioners interna hantering av marknadsrisk.
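As an illustration of the first weighting scheme, here is a short Python sketch of historical simulation with exponentially decreasing age weights (in the spirit of the Boudoukh-Richardson-Whitelaw approach); the decay factor, window length and simulated returns are hypothetical and not taken from the thesis.

import numpy as np

def weighted_hs_var(returns, alpha=0.01, lam=0.98):
    """Value-at-Risk by historical simulation with exponentially decaying age weights.

    returns: 1-D array ordered from oldest to newest observation.
    alpha:   tail probability (e.g. 0.01 for 99% VaR).
    lam:     decay factor in (0, 1); values close to 1 approach equal weighting.
    """
    n = len(returns)
    ages = np.arange(n - 1, -1, -1)               # newest observation has age 0
    w = lam ** ages * (1 - lam) / (1 - lam ** n)  # normalised geometric weights, sum to 1
    order = np.argsort(returns)                   # sort from worst to best return
    cum_w = np.cumsum(w[order])
    var_return = returns[order][np.searchsorted(cum_w, alpha)]
    return -var_return                            # report VaR as a positive loss

# Hypothetical daily returns over a 500-day observation window.
rng = np.random.default_rng(1)
rets = rng.standard_t(df=5, size=500) * 0.01
print("99% one-day VaR:", round(weighted_hs_var(rets), 4))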
Styles APA, Harvard, Vancouver, ISO, etc.
45

Salloum, Zahraa. « Maximum de vraisemblance empirique pour la détection de changements dans un modèle avec un nombre faible ou très grand de variables ». Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1008/document.

Texte intégral
Résumé :
Cette thèse est consacrée à tester la présence de changements dans les paramètres d'un modèle de régression non-linéaire ainsi que dans un modèle de régression linéaire en très grande dimension. Tout d'abord, nous proposons une méthode basée sur la vraisemblance empirique pour tester la présence de changements dans les paramètres d'un modèle de régression non-linéaire. Sous l'hypothèse nulle, nous prouvons la consistance et la vitesse de convergence des estimateurs des paramètres de régression. La loi asymptotique de la statistique de test sous l'hypothèse nulle nous permet de trouver la valeur critique asymptotique. D'autre part, nous prouvons que la puissance asymptotique de la statistique de test proposée est égale à 1. Le modèle épidémique avec deux points de rupture est également étudié. Ensuite, on s'intéresse à construire les régions de confiance asymptotiques pour la différence entre les paramètres de deux phases d'un modèle non-linéaire avec des regresseurs aléatoires en utilisant la méthode de vraisemblance empirique. On montre que le rapport de la vraisemblance empirique a une distribution asymptotique χ2. La méthode de vraisemblance empirique est également utilisée pour construire les régions de confiance pour la différence entre les paramètres des deux phases d'un modèle non-linéaire avec des variables de réponse manquantes au hasard (Missing At Random (MAR)). Afin de construire les régions de confiance du paramètre en question, on propose trois statistiques de vraisemblance empirique : la vraisemblance empirique basée sur les données cas-complète, la vraisemblance empirique pondérée et la vraisemblance empirique par des valeurs imputées. On prouve que les trois rapports de vraisemblance empirique ont une distribution asymptotique χ2. Un autre but de cette thèse est de tester la présence d'un changement dans les coefficients d'un modèle linéaire en grande dimension, où le nombre des variables du modèle peut augmenter avec la taille de l'échantillon. Ce qui conduit à tester l'hypothèse nulle de non-changement contre l'hypothèse alternative d'un seul changement dans les coefficients de régression. Basée sur les comportements asymptotiques de la statistique de rapport de vraisemblance empirique, on propose une simple statistique de test qui sera utilisée facilement dans la pratique. La normalité asymptotique de la statistique de test proposée sous l'hypothèse nulle est prouvée. Sous l'hypothèse alternative, la statistique de test diverge
In this PhD thesis, we propose a nonparametric method based on empirical likelihood for detecting a change in the parameters of nonlinear regression models and a change in the coefficients of linear regression models when the number of model variables may increase with the sample size. Firstly, we test the null hypothesis of no change against the alternative of one change in the regression parameters. Under the null hypothesis, the consistency and the convergence rate of the regression parameter estimators are proved. The asymptotic distribution of the test statistic under the null hypothesis is obtained, which allows the asymptotic critical value to be found. On the other hand, we prove that the proposed test statistic has asymptotic power equal to 1. The epidemic model, a particular case of a model with two change-points, is also studied under the alternative hypothesis. Afterwards, we use the empirical likelihood method to construct confidence regions for the difference between the parameters of a two-phase nonlinear model with random design. We show that the empirical likelihood ratio has an asymptotic χ2 distribution. The empirical likelihood method is also used to construct confidence regions for the difference between the parameters of a two-phase nonlinear model with response variables missing at random (MAR). In order to construct the confidence regions of the parameter in question, we propose three empirical likelihood statistics: empirical likelihood based on complete-case data, weighted empirical likelihood and empirical likelihood with imputed values. We prove that all three empirical likelihood ratios have asymptotic χ2 distributions. Another aim of this thesis is to test for a change in the coefficients of a linear regression model in the high-dimensional setting. This amounts to testing the null hypothesis of no change against the alternative of one change in the regression coefficients. Based on the theoretical asymptotic behaviour of the empirical likelihood ratio statistic, we propose, for a deterministic design, a simpler test statistic that is easier to use in practice. The asymptotic normality of the proposed test statistic under the null hypothesis is proved, a result which differs from the χ2 law obtained for a model with a fixed number of variables. Under the alternative hypothesis, the test statistic diverges.
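For readers unfamiliar with the machinery, the sketch below computes the basic empirical likelihood ratio statistic for a univariate mean in Python; it is not the change-point statistic of the thesis, but it illustrates the dual optimisation on which such tests are built, with hypothetical data and candidate value.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for the mean of x at the candidate value mu.

    Uses the dual representation -2 log R(mu) = 2 * max_l sum_i log(1 + l*(x_i - mu)),
    with l constrained so that every implied weight stays positive.
    """
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                       # mu outside the convex hull of the data
    eps = 1e-10
    lo = -1.0 / d.max() + eps               # keep 1 + l*d_i > 0 for all i
    hi = -1.0 / d.min() - eps
    obj = lambda l: -np.sum(np.log1p(l * d))
    res = minimize_scalar(obj, bounds=(lo, hi), method="bounded")
    return -2.0 * res.fun

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=80)
stat = neg2_log_elr(x, mu=1.0)
print("-2 log ELR:", round(stat, 3), "  p-value:", round(chi2.sf(stat, df=1), 3))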
Styles APA, Harvard, Vancouver, ISO, etc.
46

Koch, Erwan. « Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance ». Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10138/document.

Texte intégral
Résumé :
Cette thèse s’attache à développer des outils et modèles adaptés a l’étude de certains risques spatiaux et en réseaux. Elle est divisée en cinq chapitres. Le premier consiste en une introduction générale, contenant l’état de l’art au sein duquel s’inscrivent les différents travaux, ainsi que les principaux résultats obtenus. Le Chapitre 2 propose un nouveau générateur de précipitations multi-site. Il est important de disposer de modèles capables de produire des séries de précipitations statistiquement réalistes. Alors que les modèles précédemment introduits dans la littérature concernent essentiellement les précipitations journalières, nous développons un modèle horaire. Il n’implique qu’une seule équation et introduit ainsi une dépendance entre occurrence et intensité, processus souvent considérés comme indépendants dans la littérature. Il comporte un facteur commun prenant en compte les conditions atmosphériques grande échelle et un terme de contagion auto-regressif multivarié, représentant la propagation locale des pluies. Malgré sa relative simplicité, ce modèle reproduit très bien les intensités, les durées de sècheresse ainsi que la dépendance spatiale dans le cas de la Bretagne Nord. Dans le Chapitre 3, nous proposons une méthode d’estimation des processus maxstables, basée sur des techniques de vraisemblance simulée. Les processus max-stables sont très adaptés à la modélisation statistique des extrêmes spatiaux mais leur estimation s’avère délicate. En effet, la densité multivariée n’a pas de forme explicite et les méthodes d’estimation standards liées à la vraisemblance ne peuvent donc pas être appliquées. Sous des hypothèses adéquates, notre estimateur est efficace quand le nombre d’observations temporelles et le nombre de simulations tendent vers l’infini. Cette approche par simulation peut être utilisée pour de nombreuses classes de processus max-stables et peut fournir de meilleurs résultats que les méthodes actuelles utilisant la vraisemblance composite, notamment dans le cas où seules quelques observations temporelles sont disponibles et où la dépendance spatiale est importante
This thesis aims at developing tools and models that are relevant for the study of some spatial risks and risks in networks. The thesis is divided into five chapters. The first one is a general introduction containing the state of the art related to each study as well as the main results. Chapter 2 develops a new multi-site precipitation generator. It is crucial to have models able to produce statistically realistic precipitation series. Whereas previously introduced models in the literature deal with daily precipitation, we develop an hourly model. The latter involves only one equation and thus introduces dependence between occurrence and intensity, processes that the aforementioned literature assumes to be independent. Our model contains a common factor taking large-scale atmospheric conditions into account and a multivariate autoregressive contagion term accounting for the local propagation of rainfall. Despite its relative simplicity, this model shows an impressive ability to reproduce real intensities, lengths of dry periods and the spatial dependence structure. In Chapter 3, we propose an estimation method for max-stable processes based on simulated likelihood techniques. Max-stable processes are ideally suited for the statistical modeling of spatial extremes, but their inference is difficult. Indeed, the multivariate density function is not available and thus standard likelihood-based estimation methods cannot be applied. Under appropriate assumptions, our estimator is efficient as both the temporal dimension and the number of simulation draws tend towards infinity. This approach by simulation can be used for many classes of max-stable processes and can provide better results than composite-likelihood methods, especially when only a few temporal observations are available and the spatial dependence is high.
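A toy Python sketch of the single-equation idea from Chapter 2 is given below: hourly rainfall at each site is driven by a shared large-scale factor plus an autoregressive contagion term from the previous hour, truncated at zero so that occurrence and intensity arise from the same equation. All parameter values and the site layout are hypothetical and are not those of the thesis.

import numpy as np

rng = np.random.default_rng(3)
n_sites, n_hours = 4, 1000

# Hypothetical parameters: dry-bias intercept, loading on the large-scale factor,
# contagion matrix (own site and neighbours), noise scale.
a = -1.0
b = 1.5
C = 0.25 * np.eye(n_sites) + 0.05
sigma = 0.8

# Hypothetical large-scale atmospheric forcing with a diurnal cycle.
large_scale = 0.6 * np.sin(2 * np.pi * np.arange(n_hours) / 24) + rng.normal(0, 0.3, n_hours)

rain = np.zeros((n_hours, n_sites))
for t in range(1, n_hours):
    latent = a + b * large_scale[t] + C @ rain[t - 1] + rng.normal(0, sigma, n_sites)
    rain[t] = np.maximum(latent, 0.0)   # truncation: zero = dry hour, positive = intensity

print("fraction of wet hours per site:", (rain > 0).mean(axis=0).round(2))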
Styles APA, Harvard, Vancouver, ISO, etc.
47

Hadrich, Ben Arab Atizez. « Étude des fonctions B-splines pour la fusion d'images segmentées par approche bayésienne ». Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0385/document.

Texte intégral
Résumé :
Dans cette thèse nous avons traité le problème de l'estimation non paramétrique des lois de probabilités. Dans un premier temps, nous avons supposé que la densité inconnue f a été approchée par un mélange de base B-spline quadratique. Puis, nous avons proposé un nouvel estimateur de la densité inconnue f basé sur les fonctions B-splines quadratiques, avec deux méthodes d'estimation. La première est base sur la méthode du maximum de vraisemblance et la deuxième est basée sur la méthode d'estimation Bayésienne MAP. Ensuite, nous avons généralisé notre étude d'estimation dans le cadre du mélange et nous avons proposé un nouvel estimateur du mélange de lois inconnues basé sur les deux méthodes d'estimation adaptées. Dans un deuxième temps, nous avons traité le problème de la segmentation statistique semi supervisée des images en se basant sur le modèle de Markov caché et les fonctions B-splines. Nous avons montré l'apport de l'hybridation du modèle de Markov caché et les fonctions B-splines en segmentation statistique bayésienne semi supervisée des images. Dans un troisième temps, nous avons présenté une approche de fusion basée sur la méthode de maximum de vraisemblance, à travers l'estimation non paramétrique des probabilités, pour chaque pixel de l'image. Nous avons ensuite appliqué cette approche sur des images multi-spectrales et multi-temporelles segmentées par notre algorithme non paramétrique et non supervisé
In this thesis we treat the problem of nonparametric estimation of probability distributions. First, we assume that the unknown density f is approximated by a mixture of quadratic B-spline basis functions. We then propose a new estimator of the unknown density f based on quadratic B-splines, with two estimation methods: the first is based on the maximum likelihood method and the second on the Bayesian MAP estimation method. We then generalize this estimation study to the mixture setting and propose a new estimator of a mixture of unknown distributions based on the two adapted estimation methods. Second, we treat the problem of semi-supervised statistical image segmentation based on the hidden Markov model and B-spline functions. We show the contribution of hybridizing the hidden Markov model with B-spline functions for semi-supervised Bayesian statistical image segmentation. Third, we present a fusion approach based on the maximum likelihood method, through the nonparametric estimation of the probabilities for each pixel of the image. We then apply this approach to multi-spectral and multi-temporal images segmented by our nonparametric, unsupervised algorithm.
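The following Python sketch illustrates the first estimation method only: a density written as a mixture of normalised quadratic B-spline basis functions, with the mixture weights fitted by maximum likelihood. The knot grid, sample and optimiser settings are illustrative assumptions, not those of the thesis.

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.beta(2, 5, size=400)            # hypothetical sample on [0, 1]

# Quadratic (degree-2) B-spline basis on an equally spaced, clamped knot vector over [0, 1].
k = 2
interior = np.linspace(0, 1, 11)
knots = np.concatenate([[0] * k, interior, [1] * k])
n_basis = len(knots) - k - 1

def basis_matrix(pts):
    """Evaluate each B-spline basis function, normalised so that it integrates to 1."""
    cols = []
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        bj = BSpline(knots, coef, k, extrapolate=False)(pts)
        area = (knots[j + k + 1] - knots[j]) / (k + 1)   # integral of a degree-k B-spline
        cols.append(np.nan_to_num(bj) / area)
    return np.column_stack(cols)

B = basis_matrix(x)

def nll(theta):
    w = np.exp(theta - theta.max())
    w /= w.sum()                          # softmax keeps the mixture weights on the simplex
    return -np.sum(np.log(B @ w + 1e-300))

theta_hat = minimize(nll, np.zeros(n_basis), method="BFGS").x
w_hat = np.exp(theta_hat - theta_hat.max()); w_hat /= w_hat.sum()

grid = np.linspace(0.05, 0.95, 5)
print("estimated density on a coarse grid:", (basis_matrix(grid) @ w_hat).round(2))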
Styles APA, Harvard, Vancouver, ISO, etc.
48

Caron, Emmanuel. « Comportement des estimateurs des moindres carrés du modèle linéaire dans un contexte dépendant : Étude asymptotique, implémentation, exemples ». Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0036.

Texte intégral
Résumé :
Dans cette thèse, nous nous intéressons au modèle de régression linéaire usuel dans le cas où les erreurs sont supposées strictement stationnaires. Nous utilisons un résultat de Hannan (1973) qui a prouvé un Théorème Limite Central pour l’estimateur des moindres carrés sous des conditions très générales sur le design et le processus des erreurs. Pour un design et un processus d’erreurs vérifiant les conditions d’Hannan, nous définissons un estimateur de la matrice de covariance asymptotique de l’estimateur des moindres carrés et nous prouvons sa consistance sous des conditions très générales. Ensuite nous montrons comment modifier les tests usuels sur le paramètre du modèle linéaire dans ce contexte dépendant. Nous proposons différentes approches pour estimer la matrice de covariance afin de corriger l’erreur de première espèce des tests. Le paquet R slm que nous avons développé contient l’ensemble de ces méthodes statistiques. Les procédures sont évaluées à travers différents ensembles de simulations et deux exemples particuliers de jeux de données sont étudiés. Enfin, dans le dernier chapitre, nous proposons une méthode non-paramétrique par pénalisation pour estimer la fonction de régression dans le cas où les erreurs sont gaussiennes et corrélées
In this thesis, we consider the usual linear regression model in the case where the error process is assumed strictly stationary. We use a result from Hannan (1973), who proved a Central Limit Theorem for the usual least squares estimator under general conditions on the design and on the error process. For any design and error process satisfying Hannan's conditions, we define an estimator of the asymptotic covariance matrix of the least squares estimator and we prove its consistency under very mild conditions. Then we show how to modify the usual tests on the parameters of the linear model in this dependent context. We propose various methods to estimate the covariance matrix in order to correct the type I error rate of the tests. The R package slm that we have developed contains all of these statistical methods. The procedures are evaluated through different sets of simulations, and two particular examples of datasets are studied. Finally, in the last chapter, we propose a non-parametric penalized method to estimate the regression function in the case where the errors are Gaussian and correlated.
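The sketch below is not the slm estimator itself, but a standard Bartlett-kernel (Newey-West style) correction of the ordinary least squares covariance matrix written in Python; it illustrates, on a hypothetical AR(1) error process, why the naive i.i.d. standard errors must be modified in this dependent context. The truncation lag is an arbitrary tuning choice.

import numpy as np

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = np.zeros(n)
for t in range(1, n):                       # hypothetical AR(1) error process
    eps[t] = 0.6 * eps[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + eps

beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive covariance, valid only for i.i.d. errors.
cov_iid = e @ e / (n - X.shape[1]) * XtX_inv

# Bartlett-kernel correction of the "meat" of the sandwich for dependent errors.
L = 10                                      # truncation lag
U = X * e[:, None]                          # score contributions x_t * e_t
S = U.T @ U
for j in range(1, L + 1):
    w = 1 - j / (L + 1)
    G = U[j:].T @ U[:-j]
    S += w * (G + G.T)
cov_hac = XtX_inv @ S @ XtX_inv

print("slope s.e. (iid):", np.sqrt(cov_iid[1, 1]).round(4),
      " slope s.e. (HAC):", np.sqrt(cov_hac[1, 1]).round(4))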
Styles APA, Harvard, Vancouver, ISO, etc.
49

Rodrigues, Christelle. « Optimisation des posologies des antiépileptiques chez l’enfant à partir de données pharmacocinétiques pédiatriques et adultes Population pharmacokinetics of oxcarbazepine and its monohydroxy derivative in epileptic children A population pharmacokinetic model taking into account protein binding for the sustained-release granule formulation of valproic acid in children with epilepsy Conditional non-parametric bootstrap for non-linear mixed effect models Pharmacokinetics evaluation of vigabatrin dose for the treatment of refractory focal seizures in children using adult and pediatric data Pharmacokinetic extrapolation from adult to children in case of nonlinear elimination : a case study ». Thesis, Sorbonne Paris Cité, 2018. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=2398&f=17336.

Texte intégral
Résumé :
Les enfants diffèrent des adultes non seulement en termes de dimension corporelle mais aussi en termes physiologiques. En effet, les phénomènes de développement et maturation interviennent au cours de la croissance. Ces processus ne sont pas linéaires et induisent des différences pharmacocinétiques et pharmacodynamiques. Ainsi, contrairement à la pratique commune, il n’est pas approprié de déterminer les posologies pédiatriques directement à partir des doses adultes. Étudier la pharmacocinétique chez l’enfant est fondamental pour pouvoir déterminer les posologies à administrer. La méthodologie idéale est l’analyse de population à travers des modèles non-linéaires à effets mixtes. Cependant, même si cette méthode permet l’analyse de données éparses et déséquilibrées, le manque de données individuelles doit être compensé par l’inclusion de plus d’individus. Cela pose un problème lorsque l’indication du traitement est une maladie rare, comme le sont les syndromes épileptiques de l’enfance. Dans ce cas, l’extrapolation de modèles adultes à la population pédiatrique peut s’avérer avantageuse. L’objectif de ce travail de thèse était d’évaluer les recommandations posologiques d’antiépileptiques lorsque des données pharmacocinétiques pédiatriques sont suffisamment informatives pour permettre la construction d’un modèle, ou lorsque celles-ci ne sont pas suffisamment importantes ou ne peuvent pas être exploitées correctement. Dans un premier temps, un modèle parent-métabolite de l’oxcarbazépine et de son dérivé mono-hydroxylé (MHD) a été développé chez l’enfant épileptique âgé de 2 à 12 ans. Ce modèle a permis de mettre en évidence que les plus jeunes enfants nécessitent des doses plus élevées, ainsi que les patients co-traités avec des inducteurs enzymatiques. Un modèle a aussi été développé pour les enfants épileptiques de 1 à 18 ans traités avec la formulation de microsphères à libération prolongée d’acide valproïque. Ce modèle a tenu en compte le flip-flop associé à la formulation et la relation non-linéaire entre la clairance et la dose due à la liaison protéique saturable de façon mécanistique. Encore une fois, il a été mis en évidence le besoin de doses plus élevées pour les enfants plus jeunes. Puis, un modèle adulte du vigabatrin a été extrapolé à l’enfant pour déterminer les posologies permettant d’atteindre des expositions similaires à l’adulte pour traiter les épilepsies focales résistantes. A partir des résultats obtenus, qui sont en accord avec les conclusions d’essais cliniques, nous avons pu proposer une dose de maintenance idéale dans cette indication. Enfin, nous avons étudié la pertinence de l’extrapolation par allométrie théorique dans un contexte de non-linéarité avec l’exemple du stiripentol. Nous avons pu en conclure que cette méthode semble apporter de bonnes prédictions à partir de l’âge de 8 ans, contrairement aux molécules à élimination linéaire où cela semble correct à partir de 5 ans. En conclusion, nous avons pu tester et comparer différentes approches pour aider à la détermination de recommandations posologiques chez l’enfant. L’étude de la pharmacocinétique pédiatrique par des essais spécifiques reste indispensable au bon usage du médicament
Children differ greatly from adults not only in terms of size but also in physiological terms. Indeed, developmental changes occur during growth due to maturation. These processes occur in a nonlinear fashion and can cause pharmacokinetic and pharmacodynamic differences. Thus, contrary to common practice, it is not appropriate to scale pediatric doses directly and linearly from adult doses. The study of pharmacokinetics in children is therefore essential for determining pediatric dosages. The most commonly used methodology is population analysis through non-linear mixed effects models. This method allows the analysis of sparse and unbalanced data; in return, the lack of individual data has to be compensated by the inclusion of more individuals. This can be a problem when the indication of the treatment is a rare disease, as are the epileptic syndromes of childhood. In this case, extrapolation of adult pharmacokinetic models to the pediatric population may be advantageous. The objective of this thesis was to evaluate the dosage recommendations of antiepileptic drugs when pediatric pharmacokinetic data are sufficient to be modeled and, when they are not, to extrapolate adult information appropriately. Firstly, a parent-metabolite model of oxcarbazepine and its monohydroxy derivative (MHD) was developed in epileptic children aged 2 to 12 years. This model showed that younger children require higher doses, as do patients co-treated with enzyme inducers. A model was also developed for epileptic children aged 1 to 18 years treated with a sustained-release microsphere formulation of valproic acid. This model took into account the flip-flop kinetics associated with the formulation and the non-linear relationship between clearance and dose caused by saturable protein binding. Again, the need for higher doses in younger children was highlighted. Then, an adult model of vigabatrin was extrapolated to children to determine which doses achieve exposures similar to those of adults for resistant focal-onset seizures. From the results obtained, which are in agreement with the conclusions of clinical trials, we were able to propose an ideal maintenance dose for this indication. Finally, we studied the relevance of extrapolation by theoretical allometry in a context of non-linearity with the example of stiripentol. We concluded that this method seems to provide good predictions from the age of 8, unlike drugs with linear elimination, for which it appears adequate from the age of 5. In conclusion, we were able to test and compare different approaches to help determine dosing recommendations in children. The study of pediatric pharmacokinetics in dedicated trials remains essential for the proper use of drugs.
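As a minimal illustration of theoretical allometry, the Python sketch below scales an adult clearance to a child's body weight with the usual 0.75 exponent and derives the dose giving the same steady-state exposure (AUC = dose / CL); all numerical values are hypothetical and do not correspond to the drugs studied in the thesis.

def allometric_clearance(cl_adult, weight_kg, adult_weight_kg=70.0, exponent=0.75):
    """Theoretical allometry: scale clearance by (body weight / adult weight)**0.75."""
    return cl_adult * (weight_kg / adult_weight_kg) ** exponent

def auc_matched_dose(dose_adult, cl_adult, weight_kg):
    """Dose giving the same steady-state exposure (AUC = dose / CL) as in adults."""
    cl_child = allometric_clearance(cl_adult, weight_kg)
    return dose_adult * cl_child / cl_adult

# Hypothetical adult reference: 1000 mg/day with a clearance of 5 L/h.
for w in (10, 20, 40):
    print(f"{w} kg child: CL = {allometric_clearance(5.0, w):.2f} L/h, "
          f"dose = {auc_matched_dose(1000, 5.0, w):.0f} mg/day")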
Styles APA, Harvard, Vancouver, ISO, etc.
50

Aabid, Sami El. « Méthode basée modèle pour le diagnostic de l'état de santé d'une pile à combustible PEMFC en vue de sa maintenance ». Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0011.

Texte intégral
Résumé :
Les piles à combustible se positionnent aujourd’hui comme une alternative technologique séduisante face aux solutions classiquement utilisées pour le stockage d’énergie. De par leur rendement de conversion en énergie électrique et leur haute densité énergétique, les grands acteurs du secteur aéronautique en voient une solution intéressante pour réduire l’impact environnemental des avions. C’est dans cette optique que s’inscrit la présente thèse, visant à contribuer au développement de méthodologies destinées au suivi de l’état de santé d’une pile à combustible à membrane échangeuse de protons (Proton Exchange Membrane Fuel Cell PEMFC). Il a été montré, dans un premier temps, que l’ensemble des constituants d’une pile était soumis à des contraintes pouvant engendrer des défaillances irréversibles ou des pertes de performances. Cette synthèse a permis de mettre en exergue la nécessité de disposer de méthodes de diagnostic permettant de suivre l’état de santé de la pile. Dans ce sens, des outils basés sur le principe de l’identification paramétrique ont été mis en avant. Il s’agit dans un premier temps de pouvoir, à partir de caractérisations expérimentales, être capable de mettre en œuvre une procédure appropriée permettant d’identifier les paramètres d’un modèle représentant au mieux le comportement du composant. Et dans un temps second, de construire par l’intermédiaire des paramètres identifiés des indicateurs (signatures) liés à l’état de santé de la pile. Dans le cadre de cette thèse, nous nous sommes intéressés plus particulièrement à deux types de prise d’informations : la courbe de polarisation (V-I), et la Spectroscopie d’Impédance Electrochimique (SIE). Il a été montré dans cette thèse, que seule l’exploitation conjointe de ces 2 types de mesures permet une caractérisation pertinente de l’état de santé de la pile. Deux modèles «couplés» ont été ainsi développés : un modèle quasi-statique dont les paramètres sont identifiés à partir de la courbe de polarisation et un modèle dynamique identifié à partir des données de la SIE. Il a été mis en avant dans le cadre de cette thèse, la nécessité de développer un modèle dynamique dit «sans a priori» dont la formulation peut varier au cours du temps. Ainsi, si des phénomènes liés à un changement de caractéristique apparaissent, la structure du modèle pourra s’adapter pour en permettre la prise en compte. Le processus global partant de la connaissance a priori jusqu’à l’identification des paramètres des modèles a été développé au cours des chapitres de cette thèse. Outre la bonne reproduction des données expérimentales et la séparation des pertes dans les domaines quasi-statique et dynamique, l’approche permet de percevoir certaines défaillances à travers les paramètres des modèles développés. La prise en compte du couplage statique-dynamique a fait apparaître la notion « d’impédance résiduelle ». En effet, un biais entre la pente locale de la courbe de polarisation et la résistance basse fréquence de la SIE est systématiquement observé. L’impédance résiduelle prise en compte dans la modélisation permet d’absorber ce décalage tout en garantissant une cohérence entre les mesures de la V-I et de la SIE. Une tentative d’explication des phénomènes physico-chimiques liés à cette impédance a également fait partie de l’ensemble des objectifs de cette thèse. 
D’un point de vue expérimental, l’idée du travail est dans un premier temps de générer des variations ciblées et maîtrisées du comportement de la pile et d’observer leurs impacts sur les paramètres identifiés. Pour ce faire, l’idée proposée est de travailler sur une mono-cellule dont la modification des composants est aisée. Le jeu de composants (membranes différentes, différents dosages en platine …) avait permis de mettre en évidence l’impact de chaque modification sur les données expérimentales et ainsi sur le modèle avant de tester la validité de l’approche sur des campagnes de vieillissement de stacks
Nowadays, fuel cells (FCs) are considered an attractive technological solution for energy storage. In addition to their high conversion efficiency to electrical energy and their high energy density, FCs are a potential candidate for reducing the environmental impact of aircraft. The present PhD thesis is located within this context and contributes to the development of methodologies dedicated to monitoring the state of health (SoH) of Proton Exchange Membrane Fuel Cells (PEMFCs). FCs are subject to ageing and to various operating conditions leading to several failures or abnormal operation modes. Hence, there is a need to develop tools dedicated to diagnosis and to the monitoring of fuel cell ageing. One reliable approach to FC SoH monitoring is based on the parametric identification of a model through experimental data. Widely used for FC characterization, the polarization curve (V-I) and Electrochemical Impedance Spectroscopy (EIS), coupled with a model describing the involved phenomena, may provide further information about the FC SoH. Two models were thus developed: a quasi-static model whose parameters are identified from the polarization curve and a dynamic one identified from EIS data. The need to develop a dynamic model “without a priori”, whose formulation may vary over time, is reported in this thesis. The original approach of this thesis is to consider both characterizations jointly throughout the proposed analysis process. This global strategy ensures the separation of the different fuel cell phenomena in the quasi-static and dynamic domains by introducing into each parametrization process (one for the quasi-static model and one for the dynamic model) parameters and/or laws stemming from the other part. The global process, from the a priori knowledge to the identification of the model parameters, is developed throughout the chapters of this thesis. In addition to the good reproduction of experimental data and the separation of the losses in both the static and dynamic domains, the method makes it possible to monitor the FC SoH via the evolution of the model parameters. Taking into account the coupling between the quasi-static and dynamic models revealed the notion of a “residual impedance”. This impedance makes it possible to account for a recurrent experimental observation made by daily users of EIS: there is a not clearly explained difference between the low-frequency resistance from the EIS and the slope of the polarization curve at a given current density, whereas theoretically the two quantities should tend towards the same value. In other words, part of the impedance spectrum is not clearly and easily exploitable to characterize fuel cell performance. This topic has been discussed in the literature in recent years. An attempt to explain the physico-chemical phenomena related to this impedance is also among the objectives of this thesis. From an experimental point of view, before applying this method to ageing monitoring, it was necessary to “calibrate” it given its relative complexity. To this end, experiments with a single cell equipped with different sets of internal components (different membrane thicknesses and different platinum loadings in the Active Layer (AL)) were carried out and analyzed by applying the proposed method. The method was then evaluated in the framework of three ageing campaigns carried out with three 1 kW PEM stacks.
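The sketch below is not the quasi-static model of the thesis, but a generic polarization-curve parameterization (open-circuit voltage, Tafel-type activation loss, ohmic loss) fitted by least squares in Python; it illustrates how parameters identified from a V-I characterization can serve as state-of-health indicators when tracked across repeated characterizations. All numerical values are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def polarization(i, e0, b, r):
    """Generic quasi-static cell voltage: activation (Tafel-type) plus ohmic losses."""
    return e0 - b * np.log(i) - r * i

# Hypothetical V-I characterization (current density in A/cm^2, voltage in V).
i_meas = np.linspace(0.05, 1.0, 20)
true_params = (0.95, 0.06, 0.15)
rng = np.random.default_rng(6)
v_meas = polarization(i_meas, *true_params) + rng.normal(0, 0.003, i_meas.size)

popt, pcov = curve_fit(polarization, i_meas, v_meas, p0=(1.0, 0.05, 0.1))
e0, b, r = popt
print(f"identified parameters: E0 = {e0:.3f} V, b = {b:.3f} V, R = {r:.3f} ohm.cm^2")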
Styles APA, Harvard, Vancouver, ISO, etc.