Dissertations on the topic "Imputation de Valeurs manquantes"
Consult the top 36 dissertations for research on the topic "Imputation de Valeurs manquantes" (imputation of missing values).
Bernard, Francis. "Méthodes d'analyse des données incomplètes incorporant l'incertitude attribuable aux valeurs manquantes." Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6571.
Etourneau, Lucas. "Contrôle du FDR et imputation de valeurs manquantes pour l'analyse de données de protéomiques par spectrométrie de masse." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALS001.
Proteomics involves characterizing the proteome of a biological sample, that is, the set of proteins it contains, as exhaustively as possible. By identifying and quantifying the protein fragments that are analyzable by mass spectrometry (known as peptides), proteomics provides access to the level of gene expression at a given moment. This is crucial information for improving the understanding of molecular mechanisms at play within living organisms. These experiments produce large amounts of data, often complex to interpret and subject to various biases. They require reliable data processing methods that ensure a certain level of quality control, so as to guarantee the relevance of the resulting biological conclusions. The work of this thesis focuses on improving this data processing, specifically on the following two major points. The first is controlling the false discovery rate (FDR) when identifying either (1) peptides or (2) quantitatively differential biomarkers between a tested biological condition and its negative control. Our contributions focus on establishing links between the empirical methods stemming from proteomic practice and other theoretically supported methods. This notably allows us to provide directions for improving the FDR control methods used for peptide identification. The second point focuses on managing missing values, which are often numerous and complex in nature, making them impossible to ignore. Specifically, we have developed a new imputation algorithm, Pirat, that leverages the specificities of proteomics data. Our algorithm has been tested and compared to other methods on multiple datasets and according to various metrics, and it generally achieves the best performance. Moreover, it is the first algorithm that allows imputation following the trending "multi-omics" paradigm: when relevant to the experiment, it can impute more reliably by relying on transcriptomic information, which quantifies the level of messenger RNA expression present in the sample. Finally, Pirat is implemented in a freely available software package, making it easy to use for the proteomics community.
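The abstract refers to FDR control without stating a procedure. As a point of reference only (the thesis links proteomics-specific empirical methods to theory; this is not its method), here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure:

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Standard BH step-up procedure: returns a boolean mask of discoveries,
    controlling the expected false discovery proportion at level alpha."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
        rejected[order[: k + 1]] = True
    return rejected
```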
Morisot, Adeline. "Méthodes d’analyse de survie, valeurs manquantes et fractions attribuables temps dépendantes : application aux décès par cancer de la prostate." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTT010/document.
The term survival analysis refers to methods used for modeling the time of occurrence of one or more events, taking censoring into account. The event of interest may be the onset or recurrence of a disease, or death. The causes of death may have missing values, a status that may be modeled by imputation methods. In the first section of this thesis we review the methods used to deal with these missing data. Then, we detail the procedures that enable multiple imputation of causes of death. We developed these methods in a subset of the ERSPC (European Randomized Study of Screening for Prostate Cancer), which studied screening and mortality for prostate cancer. We propose a theoretical formulation of Rubin's rules after a complementary log-log transformation to combine estimates of survival, and we provide the related R code. In a second section, we present the survival analysis methods, proposing a unified formulation based on the definitions of crude and net survival, while considering either all-cause or cause-specific death. This involves consideration of censoring, which can then be informative. We consider the so-called traditional methods (Kaplan-Meier, Nelson-Aalen, Cox and parametric), methods for competing risks (considering a multistate model or a latent failure time model), cause-specific methods corrected using IPCW (Inverse Probability of Censoring Weighting), and relative survival methods. The classical methods rely on a non-informative censoring assumption. When we are interested in deaths from all causes, this assumption is often valid. However, for a particular cause of death, the other causes of death act as censoring, and this censoring is generally considered informative. We introduce an approach based on the IPCW method to correct this informative censoring, and we provide an R function to apply this approach directly. All methods presented in this chapter were applied to datasets completed by multiple imputation. Finally, in a last part we sought to determine the percentage of deaths explained by one or more variables using attributable fractions. We present the theoretical formulations of time-independent and time-dependent attributable fractions, expressed in terms of survival. We illustrate these concepts using all the survival methods presented in section 2, and compare the results. Estimates obtained with the different methods were very similar.
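The pooling step mentioned here, Rubin's rules applied after a complementary log-log transformation of survival estimates, can be sketched as follows (the function name and toy inputs are illustrative, not the thesis's R code; the within-imputation variances are assumed to be supplied on the transformed scale):

```python
import numpy as np

def pool_survival_cloglog(surv_estimates, variances):
    """Pool M multiply-imputed survival estimates S_m(t) via Rubin's rules
    on the complementary log-log scale g(S) = log(-log(S)).
    `variances` holds the within-imputation variances of g(S_m)."""
    S = np.asarray(surv_estimates)        # shape (M, n_timepoints)
    g = np.log(-np.log(S))                # cloglog transform
    M = S.shape[0]
    qbar = g.mean(axis=0)                 # pooled point estimate, g-scale
    W = np.mean(variances, axis=0)        # within-imputation variance
    B = g.var(axis=0, ddof=1)             # between-imputation variance
    T = W + (1 + 1 / M) * B               # Rubin's total variance
    return np.exp(-np.exp(qbar)), qbar, T  # back-transformed survival
```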
Chion, Marie. "Développement de nouvelles méthodologies statistiques pour l'analyse de données de protéomique quantitative." Thesis, Strasbourg, 2021. http://www.theses.fr/2021STRAD025.
Proteomic analysis consists of studying all the proteins expressed by a given biological system, at a given time and under given conditions. Recent technological advances in mass spectrometry and liquid chromatography make it possible to envisage large-scale and high-throughput proteomic studies. This thesis work focuses on developing statistical methodologies for the analysis of quantitative proteomics data and presents three main contributions. The first part proposes to use monotone spline regression models to estimate the amounts of all peptides detected in a sample, using internal standards labelled for a subset of targeted peptides. The second part presents a strategy to account for the uncertainty induced by the multiple imputation process in the differential analysis, also implemented in the mi4p R package. Finally, the third part proposes a Bayesian framework for differential analysis, which notably makes it possible to take into account the correlations between peptide intensities.
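To give the flavour of the first contribution's shape-constrained calibration fit, here is a hedged stand-in using isotonic regression (not the thesis's monotone spline model; the calibration data below are hypothetical):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical calibration data: spiked standard amounts vs measured intensity.
intensity = np.array([1.2, 4.8, 9.5, 21.0, 48.0, 101.0])
amount = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])

# Monotone (non-decreasing) fit of amount as a function of intensity,
# usable to map measured intensities back to estimated amounts.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(intensity, amount)
estimated_amounts = iso.predict([3.0, 30.0])
```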
Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data will require that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements for drawing valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH. But other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses.
In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
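A pattern-mixture sensitivity analysis indexed by a sensitivity parameter can be illustrated by the common delta-adjustment device: impute under MAR, then shift the imputed values by delta to represent departures toward MNAR. This is a generic sketch of the idea, not the dissertation's exact models (the toy mean imputation stands in for a proper MAR imputation model):

```python
import numpy as np
rng = np.random.default_rng(0)

def delta_adjusted_estimates(y, missing, impute_under_mar, deltas):
    """For each sensitivity parameter delta, impute the missing outcomes
    under MAR and shift them by delta before re-estimating the mean.
    delta = 0 recovers the MAR analysis; |delta| > 0 probes MNAR."""
    results = {}
    for delta in deltas:
        y_imp = y.copy()
        y_imp[missing] = impute_under_mar(y, missing) + delta
        results[delta] = y_imp.mean()
    return results

# Toy data: 20% of the outcomes are treated as missing.
y = rng.normal(10, 2, size=200)
missing = rng.random(200) < 0.2
est = delta_adjusted_estimates(y, missing,
                               lambda y, m: y[~m].mean(),
                               deltas=[-2, -1, 0, 1, 2])
```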
Fiot, Céline. "Extraction de séquences fréquentes : des données numériques aux valeurs manquantes." Phd thesis, Montpellier 2, 2007. http://www.theses.fr/2007MON20056.
Fiot, Céline. "Extraction de séquences fréquentes : des données numériques aux valeurs manquantes." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00179506.
Audigier, Vincent. "Imputation multiple par analyse factorielle : Une nouvelle méthodologie pour traiter les données manquantes." Thesis, Rennes, Agrocampus Ouest, 2015. http://www.theses.fr/2015NSARG015/document.
This thesis proposes new multiple imputation methods that are based on principal component methods, which were initially used for exploratory analysis and visualisation of continuous, categorical and mixed multidimensional data. The study of principal component methods for imputation, never previously attempted, offers the possibility to deal with many types and sizes of data, because the number of estimated parameters is limited by the dimensionality reduction. First, we describe a single imputation method based on factor analysis of mixed data. We study its properties and focus on its ability to handle complex relationships between variables, as well as infrequent categories. Its high prediction quality is highlighted with respect to the state-of-the-art single imputation method based on random forests. Next, a multiple imputation method for continuous data using principal component analysis (PCA) is presented, based on a Bayesian treatment of the PCA model. Unlike standard methods based on Gaussian models, it can still be used when the number of variables is larger than the number of individuals and when correlations between variables are strong. Finally, a multiple imputation method for categorical data using multiple correspondence analysis (MCA) is proposed. The variability of the predictions of missing values is introduced via a non-parametric bootstrap approach, which helps to tackle the combinatorial issues arising from the large number of categories and variables. We show that multiple imputation using MCA outperforms the best current methods.
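The single-imputation idea behind PCA-based methods can be sketched as an EM-style loop: fill in, fit a low-rank model, refill from the fit. This is a simplified, unregularized sketch; the thesis's methods add a Bayesian treatment and regularization on top of this principle:

```python
import numpy as np

def impute_pca(X, rank=2, n_iter=100, tol=1e-6):
    """EM-style iterative PCA imputation: alternately fit a rank-k SVD to
    the completed matrix and replace the missing cells by fitted values."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    X[miss] = np.nanmean(X, axis=0)[np.where(miss)[1]]  # start: column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        fitted = mu + (U[:, :rank] * s[:rank]) @ Vt[:rank]
        delta = np.max(np.abs(X[miss] - fitted[miss])) if miss.any() else 0.0
        X[miss] = fitted[miss]
        if delta < tol:
            break
    return X
```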
Ragel, Arnaud. "Exploration des bases incomplètes : application à l'aide au prétraitement des valeurs manquantes." Caen, 1999. http://www.theses.fr/1999CAEN2067.
Ben, Othman Leila. "Conception et validation d'une méthode de complétion des valeurs manquantes fondée sur leurs modèles d'apparition." Phd thesis, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-01017941.
Ben, Othman Amroussi Leila. "Conception et validation d’une méthode de complétion des valeurs manquantes fondée sur leurs modèles d’apparition." Caen, 2011. http://www.theses.fr/2011CAEN2067.
Knowledge discovery from incomplete databases is a thriving research area. In this thesis, the main focus is put on the proposal of a missing-values completion method. We approach this issue by defining the appearance models of the missing values. We propose a new typology based on the given data and characterize these missing values in a non-redundant manner by means of the basis of proper implications. An algorithm computing this basis of rules, relying heavily on results from hypergraph theory, is also introduced in this thesis. We then exploit the information provided by the characterization stage in order to propose a new contextual completion method, which completes the missing values according to their type as well as to their appearance context. The non-random missing values are completed with special values that intrinsically contain the explanation defined by the characterization schemes. Finally, we investigate techniques for evaluating missing-values completion methods and introduce a new technique based on the stability of a clustering when applied to reference data and to completed data.
Lorga, Da Silva Ana. "Tratamento de dados omissos e métodos de imputação em classificação." Doctoral thesis, Instituto Superior de Economia e Gestão, 2005. http://hdl.handle.net/10400.5/3849.
The aim of this work is to study the effect of missing data on the classification of variables, mainly in ascending hierarchical classification and also in non-hierarchical classification (partitioning), according to the following factors: percentage of missing data, imputation methods, similarity coefficients and classification criteria. The missing data are assumed to be MAR ("missing at random"), that is, missingness does not depend on the missing values themselves or on the variables with missing data, but on observed values of other variables in the data matrix; the missing data satisfy a mostly monotone pattern. Techniques without imputation, listwise and pairwise deletion, were used, as well as simple imputation methods: the EM algorithm, the OLS regression model, the NIPALS algorithm and a PLS regression method. As multiple imputation methods, a method based on the OLS regression model combined with Bayesian techniques was adopted, and a new multiple imputation method based on PLS regression methods was proposed. To combine the classification structures resulting from multiple imputation, a combination by the mean of the similarity matrices and two consensus methods were proposed. The hierarchical classification methods used were both classical and probabilistic, the latter based on the VL family of methods (link likelihood): single, complete and average linkage, AVL and AVB. For the similarity matrices, the basic affinity coefficient (for continuous data), which corresponds to the Ochiai index for binary data, the Pearson correlation coefficient, and the probabilistic approximation of the affinity coefficient centred and reduced by the W-method were used. The study was based mainly on simulated data, complemented by applications to real continuous and binary data. Spearman's coefficient between the associated ultrametrics was used to compare the hierarchical structures obtained on complete matrices with those obtained from matrices where data were deleted and then imputed; the Rand index was used to compare the non-hierarchical structures. Finally, a non-hierarchical method that adapts to missing data was also proposed. In a real case, Ward's method was used under the same conditions as in the simulations, but also without a monotone pattern, with a Markov chain Monte Carlo method used for multiple imputation.
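The two deletion techniques this thesis compares are one-liners in modern software; a small pandas illustration of the difference (purely illustrative data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, np.nan, 4.0, 5.0],
                   "y": [2.1, np.nan, 3.0, 3.9, 5.2],
                   "z": [0.5, 1.1, 1.8, np.nan, 3.0]})

# Listwise deletion: only rows complete on every variable are kept.
corr_listwise = df.dropna().corr()

# Pairwise deletion: each correlation uses all rows complete on that pair,
# so different cells of the matrix may rest on different subsamples.
corr_pairwise = df.corr()  # pandas computes correlations pairwise by default
```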
Rioult, François. "Extraction de connaissances dans les bases de données comportant des valeurs manquantes ou un grand nombre d'attributs." Phd thesis, Université de Caen, 2005. http://tel.archives-ouvertes.fr/tel-00252089.
Rioult, François. "Extraction de connaissances dans les bases de données comportant des valeurs manquantes ou un grand nombre d'attributs." Caen, 2005. http://www.theses.fr/2005CAEN2035.
Héraud, Bousquet Vanina. "Traitement des données manquantes en épidémiologie : application de l’imputation multiple à des données de surveillance et d’enquêtes." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA11T017/document.
The management of missing values is a common and widespread problem in epidemiology. The most common technique used restricts the data analysis to subjects with complete information on the variables of interest, which can substantially reduce statistical power and precision and may also result in biased estimates. This thesis investigates the application of multiple imputation methods to manage missing values in epidemiological studies and surveillance systems for infectious diseases. The study designs to which multiple imputation was applied were diverse: a risk analysis of HIV transmission through blood transfusion, a case-control study on risk factors for Campylobacter infection, and a capture-recapture study to estimate the number of new HIV diagnoses among children. We then performed multiple imputation analyses on data from a surveillance system for chronic hepatitis C (HCV) to assess risk factors for severe liver disease among HCV-infected patients who reported drug use. Within this study on HCV, we proposed guidelines for applying a sensitivity analysis in order to test the hypotheses underlying multiple imputation. Finally, we describe how we elaborated and applied an ongoing multiple imputation process for the French national HIV surveillance database, and how we evaluated and attempted to validate the multiple imputation procedures. Based on these practical applications, we worked out a strategy for handling missing data in surveillance databases, including the thorough examination of the incomplete database, the building of the imputation model, and the procedure to validate imputation models and to examine the hypotheses underlying multiple imputation.
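The chained-equations multiple imputation used across these applications can be sketched with scikit-learn's IterativeImputer, a generic MICE-style tool (not the software used in the thesis): draw several completed datasets with different seeds, analyze each, then pool with Rubin's rules:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[rng.random(X.shape) < 0.15] = np.nan  # inject 15% missingness

M = 10  # number of imputations
means, variances = [], []
for m in range(M):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    Xm = imp.fit_transform(X)
    means.append(Xm[:, 0].mean())                     # analysis of interest
    variances.append(Xm[:, 0].var(ddof=1) / len(Xm))  # its within-variance

W = np.mean(variances)        # within-imputation variance
B = np.var(means, ddof=1)     # between-imputation variance
T = W + (1 + 1 / M) * B       # Rubin's total variance
pooled_mean = np.mean(means)
```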
Croiseau, Pascal. "Influence et traitement des données manquantes dans les études d'association sur trios : application à des données sur la sclérose en plaques." Paris 11, 2008. http://www.theses.fr/2008PA112021.
To test for association between a set of markers and a disease, or to estimate disease risks, different methods have been developed. Several of these methods require that all individuals be genotyped for all markers. When this is not the case, individuals with missing data are discarded. We have shown that this solution, which leads to a strong decrease in sample size, can involve a loss of power to detect an association and can also lead to false conclusions. In this work, we adapted to genetic data a method of multiple imputation that consists in replacing missing data by plausible values. Results obtained from simulated data show that this approach is promising for the search for disease susceptibility genes. It is simple to use and very flexible in terms of the genetic models that can be tested. We applied our method to a sample of 450 multiple sclerosis family trios (an affected child and both parents). Recent work has detected an association between a polymorphism of the CTLA4 gene and multiple sclerosis. However, CTLA4 belongs to a cluster of three genes, CD28, CTLA4 and ICOS, all involved in the immune response. Consequently, this association could be due to another marker in linkage disequilibrium with CTLA4. Our method allows us to detect the association with the CTLA4 polymorphism and also provides us with a new candidate to explore: a CD28 polymorphism which could be involved in multiple sclerosis in interaction with the CTLA4 polymorphism.
Rousseau, Michel. "L'impact des méthodes de traitement des valeurs manquantes sur les qualités psychométriques d'échelles de mesure de type Likert." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23426/23426.pdf.
The presence of missing answers for some items of a measurement scale is a phenomenon that any researcher is likely to encounter. Although the biases that inadequate treatment of this non-response can cause have been known for nearly 30 years (Rubin, 1976), knowledge of the effectiveness of the various missing-value treatments is still very limited. The present study aims to advance knowledge and practices concerning the treatment of missing values in the context of Likert-type scales. The fundamental problem that missing values pose is that it is impossible not to take them into account when applying a statistical analysis method, the majority of these methods having been developed to treat complete data matrices. The measurement models used in the analysis of Likert-type scale data do not escape this reality. Two measurement models are studied in depth in this project: the classical test theory model and Samejima's graded response model. The main objective of the research is to evaluate the effectiveness of five missing-value treatments, including multiple imputation. Moreover, the study also aimed to evaluate the impact of the number of subjects, the number of items and the proportion of missing values on the effectiveness of the methods. The results of this research suggest that multiple imputation is more effective than the other methods, although, depending on the measurement model considered, other simpler methods also appear effective. In conclusion, it is important to note that because no treatment method can completely eliminate the bias caused by the presence of missing values, it is preferable to prevent rather than to cure.
Rousseau, Michel. "L'impact des méthodes de traitement des valeurs manquantes sur les qualités psychométriques d'échelles de mesure de type Likert." Doctoral thesis, Université Laval, 2006. http://hdl.handle.net/20.500.11794/18669.
De Moliner, Anne. "Estimation robuste de courbes de consommation électrique moyennes par sondage pour de petits domaines en présence de valeurs manquantes." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK021/document.
In this thesis, we address the problem of robust estimation of mean or total electricity consumption curves by sampling in a finite population, for the entire population and for small areas. We are also interested in estimating mean curves by sampling in the presence of partially missing trajectories. Indeed, many studies carried out in the French electricity company EDF, for marketing or power grid management purposes, are based on the analysis of mean or total electricity consumption curves at a fine time scale, for different groups of clients sharing some common characteristics. Because of privacy issues and financial costs, it is not possible to measure the electricity consumption curve of each customer, so these mean curves are estimated using samples. In this thesis, we extend the work of Lardin (2012) on mean curve estimation by sampling, focusing on specific aspects of this problem such as robustness to influential units, small area estimation and estimation in the presence of partially or totally unobserved curves. In order to build robust estimators of mean curves, we adapt the unified approach to robust estimation in finite populations proposed by Beaumont et al. (2013) to the context of functional data. To that purpose we propose three approaches: application of the usual method for real variables on discretised curves, projection onto functional spherical principal components or onto a wavelet basis, and functional truncation of conditional biases based on the notion of depth. These methods are tested and compared on real datasets, and Mean Squared Error estimators are also proposed. Secondly, we address the problem of small area estimation for functional means or totals. We introduce three methods: a unit-level linear mixed model applied to the scores of functional principal component analysis or to wavelet coefficients, functional regression, and aggregation of individual curve predictions by functional regression trees or functional random forests. Robust versions of these estimators are then proposed following the approach to robust estimation based on conditional bias presented above. Finally, we suggest four estimators of mean curves by sampling in the presence of partially or totally unobserved trajectories. The first is a reweighting estimator where the weights are determined using temporal non-parametric kernel smoothing adapted to the context of finite populations and missing data; the others rely on imputation of missing data. Missing parts of the curves are determined either by using the smoothing estimator presented above, by nearest-neighbour imputation adapted to functional data, or by a variant of linear interpolation which takes into account the mean trajectory of the entire sample. Variance approximations are proposed for each method, and all the estimators are compared on real datasets for various missing-data scenarios.
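The last imputation idea, a variant of linear interpolation exploiting the sample mean trajectory, can be caricatured in a few lines (an illustrative sketch of the principle, not the thesis's estimator; it assumes the curve is observed just before and just after the gap):

```python
import numpy as np

def fill_gap_with_mean_curve(curve, mean_curve, start, end):
    """Fill curve[start:end] (missing) with the sample mean trajectory,
    linearly re-anchored so the filled segment joins the observed
    values at both edges of the gap. Assumes start >= 1, end < len(curve)."""
    seg = mean_curve[start:end].copy()
    left = curve[start - 1] - mean_curve[start - 1]   # offset at left edge
    right = curve[end] - mean_curve[end]              # offset at right edge
    weights = np.linspace(0.0, 1.0, end - start)      # blend left -> right
    filled = curve.copy()
    filled[start:end] = seg + (1 - weights) * left + weights * right
    return filled
```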
Mohd, Salleh Mohd Najib. "Construction d'arbres de décision avec valeurs incomplètes pour la sélection de graines de palmier à huile." La Rochelle, 2008. http://www.theses.fr/2008LAROS240.
Missing values in incomplete data affect the accuracy of classification when a decision tree is used to classify unseen cases. There are cases where plausible values need to be supplied in a way that is principled and minimally intrusive. In order to handle attributes with missing values, we generalize decision-tree algorithms to provide simpler and more understandable models that optimally fulfill the requirements and constraints of human experts. Our objective is to partition the data by taking full advantage of the information available in the presence of missing values, while using supporting global information to achieve better performance. The contributions of this study are newly developed algorithms and analyses for planting material classification. We report empirical results that may provide high returns to planting material breeders in the oil palm industry through effective policy design and decision making.
Héraud, Bousquet Vanina. "Traitement des données manquantes en épidémiologie : Application de l'imputation multiple à des données de surveillance et d'enquêtes." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00713926.
Rousseeuw, Kévin. "Modélisation de signaux temporels hautes fréquences multicapteurs à valeurs manquantes : Application à la prédiction des efflorescences phytoplanctoniques dans les rivières et les écosystèmes marins côtiers." Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0374/document.
Because of the growing interest in environmental issues and the need to identify direct and indirect effects of anthropogenic activities on ecosystems, environmental monitoring programs have recourse more and more frequently to high-resolution, autonomous and multi-sensor instrumented stations. These systems are deployed in harsh environments, and measurements must be interrupted for calibration and servicing, or simply because of sensor failure. Consequently, data can be noisy, missing or out of range, and require pre-processing or filtering steps to complete and validate the raw data before any further investigation. In this context, the objective of this work is to design an automatic numerical system able to manage such amounts of data in order to further knowledge of water quality, and more precisely of phytoplankton determinism and dynamics. The main phase is the methodological development of phytoplankton bloom forecasting models, giving end-users the opportunity to apply well-adapted protocols. We propose to use a hybrid Hidden Markov Model to detect and forecast environmental states (identification of the main phytoplankton bloom stages and associated hydrological conditions). The added value of our approach is to hybridize the model with a spectral clustering algorithm, so that all HMM parameters (states, characterisation and dynamics of these states) are built by unsupervised learning. This approach was applied to three databases: the first from the marine instrumented station MAREL Carnot (Ifremer) (2005-2009), the second from a Ferry Box system deployed in the eastern English Channel in 2012, and the third from a freshwater fixed station in the river Deûle in 2009 (Artois Picardie Water Agency). This work falls within the scope of a collaboration between IFREMER, LISIC/ULCO and the Artois Picardie Water Agency to develop optimised systems for studying the effects of anthropogenic activities on the functioning of aquatic systems, in a regional context of massive blooms of the harmful alga Phaeocystis globosa.
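The hybrid idea, learning states by spectral clustering and then estimating their dynamics, can be caricatured in a few lines (a crude sketch of the principle on fake data, not the thesis's model, which learns all HMM parameters jointly):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
obs = np.concatenate([rng.normal(0, 1, (100, 3)),
                      rng.normal(5, 1, (100, 3))])  # fake two-regime series

# 1. Unsupervised state identification by spectral clustering.
states = SpectralClustering(n_clusters=2, random_state=0).fit_predict(obs)

# 2. Empirical transition matrix of the resulting state sequence,
#    i.e. the state dynamics estimated from the clustered labels.
K = 2
trans = np.zeros((K, K))
for a, b in zip(states[:-1], states[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)
```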
Geronimi, Julia. "Contribution à la sélection de variables en présence de données longitudinales : application à des biomarqueurs issus d'imagerie médicale." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1114/document.
Clinical studies enable us to measure many longitudinal variables. When the goal is to find a link between a response and covariates, one can use regularisation methods such as the LASSO, which has been extended to Generalized Estimating Equations (GEE). These allow us to select a subgroup of variables of interest while taking intra-patient correlations into account. Databases often have unfilled fields and measurement problems resulting in inevitable missing data. The objective of this thesis is to integrate missing data into variable selection in the presence of longitudinal data. We use multiple imputation and introduce a new imputation function for the specific case of variables under a detection limit. We provide a new variable selection method for correlated data that integrates missing data: the Multiple Imputation Penalized Generalized Estimating Equations (MI-PGEE). Our operator applies the group-LASSO penalty to the group of estimated regression coefficients of the same variable across multiply-imputed datasets. Our method provides a consistent selection across multiply-imputed datasets, where the optimal shrinkage parameter is chosen by minimizing a BIC-like criterion. We then present an application on knee osteoarthritis aiming to select the subset of biomarkers that best explain the differences in joint space width over time.
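The group-LASSO shrinkage that drives this joint selection is block soft-thresholding applied, for each covariate, to its coefficients across the M imputed datasets. A sketch of that penalty step only (not the full GEE fitting; the toy coefficients are hypothetical):

```python
import numpy as np

def group_soft_threshold(beta_group, lam):
    """Group-LASSO proximal step: shrink the coefficient vector of one
    variable across the M imputed datasets; the whole group is zeroed
    (variable dropped in every imputation) when its norm is below lam."""
    norm = np.linalg.norm(beta_group)
    if norm <= lam:
        return np.zeros_like(beta_group)
    return (1 - lam / norm) * beta_group

# Same variable estimated on M = 5 imputed datasets:
beta_x1 = np.array([0.8, 0.7, 0.9, 0.75, 0.85])        # strong, stable signal
beta_x2 = np.array([0.05, -0.02, 0.01, 0.03, -0.04])   # weak, inconsistent
print(group_soft_threshold(beta_x1, lam=0.5))  # kept, shrunk
print(group_soft_threshold(beta_x2, lam=0.5))  # zeroed in all imputations
```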
Hawarah, Lamis. "Une approche probabiliste pour le classement d'objets incomplètement connus dans un arbre de décision." Phd thesis, Grenoble 1, 2008. http://www.theses.fr/2008GRE10164.
We describe in this thesis an approach to fill missing values in decision trees during the classification phase. This approach is derived from the ordered attribute trees (OAT) method, proposed by Lobo and Numao in 2000, which builds a decision tree for each attribute and uses these trees to fill the missing attribute values; it is based on the mutual information between the attributes and the class. Our approach extends this method by taking the dependence between attributes into account when constructing the attribute trees, and provides a probability distribution as a result when classifying an incomplete object (instead of the most probable class). We present our approach and test it on real databases. We also compare our results with those given by the C4.5 method and OAT. We further propose a k-nearest-neighbours algorithm which calculates, for each object in the test data, its frequency in the learning data. We compare these frequencies with the classification results given by our approach, C4.5 and OAT. Finally, we calculate the complexity of constructing the attribute trees and the complexity of classifying a new instance with missing values using our classification algorithm, C4.5 and OAT.
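For context, the C4.5-style mechanism against which such methods are compared passes a case with a missing test attribute down every branch, weighted by branch frequencies, and sums the class distributions. A toy sketch of that mechanism (the tree and frequencies below are hypothetical; the OAT-based methods refine this with attribute trees):

```python
def classify(node, case, weight=1.0):
    """Return a class -> probability dict; a case missing the tested
    attribute is split across all branches, weighted by the training
    frequency of each branch."""
    if "leaf" in node:                        # leaf: weighted class mass
        return {node["leaf"]: weight}
    value = case.get(node["attribute"])       # None encodes a missing value
    if value is not None:
        branches = [(node["branches"][value], 1.0)]
    else:                                     # missing: fan out over branches
        branches = list(zip(node["branches"].values(), node["fractions"]))
    dist = {}
    for child, frac in branches:
        for cls, p in classify(child, case, weight * frac).items():
            dist[cls] = dist.get(cls, 0.0) + p
    return dist

# Tiny hypothetical tree: "wind" was weak/strong for 70%/30% of training cases.
tree = {"attribute": "wind",
        "fractions": [0.7, 0.3],
        "branches": {"weak": {"leaf": "play"}, "strong": {"leaf": "stay"}}}
print(classify(tree, {"wind": None}))  # {'play': 0.7, 'stay': 0.3}
```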
Hawarah, Lamis. "Une approche probabiliste pour le classement d'objets incomplètement connus dans un arbre de décision." Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00335313.
Marti Soler, Helena. "Modélisation des données d'enquêtes cas-cohorte par imputation multiple : Application en épidémiologie cardio-vasculaire." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00779739.
Geronimi, Julia. "Contribution à la sélection de variables en présence de données longitudinales : application à des biomarqueurs issus d'imagerie médicale." Electronic Thesis or Diss., Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1114.
Phan, Thi-Thu-Hong. "Elastic matching for classification and modelisation of incomplete time series." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0483/document.
Missing data are a prevalent problem in many domains of pattern recognition and signal processing. Most existing techniques in the literature suffer from one major drawback: their inability to process incomplete datasets. Missing data produce a loss of information and thus yield inaccurate data interpretation, biased results or unreliable analyses, especially for large missing sub-sequence(s). This thesis therefore focuses on dealing with large consecutive missing values in univariate and low- or un-correlated multivariate time series. We begin by investigating an imputation method to overcome these issues in univariate time series. This approach is based on the combination of a shape-feature extraction algorithm and the Dynamic Time Warping (DTW) method. A new R package, namely DTWBI, is then developed. In the following work, the DTWBI approach is extended to complete large successive missing data in low- or un-correlated multivariate time series (called DTWUMI), and a DTWUMI R package is also established. The key idea of these two methods is to use elastic matching to retrieve similar values in the series before and/or after the missing values. This preserves as much as possible the dynamics and shape of the data, while the shape-feature extraction algorithm reduces the computing time. Next, we introduce a new method for filling large successive missing values in low- or un-correlated multivariate time series, namely FSMUMI, which makes it possible to manage a high level of uncertainty; to this end, we propose novel fuzzy grades of basic similarity measures and fuzzy logic rules. Finally, we employ DTWBI to (i) complete the MAREL Carnot dataset and then detect rare/extreme events in this database, and (ii) forecast various meteorological univariate time series collected in Vietnam.
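The elastic matching at the heart of DTWBI is Dynamic Time Warping. A minimal dynamic-programming implementation of the classic algorithm (not the package code) that could score the similarity between the query window preceding a gap and candidate windows elsewhere in the series:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) Dynamic Time Warping distance between
    two univariate sequences, allowing elastic (non-linear) alignment."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

query = np.sin(np.linspace(0, 3, 30))          # window preceding a gap
candidate = np.sin(np.linspace(0.2, 3.4, 34))  # a similar window elsewhere
print(dtw_distance(query, candidate))
```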
Mehanna, Souheir. "Data quality issues in mobile crowdsensing environments." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG053.
Mobile crowdsensing has emerged as a powerful paradigm for harnessing the collective sensing capabilities of mobile devices to gather diverse data in real-world settings. However, ensuring the quality of the data collected in mobile crowdsensing (MCS) environments remains a challenge, because low-cost nomadic sensors can be prone to malfunctions, faults and failures. The quality of the collected data can significantly impact the results of subsequent analyses, so monitoring the quality of sensor data is crucial for effective analytics. In this thesis, we address some of the issues related to data quality in mobile crowdsensing environments. First, we explore issues related to data completeness. The mobile crowdsensing context has specific characteristics that are not all captured by existing factors and metrics, so we propose a set of quality factors of data completeness suitable for mobile crowdsensing environments, together with a set of metrics to evaluate each of these factors. In order to improve data completeness, we tackle the problem of generating values for the missing measurements. Existing data imputation techniques generate such values by relying on the available measurements without considering their disparate quality levels; we propose a quality-aware data imputation approach that extends existing techniques by taking the quality of the measurements into account. In the second part of our work, we focus on anomaly detection, another major problem faced by sensor data. Existing anomaly detection approaches use the available measurements to detect anomalies and are oblivious to their quality. In order to improve the detection of anomalies, we propose an approach relying on clustering algorithms that detects pattern anomalies while integrating sensor quality into the algorithm. Finally, we study how data quality could be taken into account when analyzing sensor data, and propose contributions that are a first step towards quality-aware sensor data analytics: quality-aware aggregation operators, and an approach that evaluates the quality of a given aggregate based on the data used in its computation.
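A quality-aware aggregation operator in the spirit described here could be a simple quality-weighted mean, where each measurement contributes in proportion to a [0, 1] quality score (an illustrative operator, not the thesis's exact definition):

```python
import numpy as np

def quality_weighted_mean(values, quality):
    """Aggregate sensor measurements, down-weighting low-quality ones."""
    values, quality = np.asarray(values), np.asarray(quality)
    return np.sum(quality * values) / np.sum(quality)

readings = [21.0, 20.5, 35.0, 21.2]   # one suspicious sensor reading
scores = [0.9, 0.8, 0.1, 0.95]        # per-measurement quality scores
print(quality_weighted_mean(readings, scores))  # barely pulled by the outlier
```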
Delavallade, Thomas. "Evaluation des risques de crise, appliquée à la détection des conflits armés intra-étatiques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00230663.
Although many monitoring units have been set up, both within companies and within national and international institutions, the quantity of potentially relevant information on a given subject is sometimes such that tools automating all or part of the processing of this information answer a real need, if not a necessity.
With this in mind, this thesis proposes a generic decision-support system for crisis anticipation. Our objective is to provide a synthesis of a given situation, from a structural rather than an event-based point of view, through the identification of potential crises and of the main associated risk factors. The proposed system relies on supervised learning of fuzzy decision rules.
Since the quality of training data is problematic in many applications, this work provides an in-depth study of the preprocessing chain, in particular the treatment of missing values and feature selection. We also put emphasis on model evaluation and selection, so that the detection models can be adapted to the problem at hand as well as to the needs of the end user.
Since the synthesis of the results produced by the system is intended for users in charge of strategic monitoring, tools supporting the reasoning about and understanding of this synthesis are also proposed.
To assess the value of our methodology, we detail its application to a concrete problem: the detection of intra-state armed conflicts.
Revillon, Guillaume. "Uncertainty in radar emitter classification and clustering." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS098/document.
In Electronic Warfare, radar signal identification is a supreme asset for decision making in military tactical situations. By providing information about the presence of threats, classification and clustering of radar signals play a significant role in ensuring that countermeasures against enemies are well chosen and in enabling the detection of unknown radar signals to update databases. Most of the time, Electronic Support Measures systems receive mixtures of signals from different radar emitters in the electromagnetic environment. Hence a radar signal, described by a pulse-to-pulse modulation pattern, is often only partially observed due to missing measurements and measurement errors. The identification process relies on statistical analysis of basic measurable parameters of a radar signal, which constitute both quantitative and qualitative data. Many general and practical approaches based on data fusion and machine learning have been developed; they traditionally proceed through feature extraction, dimensionality reduction and classification or clustering. However, these algorithms cannot handle missing data, and imputation methods are required to generate data before using them. Hence, the main objective of this work is to define a classification/clustering framework that handles both outliers and missing values for any type of data. An approach based on mixture models is developed, since mixture models provide a mathematically grounded, flexible and meaningful framework for the wide variety of classification and clustering requirements. The proposed approach focuses on the introduction of latent variables that make it possible to handle the sensitivity of the model to outliers and to allow a less restrictive modelling of missing data. A Bayesian treatment is adopted for model learning, supervised classification and clustering, and inference is carried out through a variational Bayesian approximation, since the joint posterior distribution of latent variables and parameters is intractable. Some numerical experiments on synthetic and real data show that the proposed method provides more accurate results than standard algorithms.
Faucheux, Lilith. "Learning from incomplete biomedical data : guiding the partition toward prognostic information." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5242.
The topic of this thesis is partition learning in the context of incomplete data. Two methodological developments are presented, with two medical and biomedical applications. The first methodological development concerns unsupervised partition learning in the presence of incomplete data. Two types of incomplete data were considered: missing data and left-censored data (that is, values "lower than some detection threshold"), handled through the multiple imputation (MI) framework. Multivariate imputation by chained equations (MICE) was used to perform tailored imputations for each type of incomplete data. Then, for each imputed dataset, unsupervised learning was performed, with a data-driven choice of the number of clusters. Last, a consensus clustering algorithm was used to pool the partitions, as an alternative to Rubin's rules. The second methodological development concerns semi-supervised partition learning in an incomplete dataset, combining data structure and patient survival. The aim was to identify patient profiles that relate both to differences in the group structure extracted from the data and to differences in the patients' prognosis. The supervised (prognostic value) and unsupervised (group structure) objectives were combined through Pareto multi-objective optimization. Missing data were handled, as above, through MI, with Rubin's rules used to combine the supervised and unsupervised objectives across the imputations, and the optimal partitions pooled using consensus clustering. Two applications are provided: one on the immunological landscape of the breast tumor microenvironment, and another on COVID-19 infection in the context of hematological disease.
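The consensus step used to pool partitions across imputed datasets can be sketched with the usual co-association construction: count how often each pair of subjects is clustered together, then cluster that matrix (a generic sketch; the thesis's pipeline adds MI and the Pareto-front selection on top):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def consensus_partition(partitions, n_clusters):
    """Pool several labelings of the same n subjects: build the co-association
    matrix (fraction of partitions putting i and j together) and cluster it."""
    P = np.asarray(partitions)            # shape (n_partitions, n_subjects)
    coassoc = (P[:, :, None] == P[:, None, :]).mean(axis=0)
    model = AgglomerativeClustering(n_clusters=n_clusters,
                                    metric="precomputed", linkage="average")
    return model.fit_predict(1.0 - coassoc)  # distance = 1 - co-association

labels = [[0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 2, 2], [1, 1, 0, 0, 2, 2]]
print(consensus_partition(labels, n_clusters=3))
```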
Merlin, Paul. "Des techniques neuronales dans l'alternatif." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2009. http://tel.archives-ouvertes.fr/tel-00450649.
Chagra, Djamila. "Sélection de modèle d'imputation à partir de modèles bayésiens hiérarchiques linéaires multivariés." Thèse, 2009. http://hdl.handle.net/1866/3936.
The technique known as multiple imputation seems to be the most suitable for solving the problem of non-response. The literature mentions methods that model the nature and structure of the missing values. One of the most popular is the PAN algorithm of Schafer and Yucel (2002); the imputations it yields are based on a multivariate linear mixed-effects model for the response variable. A Bayesian hierarchical, clustered and more flexible extension of PAN is given by the BHLC model of Murua et al. (2005). The main goal of this work is to study the problem of model selection for multiple imputation in terms of efficiency and accuracy of missing-value predictions. We propose a measure of performance linked to the prediction of missing values: a mean squared error which, in addition to the variance associated with the multiple imputations, includes a measure of bias in the prediction. We show that this measure is more objective than Rubin's commonly used variance measure. Our measure is computed by treating a small additional proportion of the observed values in the data as missing, and assessing the performance of the imputation model through the prediction error on these pseudo missing values. In order to study the problem objectively, we devised several simulations. Data were generated according to different explicit models with particular error structures, and several missing-value prior distributions as well as error-term distributions were then hypothesized. Our study investigates whether the true error structure of the data has an effect on the performance of the different hypothesized choices for the imputation model; we concluded that it does. Moreover, the choice of missing-value prior distribution seems to be the most important factor for accuracy of predictions. In general, the most effective choices for good imputations are a Student-t distribution with different cluster variances for the error term, and a missing-value Normal prior with data-driven mean and variance, or a regularizing missing-value Normal prior with large variance (a ridge-regression-like prior). Finally, we applied these ideas to a real problem dealing with health outcome observations associated with a large number of countries around the world. Keywords: missing values, multiple imputation, Bayesian hierarchical linear model, mixed-effects model.
The software used were S-Plus and R.
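The proposed performance measure, masking a small extra fraction of observed values and scoring the imputations' prediction error on them, can be sketched as a generic skeleton (the thesis applies it to PAN/BHLC imputations; the column-mean imputer below is only a stand-in):

```python
import numpy as np

def pseudo_missing_mse(X, impute, extra_frac=0.05, seed=0):
    """Hide an extra fraction of the observed cells, impute the augmented
    matrix, and return the mean squared error on the hidden (pseudo missing)
    cells - a bias-plus-variance measure of imputation quality."""
    rng = np.random.default_rng(seed)
    observed = ~np.isnan(X)
    mask = observed & (rng.random(X.shape) < extra_frac)  # cells to hide
    X_masked = X.copy()
    X_masked[mask] = np.nan
    X_imputed = impute(X_masked)
    return np.mean((X_imputed[mask] - X[mask]) ** 2)

def column_mean_impute(X):
    """Trivial imputer used only to exercise the evaluation skeleton."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = col_means[idx[1]]
    return X
```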
Paquin, Stéphane. "Comparaison de quatre méthodes pour le traitement des données manquantes au sein d’un modèle multiniveau paramétrique visant l’estimation de l’effet d’une intervention." Thèse, 2010. http://hdl.handle.net/1866/4599.
Missing data are common in empirical research and can lead to significant errors in parameter estimation. This dissertation in the field of methodological sociology addresses the influence of missing data on the estimation of the impact of a prevention program. The first two sections outline the potential biases caused by missing data and present the theoretical background used to describe them. The third section focuses on methods for handling missing data; conventional methods are presented as well as three more recent ones. The fourth section contains a description of the Montreal Longitudinal Experimental Study (MLES) and of the data used. The fifth section presents the analyses performed: the method for analysing the effect of an intervention from longitudinal data, a detailed description of the missing data of the MLES, and a diagnosis of patterns and mechanisms. The sixth section contains the results of estimating the effect of the program under different assumptions about the missing-data mechanism and with four methods: complete case analysis, maximum likelihood, weighting and multiple imputation. They indicate (I) that the assumption made about the type of MAR mechanism seems to affect the estimate of the program's impact and (II) that the estimates obtained using the different estimation methods lead to similar conclusions about the intervention's effect.
Dongmo, Jiongo Valéry. "Inférence robuste à la présence des valeurs aberrantes dans les enquêtes." Thèse, 2015. http://hdl.handle.net/1866/13720.
This thesis focuses on the treatment of representative outliers in two important aspects of surveys: small area estimation and imputation for item non-response. Concerning small area estimation, robust estimators in unit-level models have been studied. Sinha & Rao (2009) proposed estimation procedures designed for small area means, based on robustified maximum likelihood estimates of the parameters of the linear mixed model and robust empirical best linear unbiased predictors of the random effect of the underlying model. Their robust methods for estimating area means are of the plug-in type, and in view of the results of Chambers (1986), the resulting robust estimators may be biased in some situations. Bias-corrected estimators have been proposed by Chambers et al. (2014). In addition, these robust small area estimators were associated with estimation of the Mean Squared Error (MSE). Sinha & Rao (2009) proposed a parametric bootstrap procedure based on the robust estimates of the parameters of the underlying linear mixed model to estimate the MSE. Analytical procedures for the estimation of the MSE have been proposed in Chambers et al. (2014), but their theoretical validity has not been formally established and their empirical performance is not fully satisfactory. Here, we investigate two new approaches for a robust version of the empirical best linear unbiased predictor: the first relies on the work of Chambers (1986), while the second uses the concept of conditional bias as an influence measure to assess the impact of units in the population. These two classes of robust small area estimators also include a correction term for the bias; they are both fully bias-corrected, in the sense that the correction term takes into account the potential impact of the other domains on the small area of interest, unlike the one of Chambers et al. (2014), which focuses only on the domain of interest. Under certain conditions, non-negligible bias is expected for the Sinha-Rao method, while the proposed methods exhibit significant bias reduction, controlled by appropriate choices of the influence function and tuning constants. Monte Carlo simulations are conducted, and comparisons are made between the new robust estimators, the Sinha-Rao estimator and the bias-corrected estimator. Empirical results suggest that the Sinha-Rao method and the bias-adjusted estimator of Chambers et al. (2014) may exhibit a large bias, while the new procedures often offer better performance in terms of bias and mean squared error. In addition, we propose a new bootstrap procedure for MSE estimation of robust small area predictors. Unlike existing approaches, we formally prove the asymptotic validity of the proposed bootstrap method. Moreover, the proposed method is semi-parametric, i.e., it does not rely on specific distributional assumptions about the errors and random effects of the unit-level model underlying the small area estimation, which makes it particularly attractive and more widely applicable. We assess the finite-sample performance of our bootstrap estimator through Monte Carlo simulations. The results show that our procedure performs satisfactorily and outperforms existing ones. Application of the proposed method is illustrated by analyzing a well-known outlier-contaminated dataset on small county crop areas from north-central Iowa farms and Landsat satellite images. Concerning imputation in the presence of item non-response, several single imputation methods have been studied.
Deterministic regression imputation, which includes ratio imputation and mean imputation, is often used in surveys. These imputation methods may lead to biased imputed estimators if the imputation model or the non-response model is not properly specified. Recently, doubly robust imputed estimators have been developed; however, in the presence of outliers, they can be very unstable. Using the concept of conditional bias as a measure of influence (Beaumont, Haziza and Ruiz-Gazen, 2013), we propose an outlier-robust version of the doubly robust imputed estimator, which is thus referred to as a triply robust imputed estimator. The results of simulation studies show that the proposed estimator performs satisfactorily for an appropriate choice of the tuning constant.
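The doubly robust imputed estimator that the triply robust proposal hardens against outliers combines an imputation model m(x) with estimated response probabilities. A sketch of the standard form only (the robustification via conditional bias and a tuning constant is not reproduced here; m_hat and p_hat are user-supplied models):

```python
import numpy as np

def doubly_robust_mean(y, x, responded, m_hat, p_hat):
    """Doubly robust estimator of a population mean under item non-response:
    consistent if either the imputation model m_hat(x) or the response
    probability model p_hat(x) is correctly specified. `y` may hold NaN
    for non-respondents; `responded` is a boolean array."""
    m = m_hat(x)                        # model predictions for everyone
    p = p_hat(x)                        # estimated response probabilities
    correction = np.where(responded, (np.nan_to_num(y) - m) / p, 0.0)
    return np.mean(m + correction)
```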