Theses on the topic "Analyse statistique multiple"
Consult the 43 best theses for your research on the topic "Analyse statistique multiple".
You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Zhang, Jian. "Bayesian multiple hypotheses testing with quadratic criterion". Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0016/document.
The anomaly detection and localization problem can be treated as a multiple hypotheses testing (MHT) problem in the Bayesian framework. The Bayesian test with the 0-1 loss function is a standard solution for this problem, but in practice the alternative hypotheses have quite different importance, which the 0-1 loss function does not reflect; the quadratic loss function is more appropriate. The objective of the thesis is the design of a Bayesian test with the quadratic loss function and its asymptotic study. The construction of the test is made in two steps. In the first step, a Bayesian test with the quadratic loss function for the MHT problem without the null hypothesis is designed, and the lower and upper bounds of the misclassification probabilities are calculated. The second step constructs a Bayesian test for the MHT problem with the null hypothesis. The lower and upper bounds of the false alarm probabilities, the missed detection probabilities and the misclassification probabilities are calculated. From these bounds, the asymptotic equivalence between the proposed test and the standard one with the 0-1 loss function is studied. Extensive simulations and an acoustic experiment illustrate the effectiveness of the new statistical test.
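The practical difference between the two losses can be seen in a small numpy sketch (illustrative only, with made-up posterior probabilities, not Zhang's actual test): under the 0-1 loss the Bayes rule returns the MAP hypothesis, while under a quadratic loss over ordered hypotheses it minimizes the posterior expected squared decision error, and the two decisions can disagree.

```python
import numpy as np

# Made-up posterior probabilities of K ordered alternative hypotheses
# (in the thesis these would come from the Bayesian anomaly model).
posterior = np.array([0.05, 0.40, 0.35, 0.20])
K = len(posterior)

# 0-1 loss: the Bayes rule is the MAP hypothesis.
d_01 = np.argmax(posterior)

# Quadratic loss L(i, j) = (i - j)^2: confusing distant hypotheses costs
# more, so the Bayes rule minimizes the posterior expected quadratic loss.
loss = (np.arange(K)[:, None] - np.arange(K)[None, :]) ** 2
d_quad = np.argmin(loss @ posterior)

print(d_01, d_quad)  # here 1 vs 2: the two rules disagree
```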
Abdessemed, Lila. "Intégration de la contiguïté en analyse factorielle discriminante et généralisation de l'analyse factorielle multiple aux tableaux de fréquence". Rennes 1, 1994. http://www.theses.fr/1994REN10029.
Collignon, Olivier. "Recherche statistique de biomarqueurs du cancer et de l'allergie à l'arachide". Phd thesis, Nancy 1, 2009. http://tel.archives-ouvertes.fr/tel-00430177.
Dupuy, Mariette. "Analyse des caractéristiques électriques pour la détection des sujets à risque de mort subite cardiaque". Electronic Thesis or Diss., Bordeaux, 2025. http://www.theses.fr/2025BORD0002.
Sudden cardiac death (SCD) accounts for 30% of adult mortality in industrialized countries. The majority of SCD cases result from an arrhythmia called ventricular fibrillation, which itself results from structural abnormalities in the heart muscle. Despite the existence of effective therapies, most individuals at risk of SCD are not identified preventively due to the lack of available testing. Developing specific markers on electrocardiographic recordings would enable the identification and stratification of SCD risk. Over the past six years, the Liryc Institute has recorded surface electrical signals from over 800 individuals (both healthy and pathological) using a high-resolution 128-electrode device. Features were calculated from these signals (signal duration per electrode, frequency, amplitude fractionation, etc.). In total, more than 1,500 electrical features are available per patient. During acquisition with the 128-electrode system in a hospital setting, noise or poor positioning of specific electrodes sometimes prevents calculating the intended features, leading to an incomplete database. This thesis is organized around two main axes. First, we developed a method for imputing missing data to address the problem of faulty electrodes. Then, we developed a risk score for sudden-death risk stratification. The most commonly used family of methods for handling missing data is imputation, ranging from simple completion by averaging to local aggregation methods, local regressions, optimal transport, or even modifications of generative models. Recently, Autoencoders (AE) and, more specifically, Denoising AutoEncoders (DAE) have performed well at this task. AEs are neural networks used to learn a representation of data in a reduced-dimensional space. DAEs are AEs that have been proposed to reconstruct original data from noisy data. In this work, we propose a new methodology based on DAEs, called the modified Denoising AutoEncoder (mDAE), to allow for the imputation of missing data. The second research axis of the thesis focused on developing a risk score for sudden cardiac death. DAEs can model and reconstruct complex data. We trained DAEs to model the distribution of healthy individuals based on a selected subset of electrical features. Then, we used these DAEs to discriminate pathological patients from healthy individuals by analyzing the imputation quality of the DAE on partially masked features. We also compared different classification methods to establish a risk score for sudden death.
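As an illustration of the principle only (the mDAE's specific loss and masking scheme are not reproduced here, and the function name and all settings below are made-up placeholders), a generic denoising-autoencoder imputer can be sketched in PyTorch:

```python
import torch
import torch.nn as nn

def dae_impute(X, mask, hidden=32, epochs=500, p_drop=0.2, lr=1e-2, seed=0):
    """Impute missing entries of X (mask == 0 where missing) with a generic
    denoising autoencoder: hide random observed entries, learn to rebuild
    them, then read the reconstruction at the truly missing positions."""
    torch.manual_seed(seed)
    X = torch.as_tensor(X, dtype=torch.float32)
    mask = torch.as_tensor(mask, dtype=torch.float32)        # 1 = observed
    X0 = torch.where(mask.bool(), X, torch.zeros_like(X))    # zero-fill holes

    d = X.shape[1]
    model = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d))
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        keep = (torch.rand_like(X0) > p_drop).float()        # denoising corruption
        out = model(X0 * keep)
        loss = (((out - X0) ** 2) * mask).sum() / mask.sum() # score observed only
        opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():
        X_hat = model(X0)
    return torch.where(mask.bool(), X, X_hat)                # keep observed values
```

Repeatedly corrupting the observed entries during training is what lets the network learn to fill holes it has never seen.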
Girka, Fabien. "Development of new statistical/ML methods for identifying multimodal factors related to the evolution of Multiple Sclerosis". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG075.
Studying a given phenomenon under multiple views can reveal more of the mechanisms at stake than considering each view separately. In order to design a study under such a paradigm, measurements are usually acquired through different modalities, resulting in multimodal/multiblock/multi-source data. One statistical framework explicitly suited for the joint analysis of such multi-source data is Regularized Generalized Canonical Correlation Analysis (RGCCA). RGCCA extracts canonical vectors and components that summarize the different views and their interactions. The contributions of this thesis are fourfold. (i) Improve and enrich the RGCCA R package to democratize its use. (ii) Extend the RGCCA framework to better handle tensor data by imposing a low-rank tensor factorization on the extracted canonical vectors. (iii) Propose and investigate simultaneous versions of RGCCA to obtain all canonical components at once. The proposed methods pave the way for new extensions of RGCCA. (iv) Use the developed tools and expertise to analyze multiple sclerosis and leukodystrophy data. A focus is made on identifying biomarkers differentiating between patients and healthy controls or between groups of patients.
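RGCCA itself is distributed as the R package mentioned in (i). As a rough Python stand-in for the core mechanism, the two-block special case (classical canonical correlation, which RGCCA regularizes and generalizes to several blocks and tensors) can be sketched with scikit-learn; the two "views" below are synthetic placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two synthetic views of the same 100 subjects sharing one latent factor,
# e.g. imaging features and clinical scores in a multi-source study.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
view_imaging = latent @ rng.normal(size=(1, 20)) + 0.5 * rng.normal(size=(100, 20))
view_clinical = latent @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(100, 5))

# Canonical vectors/components summarizing the two views and their interaction.
cca = CCA(n_components=1).fit(view_imaging, view_clinical)
u, v = cca.transform(view_imaging, view_clinical)
print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])  # close to 1: shared factor found
```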
Kumar, Vandhna. "Descente d'échelle statistique du niveau de la mer pour les îles du Pacifique Sud-Ouest : une approche de régression linéaire multiple". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30234.
Sea level rise is a growing concern in the islands of the western Pacific. Over the altimetry era (1993-present), sea level rise rates in the western tropical Pacific were amongst the highest recorded across the world ocean, reaching up to 3-4 times the global mean. As more and more affected communities relocate to higher ground to escape the rising seas, there is a compelling need for information on local scales to ease the adaptation and planning process. This is not a straightforward process, as sea level varies regionally, driven by wind and ocean circulation patterns and the prevailing climate modes (e.g. ENSO, PDO/IPO). On local scales, substantial sea level changes can result from natural or anthropogenically induced vertical ground motion. Motivated by such concerns, this thesis focuses on developing a statistical downscaling technique, namely a multiple linear regression (MLR) model, to simulate island sea levels at selected sites in the southwest Pacific - Suva and Lautoka in Fiji, and Nouméa in New Caledonia. The model is based on the knowledge that sea level variations in the tropical Pacific are mainly thermosteric in nature (temperature-related changes in ocean water density) and that these thermosteric variations are dominated by wind-forced, westward-propagating Rossby waves. The MLR experiments are conducted over the 1988-2014 study period, with a focus on interannual-to-decadal sea level variability and trend. Island sea levels are first expressed as a sum of steric and mass changes. Then, a more dynamical approach using wind stress curl as a proxy for the thermosteric component is undertaken to construct the MLR model. In the latter case, island sea levels are perceived as a composite of global, regional and local components, where the second is dominant. The MLR model takes wind stress curl as the dominant regional regressor (via a Rossby wave model), and the local halosteric component (salinity-related changes in ocean water density), local wind stress, and local sea surface temperature as minor regressors. A stepwise regression function is used to isolate statistically significant regressors before calibrating the MLR model. The modeled sea level shows high agreement with observations, capturing 80% of the variance on average. Stationarity tests on the MLR model indicate that it can be applied skillfully to projections of future sea level. The statistical downscaling approach overall provides insights on key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to a given extent, most of the variance is driven by regional factors. [...]
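The two-stage mechanics described here (stepwise isolation of significant regressors, then an ordinary least-squares fit) can be sketched as follows; the regressor names and monthly series are hypothetical stand-ins, not the thesis's actual Rossby-wave-filtered inputs:

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(y, candidates, alpha=0.05):
    """Greedy forward selection: add the candidate with the smallest
    p-value while it stays below alpha, then refit the final OLS model."""
    selected, remaining = [], dict(candidates)
    while remaining:
        pvals = {}
        for name, x in remaining.items():
            cols = [candidates[s] for s in selected] + [x]
            fit = sm.OLS(y, sm.add_constant(np.column_stack(cols))).fit()
            pvals[name] = fit.pvalues[-1]
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:
            break
        selected.append(best)
        remaining.pop(best)
    cols = [candidates[s] for s in selected]
    X = sm.add_constant(np.column_stack(cols)) if cols else np.ones((len(y), 1))
    return selected, sm.OLS(y, X).fit()

# Hypothetical monthly series, 1988-2014 (324 months).
rng = np.random.default_rng(0)
n = 324
wind_curl = rng.normal(size=n)    # regional wind stress curl (Rossby-wave proxy)
halosteric = rng.normal(size=n)   # local salinity-related density changes
local_sst = rng.normal(size=n)    # local sea surface temperature
sea_level = 2.0 * wind_curl + 0.3 * halosteric + rng.normal(scale=0.5, size=n)

sel, fit = forward_stepwise(sea_level, {"wind_curl": wind_curl,
                                        "halosteric": halosteric,
                                        "local_sst": local_sst})
print(sel, round(fit.rsquared, 2))
```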
Jmel, Saïd. "Applications des modèles graphiques au choix de variables et à l'analyse des interactions dans une table de contingence multiple". Toulouse 3, 1992. http://www.theses.fr/1992TOU30091.
Pluntz, Matthieu. "Sélection de variables en grande dimension par le Lasso et tests statistiques - application à la pharmacovigilance". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR002.
Variable selection in high-dimensional regressions is a classic problem in health data analysis. It aims to identify a limited number of factors associated with a given health event among a large number of candidate variables such as genetic factors or environmental or drug exposures. The Lasso regression (Tibshirani, 1996) provides a series of sparse models where variables appear one after another depending on the regularization parameter's value. It requires a procedure for choosing this parameter and thus the associated model. In this thesis, we propose procedures for selecting one of the models of the Lasso path, which belong to or are inspired by the statistical testing paradigm. Thus, we aim to control the risk of selecting at least one false positive (Family-Wise Error Rate, FWER), unlike most existing post-processing methods of the Lasso, which accept false positives more easily. Our first proposal is a generalization of the Akaike Information Criterion (AIC), which we call the Extended AIC (EAIC). We penalize the log-likelihood of the model under consideration by its number of parameters weighted by a function of the total number of candidate variables and the targeted level of FWER, but not the number of observations. We obtain this function by observing the relationship between comparing the information criteria of nested sub-models of a high-dimensional regression and performing multiple likelihood ratio tests, about which we prove an asymptotic property. Our second proposal is a test of the significance of a variable appearing on the Lasso path. Its null hypothesis depends on a set A of already selected variables and states that it contains all the active variables. As the test statistic, we aim to use the regularization parameter value from which a first variable outside A is selected by the Lasso. This choice faces the fact that the null hypothesis is not specific enough to define the distribution of this statistic and thus its p-value. We solve this by replacing the statistic with its conditional p-value, which we define conditionally on the non-penalized estimated coefficients of the model restricted to A. We estimate the conditional p-value with an algorithm that we call simulation-calibration, where we simulate outcome vectors and then calibrate them on the observed outcome's estimated coefficients. We adapt the calibration heuristically to the case of generalized linear models (binary and Poisson), in which it turns into an iterative and stochastic procedure. We prove that using our test controls the risk of selecting a false positive in linear models, both when the null hypothesis is verified and, under a correlation condition, when the set A does not contain all active variables. We evaluate the performance of both procedures through extensive simulation studies, which cover both the potential selection of a variable under the null hypothesis (or its equivalent for the EAIC) and the overall model selection procedure. We observe that our proposals compare well to their closest existing counterparts, the BIC and its extended versions for the EAIC, and Lockhart et al.'s (2014) covariance test for the simulation-calibration test. We also illustrate both procedures in the detection of exposures associated with drug-induced liver injuries (DILI) in the French national pharmacovigilance database (BNPV) by measuring their performance using the DILIrank reference set of known associations.
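The EAIC recipe can be pictured with a sketch that walks the Lasso path and scores each support by a penalized log-likelihood. Only the shape of the procedure is reproduced: the per-parameter penalty below (a Bonferroni-style chi-squared quantile depending on the number of candidates p and the FWER target, not on n) is a stand-in consistent with the abstract, not the weighting function the thesis actually derives.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LinearRegression, lasso_path

def eaic_like_selection(X, y, alpha_fwer=0.05):
    """Select a support along the Lasso path by a penalized log-likelihood.
    The penalty is a placeholder that depends on p and the FWER level,
    but not on the number of observations n, as the abstract describes."""
    n, p = X.shape
    _, coefs, _ = lasso_path(X, y)
    penalty = chi2.ppf(1 - alpha_fwer / p, df=1)   # per selected variable
    best_crit, best_support = np.inf, np.array([], dtype=int)
    for k in range(coefs.shape[1]):
        support = np.flatnonzero(coefs[:, k])
        if support.size == 0:
            continue
        Xs = X[:, support]
        resid = y - LinearRegression().fit(Xs, y).predict(Xs)  # ML refit
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        crit = -2 * loglik + penalty * support.size
        if crit < best_crit:
            best_crit, best_support = crit, support
    return best_support
```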
Tran, Xuan Quang. "Les modèles de régression dynamique et leurs applications en analyse de survie et fiabilité". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0147/document.
This thesis explores dynamic regression models, assessing statistical inference for survival and reliability data analysis. The dynamic regression models considered, including the parametric proportional hazards and accelerated failure time models, contain possibly time-dependent covariates. We discussed the following problems in this thesis. At first, we presented a generalized chi-squared test statistic Y²ₙ that is convenient for fitting survival and reliability data in three cases: complete data, censored data, and censored data with covariates. We described in detail the theory and the mechanism for using the Y²ₙ test statistic in survival and reliability data analysis. Next, we considered flexible parametric models, evaluating their statistical significance by using the Y²ₙ and log-likelihood test statistics. These parametric models include accelerated failure time (AFT) and proportional hazards (PH) models based on the Hypertabastic distribution. These two models are proposed to investigate the distribution of survival and reliability data in comparison with some other parametric models. Simulation studies were designed to demonstrate the asymptotic normality of the maximum likelihood estimators of the Hypertabastic distribution's parameters, and to validate the asymptotic property of the Y²ₙ test statistic for the Hypertabastic distribution when the right-censoring probability equals 0% or 20%. In the last chapter, we applied the two parametric models above to three real-life datasets. The first was the dataset given by Freireich et al. on the comparison of two treatment groups, with additional information about log white blood cell count, to test the ability of a therapy to prolong the remission times of acute leukemia patients. It showed that the Hypertabastic AFT model is an accurate model for this dataset. The second was a brain tumour study with malignant glioma patients, given by Sauerbrei & Schumacher. It showed that the best model is the Hypertabastic PH model after adding five significant covariates. The third application was the dataset given by Semenova & Bitukov on the survival times of multiple myeloma patients. We did not propose an exact model for this dataset because of an existing intersection of survival times; we therefore suggest fitting another dynamic model, such as the Simple Cross-Effect model, for this dataset.
Héraud, Bousquet Vanina. "Traitement des données manquantes en épidémiologie : application de l’imputation multiple à des données de surveillance et d’enquêtes". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA11T017/document.
The management of missing values is a common and widespread problem in epidemiology. The most common technique restricts the data analysis to subjects with complete information on the variables of interest, which can substantially reduce statistical power and precision and may also result in biased estimates. This thesis investigates the application of multiple imputation methods to manage missing values in epidemiological studies and surveillance systems for infectious diseases. The study designs to which multiple imputation was applied were diverse: a risk analysis of HIV transmission through blood transfusion, a case-control study on risk factors for Campylobacter infection, and a capture-recapture study to estimate the number of new HIV diagnoses among children. We then performed multiple imputation analysis on data from a surveillance system for chronic hepatitis C (HCV) to assess risk factors of severe liver disease among HCV-infected patients who reported drug use. Within this study on HCV, we proposed guidelines for applying a sensitivity analysis in order to test the hypotheses underlying multiple imputation. Finally, we describe how we elaborated and applied an ongoing multiple imputation process for the French national HIV surveillance database, and how we evaluated and attempted to validate the multiple imputation procedures. Based on these practical applications, we worked out a strategy to handle missing data in surveillance databases, including the thorough examination of the incomplete database, the building of the imputation model, and the procedure to validate imputation models and examine the underlying multiple imputation hypotheses.
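The pooling step shared by all these applications is Rubin's rules; a minimal sketch with made-up per-imputation estimates:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool M point estimates and their within-imputation variances
    obtained from M imputed datasets (Rubin's rules)."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    M = len(estimates)
    qbar = estimates.mean()            # pooled estimate
    ubar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    t = ubar + (1 + 1 / M) * b         # total variance
    return qbar, t

# e.g. a log-odds ratio estimated on M = 5 imputed datasets (made-up values)
est, var = rubin_pool([0.42, 0.47, 0.40, 0.45, 0.44],
                      [0.012, 0.011, 0.013, 0.012, 0.012])
print(est, var ** 0.5)  # pooled estimate and its standard error
```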
Blain, Alexandre. "Reliable statistical inference : controlling the false discovery proportion in high-dimensional multivariate estimators". Electronic Thesis or Diss., université Paris-Saclay, 2024. https://theses.hal.science/tel-04935172.
Statistically controlled variable selection is a fundamental problem encountered in diverse fields where practitioners have to assess the importance of input variables with regard to an outcome of interest. In this context, statistical control aims at limiting the proportion of false discoveries, meaning the proportion of selected variables that are independent of the outcome of interest. In this thesis, we develop methods that aim at statistical control in high-dimensional settings while retaining statistical power. We present four key contributions in this avenue of work. First, we introduce Notip, a non-parametric method that allows users to obtain guarantees on the proportion of true discoveries in any brain region. This procedure improves detection sensitivity over existing methods while retaining control of false discoveries. Second, we extend the Knockoff framework by proposing KOPI, a method that provides False Discovery Proportion (FDP) control in probability rather than in expectation. KOPI is naturally compatible with aggregation of multiple Knockoff draws, addressing the randomness of traditional Knockoff inference. Third, we develop a diagnostic tool to identify violations of the exchangeability assumption in Knockoffs, accompanied by a novel non-parametric Knockoff generation method that restores control of false discoveries. Finally, we introduce CoJER to enhance conformal prediction by providing sharp control of the False Coverage Proportion (FCP) when multiple test points are considered, ensuring more reliable uncertainty estimates. CoJER can also be used to aggregate the confidence intervals provided by different predictive models, thus mitigating the impact of modeling choices. Together, these contributions advance the reliability of statistical inference in high-dimensional settings such as neuroimaging and genomic data.
Wolley, Chirine. "Apprentissage supervisé à partir des multiples annotateurs incertains". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4070/document.
In supervised learning tasks, obtaining the ground truth label for each instance of the training dataset can be difficult, time-consuming and/or expensive. With the advent of infrastructures such as the Internet, an increasing number of web services propose crowdsourcing as a way to collect a large enough set of labels from internet users. The use of these services provides an exceptional facility to collect labels from anonymous annotators, and thus considerably simplifies the process of building labeled datasets. Nonetheless, the main drawback of crowdsourcing services is their lack of control over the annotators and their inability to verify and control the accuracy of the labels and the level of expertise of each labeler. Hence, managing the annotators' uncertainty is key to learning from imperfect annotations. This thesis provides three algorithms for learning from multiple uncertain annotators. IGNORE generates a classifier that predicts the label of a new instance and evaluates the performance of each annotator according to their level of uncertainty. X-Ignore considers that the performance of the annotators depends both on their uncertainty and on the quality of the initial dataset to be annotated. Finally, ExpertS deals with the problem of annotator selection when generating the classifier: it identifies expert annotators and learns the classifier based only on their labels. We conducted a large set of experiments in order to evaluate our models, using both experimental and real-world medical data. The results demonstrate the performance and accuracy of our models compared to previous state-of-the-art solutions in this context.
Koulechova, Gozal Olga. "Analyse statistique des mesures multiples en application au traitement d'image". Bordeaux 1, 2000. http://www.theses.fr/2000BOR10536.
Benghanem, Abdelghani. "Étude et optimisation de la qualité sonore d'un véhicule récréatif motorisé". Mémoire, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11573.
Peyre, Julie. "Analyse statistique des données issues des biopuces à ADN". Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00012041.
In a first chapter, we study the problem of data normalization, whose objective is to eliminate spurious variations between population samples so as to retain only the variations explained by the biological phenomena. We present several existing methods, for which we propose improvements. To guide the choice of a normalization method, a method for simulating microarray data is developed.
In a second chapter, we address the problem of detecting differentially expressed genes between two series of experiments. This reduces to a multiple hypothesis testing problem. Several approaches are considered: model selection and penalization, an FDR method based on a wavelet decomposition of the test statistics, and Bayesian thresholding.
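For reference, the baseline FDR procedure on which such refinements build is the Benjamini-Hochberg step-up rule; a minimal sketch (the standard procedure, not the wavelet-based variant of the thesis):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the largest i with p_(i) <= i*q/m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```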
In the last chapter, we consider supervised classification problems for microarray data. To remedy the "curse of dimensionality", we developed a semi-parametric dimension reduction method based on maximizing a local likelihood criterion in single-index generalized linear models. The dimension reduction step is then followed by local polynomial regression to perform supervised classification of the individuals considered.
Shen, Kaikai. "Automatic segmentation and shape analysis of human hippocampus in Alzheimer's disease". Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS072/document.
The aim of this thesis is to investigate the shape change in the hippocampus due to atrophy in Alzheimer's disease (AD). To this end, specific algorithms and methodologies were developed to segment the hippocampus from structural magnetic resonance (MR) images and to model variations in its shape. We use a multi-atlas based segmentation propagation approach for the segmentation of the hippocampus, which has been shown to obtain accurate parcellation of brain structures. We developed a supervised method to build a population-specific atlas database by propagating the parcellations from a smaller generic atlas database. Well-segmented images are inspected and added to the set of atlases, such that the segmentation capability of the atlas set may be enhanced. The population-specific atlases are evaluated in terms of the agreement among the propagated labels when segmenting new cases. Compared with using generic atlases, the population-specific atlases obtain a higher agreement when dealing with images from the target population. Atlas selection is used to improve segmentation accuracy. In addition to the conventional selection by image similarity ranking, atlas selection based on maximum marginal relevance (MMR) re-ranking and least angle regression (LAR) sequences was developed. By taking the redundancy among atlases into consideration, diversity criteria are shown to be more efficient in atlas selection, which is applicable in situations where the number of atlases to be fused is limited by the computational resources. Given the segmented hippocampal volumes, statistical shape models (SSMs) of hippocampi are built on the samples to model the shape variation among the population. The correspondence across the training samples of hippocampi is established by a groupwise optimization of the parameterized shape surfaces. The spherical parameterizations of the hippocampal surfaces are flattened to facilitate reparameterization and interpolation. The reparameterization is regularized by a viscous fluid model, which is solved by a fast implementation based on the discrete sine transform. In order to use the hippocampal SSM to describe the shape of an unseen hippocampal surface, we developed a shape parameter estimator based on the expectation-maximization iterative closest point (EM-ICP) algorithm. A symmetric data term is included to achieve the inverse consistency of the transformation between the model and the shape, which gives a more accurate reconstruction of the shape from the model. The shape prior modeled by the SSM is used in the maximum a posteriori estimation of the shape parameters, which is shown to enforce smoothness and avoid over-fitting. In the study of the hippocampus in AD, we use the SSM to model the hippocampal shape change between healthy control subjects and patients diagnosed with AD. We identify the regions affected by atrophy in AD by assessing the spatial difference between the control and AD groups at each corresponding landmark. Localized shape analysis is performed on the regions exhibiting significant inter-group differences, which is shown to improve the discrimination ability of the principal component analysis (PCA) based SSM. The principal components describing the localized shape variability among the population are also shown to display stronger correlation with the decline of episodic memory scores linked to the pathology of the hippocampus in AD.
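Once correspondence is established, the PCA-based SSM step itself is compact; a minimal numpy sketch on hypothetical, already-corresponded landmark vectors (the groupwise optimization and EM-ICP fitting stages are not shown):

```python
import numpy as np

def build_ssm(shapes, n_modes=5):
    """shapes: (n_subjects, n_landmarks*3) corresponded landmark vectors.
    Returns the mean shape and the principal modes of variation."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                          # eigenvectors of covariance
    variances = s[:n_modes] ** 2 / (len(shapes) - 1)
    return mean, modes, variances

def project(shape, mean, modes):
    """Shape parameters b such that shape ~ mean + b @ modes."""
    return (shape - mean) @ modes.T

# 40 hypothetical hippocampal surfaces, 1000 corresponded 3-D landmarks each.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(40, 3000))
mean, modes, var = build_ssm(shapes)
b = project(shapes[0], mean, modes)
reconstruction = mean + b @ modes
```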
Albisser, Marie. "Identification of aerodynamic coefficients from free flight data". Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0083.
The use of aerodynamic coefficients for the characterization of the behaviour of an object in flight remains one of the oldest and still most active research topics in the field of exterior ballistics. The present study investigates the identification of aerodynamic coefficients based on measured data gathered during free flight tests with different measurement techniques. This project deals with modelling, and with defining and mastering the parameter identification techniques best suited to the problem of aerodynamic coefficient determination. In the frame of this study, an identification procedure was developed for determining aerodynamic coefficients from free flight measurements, and was tested on two application cases: a re-entry space vehicle and a fin-stabilized reference projectile. This procedure requires several steps, such as the description of the behaviour of the vehicle in free flight as a nonlinear state-space model representation, polynomial descriptions of the aerodynamic coefficients as functions of Mach number and incidence, a priori and a posteriori identifiability analyses, followed by the estimation of the parameters from free flight measurements. Moreover, to increase the probability that the coefficients define the vehicle's aerodynamics over the entire range of test conditions and to improve the accuracy of the estimated coefficients, a multiple fit strategy was considered. This approach provides a common set of aerodynamic coefficients determined from multiple data series analyzed simultaneously, and gives a more complete spectrum of the vehicle's motion.
Vo-Van, Claudine. "Analyse de données pharmacocinétiques fragmentaires : intégration dans le développement de nouvelles molécules". Paris 5, 1994. http://www.theses.fr/1994PA05P044.
Bureik, Jan-Philipp. "Number statistics and momentum correlations in interacting Bose gases". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP014.
This thesis work is dedicated to the study of number statistics and momentum correlations in interacting lattice Bose gases. The Bose-Hubbard model is simulated by loading Bose-Einstein condensates (BECs) of metastable Helium-4 atoms into a three-dimensional (3D) optical lattice. This model exhibits a quantum phase transition from a superfluid to a Mott insulator that is driven by interaction-induced quantum fluctuations. The objective of this work is to comprehend the role of these quantum fluctuations by analyzing their signatures in momentum space. The original detection scheme employed towards this aim provides the single-particle-resolved momentum distribution of the atoms in 3D. From such datasets, made up of thousands of individual atoms, the number statistics of occupation of different sub-volumes of momentum space yield information about correlation and coherence properties of the interacting Bose gas. At close-by momenta, these occupation probabilities permit the identification of underlying pure-state statistics in the case of textbook many-body states such as lattice superfluids and Mott insulators. In the weakly-interacting regime, well-established correlations between pairs of atoms at opposite momenta are observed. Furthermore, these pair correlations are found to decrease in favor of more intricate correlations between more than two particles as interactions are increased. A direct observation of non-Gaussian correlations encapsulates the complex statistical nature of strongly-interacting superfluids well before the Mott insulator phase transition. Finally, at the phase transition, fluctuations of the occupation number of the BEC mode are found to be enhanced, constituting a direct signature of the quantum fluctuations driving the transition. System-size-independent quantities such as the Binder cumulant are shown to exhibit distinctive sharp features even in a finite-size system, and hold promise as suitable observables for determining universal behavior when measured in a homogeneous system.
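For reference, the Binder cumulant of a sample of shot-to-shot occupation numbers has a one-line estimator (the standard uncentered definition; the thesis may use a centered or rescaled variant, and the data below are made up):

```python
import numpy as np

def binder_cumulant(n_bec):
    """U = 1 - <n^4> / (3 <n^2>^2), computed from shot-to-shot
    occupation numbers of the condensate mode."""
    n_bec = np.asarray(n_bec, dtype=float)
    return 1.0 - np.mean(n_bec ** 4) / (3.0 * np.mean(n_bec ** 2) ** 2)

# Made-up example: condensate occupation over 1000 experimental shots.
shots = np.random.default_rng(0).normal(5000.0, 300.0, size=1000)
print(binder_cumulant(shots))
```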
Grigolon, Silvia. "Modelling and inference for biological systems : from auxin dynamics in plants to protein sequences". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112178/document.
All biological systems are made of atoms and molecules interacting in a non-trivial manner. Such non-trivial interactions induce complex behaviours allowing organisms to fulfill all their vital functions. These features can be found in all biological systems at different levels, from molecules and genes up to cells and tissues. In the past few decades, physicists have been paying much attention to these intriguing aspects by framing them in network approaches, for which a number of theoretical methods offer many powerful ways to tackle systemic problems. At least two different ways of approaching these challenges may be considered: direct modeling methods and approaches based on inverse methods. In this thesis, we made use of both methods to study three different problems occurring at three different biological scales. In the first part of the thesis, we mainly deal with the very early stages of tissue development in plants. We propose a model aimed at understanding which features drive the spontaneous collective behaviour in space and time of PINs, the transporters which pump the phytohormone auxin out of cells. In the second part of the thesis, we focus instead on the structural properties of proteins. In particular, we ask how conservation of protein function across different organisms constrains the evolution of protein sequences and their diversity. Hereby we propose a new method to extract the sequence positions most relevant for protein function. Finally, in the third part, we study the intracellular molecular networks that implement auxin signaling in plants. In this context, and using extensions of a previously published model, we examine how network structure affects network function. The comparison of different network topologies provides insights into the role of different modules, and of a negative feedback loop in particular. Our introduction of the dynamical response function allows us to characterize the systemic properties of auxin signaling when external stimuli are applied.
Fouchet, Arnaud. "Kernel methods for gene regulatory network inference". Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0058/document.
New technologies in molecular biology, in particular DNA microarrays, have greatly increased the quantity of available data. In this context, methods from mathematics and computer science have been actively developed to extract information from large datasets. In particular, the problem of gene regulatory network inference has been tackled using many different mathematical and statistical models, from the most basic ones (correlation, Boolean or linear models) to the most elaborate (regression trees, Bayesian models with latent variables). Despite their qualities when applied to similar problems, kernel methods have scarcely been used for gene network inference because of their lack of interpretability. In this thesis, two approaches are developed to obtain interpretable kernel methods. Firstly, from a theoretical point of view, some kernel methods are shown to consistently estimate a transition function and its partial derivatives from a learning dataset. These estimations of partial derivatives allow better inference of the gene regulatory network than previous methods on realistic gene regulatory networks. Secondly, an interpretable kernel method based on multiple kernel learning is presented. This method, called lockni, provides state-of-the-art results on real and realistically simulated datasets.
Chion, Marie. "Développement de nouvelles méthodologies statistiques pour l'analyse de données de protéomique quantitative". Thesis, Strasbourg, 2021. http://www.theses.fr/2021STRAD025.
Proteomic analysis consists of studying all the proteins expressed by a given biological system, at a given time and under given conditions. Recent technological advances in mass spectrometry and liquid chromatography make it possible to envisage large-scale and high-throughput proteomic studies. This thesis work focuses on developing statistical methodologies for the analysis of quantitative proteomics data, and thus presents three main contributions. The first part proposes to use monotone spline regression models to estimate the amounts of all peptides detected in a sample using internal standards labelled for a subset of targeted peptides. The second part presents a strategy to account for the uncertainty induced by the multiple imputation process in the differential analysis, also implemented in the mi4p R package. Finally, the third part proposes a Bayesian framework for differential analysis, which notably makes it possible to account for correlations between peptide intensities.
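The calibration idea in the first contribution can be pictured with an off-the-shelf monotone fit; a sketch with scikit-learn's isotonic regression (a piecewise-constant stand-in for the thesis's smooth monotone splines; the calibration data are made up):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical calibration: observed intensity -> known spiked-in quantity
# for the labelled standard peptides (monotone by construction).
rng = np.random.default_rng(0)
intensity = np.sort(rng.uniform(0, 10, 50))
quantity = 2.0 * np.log1p(intensity) + rng.normal(0, 0.1, 50)

iso = IsotonicRegression(out_of_bounds="clip").fit(intensity, quantity)
# Transfer the monotone calibration to other detected peptides.
estimated = iso.predict(rng.uniform(0, 10, 5))
print(estimated)
```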
Bouatou, Mohamed. "Estimation non linéaire par ondelettes : régression et survie". Phd thesis, Université Joseph Fourier (Grenoble), 1997. http://tel.archives-ouvertes.fr/tel-00004921.
Soudain-Pineau, Mickaël. "Statistiques appliquées à la physiologie du sport dans l'exploration des variables influençant la performance chez les cyclistes". Reims, 2008. http://theses.univ-reims.fr/exl-doc/GED00000981.pdf.
First, we studied a population of 112 cyclists divided into three amateur levels. These subjects performed an incremental test with three-minute stages. Anthropometric, physiological and physical variables were studied at two time points for each individual: at the lactate threshold and at maximal exercise. We used discriminant analysis to obtain, at the lactate threshold, a linear discriminant function and, at maximal exercise, a quadratic discriminant function composed of the most significant variables. Then, for 213 professional cyclists, we had the values of several hormones before and after an incremental test, as well as physical and physiological parameters for each athlete. We studied the behavior of these hormones and the impact of the physical and physiological parameters on the expected values. Multiple regression analysis allowed us to establish a linear model composed of the most significant parameters explaining power. Finally, the study of blood lactate measured during an incremental exercise and for 10 minutes after exercise, in a population of professional cyclists, allowed us, using an existing model, to model the blood lactate response. This function, describing the evolution of blood lactate throughout the exercise duration, allows a simulation of the return to a basal level.
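The first analysis maps onto standard tools: a linear discriminant function at the lactate threshold and a quadratic one at maximal exercise. A sketch with scikit-learn, on made-up stand-ins for the thesis's variables:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Hypothetical measurements on 112 cyclists; columns stand in for e.g.
# power, heart rate and VO2 at each of the two measurement points.
X_threshold = rng.normal(size=(112, 3))   # at the lactate threshold
X_max = rng.normal(size=(112, 3))         # at maximal exercise
level = rng.integers(0, 3, size=112)      # three amateur levels

lda = LinearDiscriminantAnalysis().fit(X_threshold, level)
qda = QuadraticDiscriminantAnalysis().fit(X_max, level)
print(lda.score(X_threshold, level), qda.score(X_max, level))
```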
Slaoui, Meryem. "Analyse stochastique et inférence statistique des solutions d’équations stochastiques dirigées par des bruits fractionnaires gaussiens et non gaussiens". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I079.
This doctoral thesis is devoted to the study of the solutions of stochastic differential equations driven by additive Gaussian and non-Gaussian noises. As the non-Gaussian driving noise, we use Hermite processes. These processes form a family of self-similar stochastic processes with stationary increments and long memory, and they can be expressed as multiple Wiener-Itô integrals. The class of Hermite processes includes the well-known fractional Brownian motion, which is the only Gaussian Hermite process, and the Rosenblatt process. In a first chapter, we consider the solution to the linear stochastic heat equation driven by a multiparameter Hermite process of any order and with Hurst multi-index H. We study the existence and establish various properties of its mild solution. We also discuss its probability distribution in the non-Gaussian case. The second part deals with the asymptotic behavior in distribution of solutions to stochastic equations when the Hurst parameter converges to the boundary of its interval of definition. We focus on the case of the Hermite Ornstein-Uhlenbeck process, which is the solution of the Langevin equation driven by the Hermite process, and on the case of the solution to the stochastic heat equation with additive Hermite noise. These results show that the obtained limits cover a large class of probability distributions, from Gaussian laws to distributions of random variables in a Wiener chaos of higher order. In the last chapter, we consider the stochastic wave equation driven by an additive Gaussian noise which behaves as a fractional Brownian motion in time and as a Wiener process in space. We show that the sequence of generalized variations satisfies a Central Limit Theorem, and we estimate the rate of convergence via the Stein-Malliavin calculus. The results are applied to construct several consistent estimators of the Hurst index.
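Fractional Brownian motion, the Gaussian member of the Hermite family, is easy to simulate exactly on a grid by factorizing its covariance; a minimal sketch (higher-order Hermite processes, built from multiple Wiener-Itô integrals, are not covered by this construction):

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Exact simulation of fractional Brownian motion on a regular grid by
    Cholesky factorization of its covariance
    Cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
    O(n^3): fine for illustration, not for large n."""
    t = T * np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

# H = 1/2 recovers standard Brownian motion; H > 1/2 gives long memory.
t, path = fbm_cholesky(500, H=0.7)
```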
Rossetto-Giaccherino, Vincent. "Mécanique statistique de systèmes sous contraintes : topologie de l'ADN et simulations électrostatiques". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2002. http://tel.archives-ouvertes.fr/tel-00002205.
Somé, Sobom Matthieu. "Estimations non paramétriques par noyaux associés multivariés et applications". Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2030/document.
This work concerns a nonparametric approach using multivariate mixed associated kernels for the estimation of densities, probability mass functions and regressions whose supports are partially or totally discrete or continuous. Some key aspects of kernel estimation using multivariate continuous (classical) kernels and univariate (discrete and continuous) associated kernels are recalled. Problems of support are also reviewed, as well as a resolution of boundary effects for univariate associated kernels. The multivariate associated kernel is then defined, and a construction by the multivariate mode-dispersion method is provided. This leads to an illustration on the bivariate beta kernel with Sarmanov's correlation structure in the continuous case. Properties of these estimators are studied, such as the bias, variances and mean squared errors. An algorithm for reducing the bias is proposed and illustrated on this bivariate beta kernel. Simulation studies and applications are then performed with the bivariate beta kernel. Three types of bandwidth matrices, namely full, Scott and diagonal, are used. Furthermore, appropriate multiple associated kernels are used in a practical discriminant analysis task: the binomial, categorical, discrete triangular, gamma and beta kernels. Thereafter, associated kernels with or without correlation structure are used in multiple regression. In addition to the previous univariate associated kernels, bivariate beta kernels with or without correlation structure are taken into account. Simulation studies show the performance of the choice of associated kernels with full or diagonal bandwidth matrices. Then, (discrete and continuous) associated kernels are combined to define mixed univariate associated kernels. Using the tools of unification of discrete and continuous analysis, the properties of the mixed associated kernel estimators are shown. This is followed by an R package, created in the univariate case, for the estimation of densities, probability mass functions and regressions. Several smoothing parameter selections are implemented via an easy-to-use interface. Throughout the thesis, bandwidth matrix selections are generally obtained using cross-validation and sometimes Bayesian methods. Finally, some additional information on the normalizing constants of associated kernel estimators is presented for densities or probability mass functions.
Texto completoFriguet, Chloé. "Impact de la dépendance dans les procédures de tests multiples en grande dimension". Phd thesis, Rennes, Agrocampus Ouest, 2010. http://www.theses.fr/2008NSARG007.
Motivated by issues raised by the analysis of gene expression data, this thesis focuses on the impact of dependence on the properties of multiple testing procedures for high-dimensional data. We propose a methodology based on a Factor Analysis model for the correlation structure. Model parameters are estimated thanks to an EM algorithm, and an ad hoc methodology allowing to determine the model that best fits the covariance structure is defined. Moreover, the factor structure provides a general framework to deal with dependence in multiple testing. Two main issues are more particularly considered: the estimation of π0, the proportion of true null hypotheses, and the control of error rates. The proposed framework leads to less variability in the estimation of both π0 and the number of false positives. Consequently, it shows large improvements in power and stability of simultaneous inference with respect to existing multiple testing procedures. These results are illustrated by real data from microarray experiments, and the proposed methodology is implemented in an R package called FAMT.
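A reference point for the π0 issue is Storey's simple estimator, which exploits the uniformity of null p-values; a minimal sketch (the classical estimator whose instability under strong dependence motivates the factor-adjusted version, which is not reproduced here):

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey's estimator of pi0, the proportion of true null hypotheses:
    under the null, p-values are uniform, so the empirical density above
    `lam` estimates pi0."""
    p = np.asarray(pvals)
    return min(1.0, np.mean(p > lam) / (1.0 - lam))
```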
Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data". Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data will require that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements to draw valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out, so that all the subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, in which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH. But other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses.
In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
Oulad, Ameziane Mehdi. "Amélioration de l'exploration de l'espace d'état dans les méthodes de Monte Carlo séquentielles pour le suivi visuel". Thesis, Ecole centrale de Lille, 2017. http://www.theses.fr/2017ECLI0007.
In computer vision applications, visual tracking is an important and fundamental task. Formulating tracking statistically in the Bayesian framework has gained great interest in recent years, due to the capability of sequential Monte Carlo (SMC) methods to adapt to various tracking schemes and to take model uncertainties into account. In practice, the efficiency of SMC methods strongly depends on the proposal density used to explore the state space, so the choice of the proposal is essential. In the thesis, our approach to efficiently exploring the state space aims to derive a close approximation of the optimal proposal. The proposed near-optimal proposal relies on an approximation of the likelihood, using a new form of likelihood based on soft detection information, which is more trustworthy and requires fewer calculations than the usual likelihood. In comparison with previous works, our near-optimal proposal offers a good compromise between computational complexity and optimality. Improving the exploration of the state space is most required in two visual tracking applications: abrupt motion tracking and multiple object tracking. In the thesis, we focus on the ability of the near-optimal SMC methods to deal with abrupt motion situations, and we compare them to the state-of-the-art methods proposed in the literature for these situations. Also, we extend the near-optimal proposal to multiple object tracking scenarios and show the benefit of using the near-optimal SMC algorithms for these scenarios. Moreover, we implemented the Local PF, which partitions large state spaces into separate smaller subspaces while modelling interactions.
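For contrast with the near-optimal proposal, the plain bootstrap particle filter, whose prior proposal ignores the current measurement, can be sketched on a toy 1-D model (an illustrative stand-in; the thesis's likelihood is built from soft detection information instead):

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise. The prior is used as proposal, so particles are moved
    blindly and only then reweighted by the likelihood -- the weakness
    that near-optimal proposals address by letting the current
    observation guide the moves."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n_particles)
    means = []
    for yt in y:
        x = x + rng.normal(0.0, sigma_x, n_particles)   # propagate (prior)
        logw = -0.5 * ((yt - x) / sigma_y) ** 2         # reweight (likelihood)
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(w @ x)                             # posterior mean estimate
        x = rng.choice(x, size=n_particles, p=w)        # multinomial resampling
    return np.array(means)
```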
SORGENTE, ANGELA. "BENESSERE FINANZIARIO DEI GIOVANI ADULTI: QUALI METODOLOGIE DI RICERCA E TECNICHE STATISTICHE SONO NECESSARIE?" Doctoral thesis, Università Cattolica del Sacro Cuore, 2018. http://hdl.handle.net/10280/39103.
The general aim of this research work is to enrich the literature on emerging adults' financial well-being with research methodologies and statistical techniques never previously applied in this research field. Specifically, the first chapter of this thesis concerns the scoping methodology, a knowledge synthesis methodology that I adopted to identify the definition, components, predictors and outcomes of emerging adults' financial well-being. The second chapter consists in the application of a new statistical technique, Latent Transition Analysis, that I used to identify subgroups of emerging adults that are homogeneous in their configuration of adult social markers already reached, and to investigate the relation between these subgroups and their financial well-being. The third chapter describes a three-step methodology to develop and validate new measurement instruments, based on the contemporary view of validity proposed in the last fifty years. This three-step procedure was applied here to develop and validate a new instrument measuring subjective financial well-being for an emerging adult target population. Finally, the fourth chapter concerns the multiple informant methodology, which I applied to collect information about family financial socialization and its impact on the child's financial well-being from the mother, the father and the emerging adult child.
Moarii, Matahi. "Apprentissage de données génomiques multiples pour le diagnostic et le pronostic du cancer". Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0086/document.
Several initiatives have been launched recently to investigate the molecular characterisation of large cohorts of human cancers with various high-throughput technologies, in order to understand the major biological alterations related to tumorigenesis. The information measured includes gene expression, mutations and copy-number variations, as well as epigenetic signals such as DNA methylation. Large consortia such as "The Cancer Genome Atlas" (TCGA) have already made data on thousands of cancerous and non-cancerous samples publicly available. In this thesis, we contribute to the statistical analysis of the relationships between the different biological sources, and to the validation and/or large-scale generalisation of biological phenomena, using an integrative analysis of genetic and epigenetic data. Firstly, we show the role of DNA methylation as a surrogate biomarker of clonality between cells, which would provide a powerful clinical tool for devising appropriate treatments for patients with breast cancer relapses. In addition, we developed systematic statistical analyses to assess the significance of DNA methylation variations on gene expression regulation. We highlight the importance of adding prior knowledge to tackle the small number of samples in comparison with the number of variables. In return, we show the potential of bioinformatics to infer new and interesting biological hypotheses. Finally, we tackle the existence of the universal biological phenomenon related to the hypermethylator phenotype. Here, we adapt regression techniques using the similarity between the different prediction tasks to obtain robust genetic predictive signatures that are common to all cancers and allow for better prediction accuracy. In conclusion, we highlight the importance of biological and computational collaboration in order to establish methods appropriate to the current issues in bioinformatics, which will in turn provide new biological insights.
Chevallier, Juliette. "Statistical models and stochastic algorithms for the analysis of longitudinal Riemanian manifold valued data with multiple dynamic". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX059/document.
Texto completoBeyond cross-sectional studies, the temporal evolution of phenomena is a field of growing interest. To understand a phenomenon, it is often more informative to compare the evolution of its markers over time than to examine it at a single stage. The follow-up of neurodegenerative disorders is carried out by monitoring cognitive scores over time, and the same applies to chemotherapy monitoring: rather than the appearance or size of tumors at a given time, oncologists assess that a treatment is effective once it results in a decrease in tumor volume. The study of longitudinal data is not restricted to medical applications and proves successful in various fields such as computer vision, automatic detection of facial emotions, the social sciences, etc. Mixed-effects models have proved their efficiency in the study of longitudinal data sets, especially for medical purposes. Recent work (Schiratti et al., 2015, 2017) allowed the study of complex data, such as anatomical data. The underlying idea is to model the temporal progression of a given phenomenon by continuous trajectories in a space of measurements, assumed to be a Riemannian manifold; both a group-representative trajectory and the inter-individual variability are then estimated. However, these works assume a unidirectional dynamic and fail to encompass situations such as multiple sclerosis or chemotherapy monitoring. Indeed, such diseases follow a chronic course, with phases of worsening, stabilization and improvement, inducing changes in the global dynamic. This thesis is devoted to developing methodological tools and algorithms suited to the analysis of longitudinal data arising from phenomena that undergo multiple dynamics, and to applying them to chemotherapy monitoring. We propose a nonlinear mixed-effects model which allows the estimation of a representative piecewise-geodesic trajectory of the global progression, together with the spatial and temporal inter-individual variability. Particular attention is paid to the estimation of the correlation between the different phases of the evolution. This model provides a generic and coherent framework for studying longitudinal manifold-valued data. Estimation is formulated as a well-defined maximum a posteriori problem, which we prove to be consistent under mild assumptions. Numerically, due to the non-linearity of the proposed model, the parameters are estimated through a stochastic version of the EM algorithm, namely the Markov chain Monte Carlo stochastic approximation EM (MCMC-SAEM). The convergence of the SAEM algorithm toward local maxima of the observed likelihood has been proved and its numerical efficiency demonstrated. However, despite these appealing features, the limit position of this algorithm can strongly depend on its starting position. To cope with this issue, we propose a new version of the SAEM in which we do not sample from the exact distribution in the expectation phase of the procedure. We first prove the convergence of this algorithm toward local maxima of the observed likelihood. Then, in the spirit of simulated annealing, we propose an instantiation of this general procedure to favor convergence toward global maxima: the tempering-SAEM.
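For intuition, here is a toy MCMC-SAEM loop for a scalar nonlinear mixed-effects model. The exponential trajectory, step-size schedule and dimensions are illustrative assumptions, far simpler than the piecewise-geodesic, manifold-valued model of the thesis, and the tempered sampling of the proposed variant is not included.

```python
# Toy MCMC-SAEM for y_ij = exp(-phi_i * t_j) + eps_ij,
# with phi_i ~ N(mu, omega^2) and eps_ij ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(2)
n, t = 50, np.linspace(0.0, 3.0, 8)
phi_true = rng.normal(1.0, 0.3, size=n)
y = np.exp(-phi_true[:, None] * t) + 0.05 * rng.standard_normal((n, t.size))

def log_post(phi, mu, om2, sig2):
    # log p(phi_i | y_i) up to a constant, one value per individual
    res = y - np.exp(-phi[:, None] * t)
    return -0.5 * (res**2).sum(axis=1) / sig2 - 0.5 * (phi - mu) ** 2 / om2

mu, om2, sig2 = 0.5, 1.0, 1.0          # initial parameters
phi = np.full(n, mu)                   # current latent variables
S = np.zeros(3)                        # sufficient statistics
for k in range(1, 501):
    # S-step: one Metropolis-Hastings move per individual
    prop = phi + 0.1 * rng.standard_normal(n)
    ratio = log_post(prop, mu, om2, sig2) - log_post(phi, mu, om2, sig2)
    phi = np.where(np.log(rng.uniform(size=n)) < ratio, prop, phi)
    # SA-step: stochastic approximation with a decreasing step size
    res2 = ((y - np.exp(-phi[:, None] * t)) ** 2).sum()
    gamma = 1.0 if k <= 100 else 1.0 / (k - 100)   # burn-in, then 1/k
    S += gamma * (np.array([phi.sum(), (phi**2).sum(), res2]) - S)
    # M-step: maximize the complete-data likelihood given S
    mu = S[0] / n
    om2 = max(S[1] / n - mu**2, 1e-6)
    sig2 = S[2] / (n * t.size)

print(f"mu={mu:.3f} omega2={om2:.3f} sigma2={sig2:.4f}")
```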
El, Ghaziri Angélina. "Relation entre tableaux de données : exploration et prédiction". Thesis, Nantes, Ecole nationale vétérinaire, 2016. http://www.theses.fr/2016ONIR088F/document.
Texto completoThe research developed in this thesis deals with several statistical aspects of the analysis of multiple datasets. Firstly, the properties of several association indices commonly used by practitioners are investigated. Secondly, different strategies for standardizing datasets are developed, with applications to principal component analysis (PCA) and regression, especially PLS regression. The first strategy consists of a continuum standardization of the variables; the interest of such standardization in PCA and PLS regression is emphasized. A more general standardization is also discussed, which consists in gradually reducing not only the variances of the variables but also their correlations. Thereafter, a continuum approach combining redundancy analysis and PLS regression is developed. Moreover, this new standardization inspired a biased regression model in multiple linear regression; the properties of this approach are studied and its results are compared, on the basis of case studies, with those of ridge regression. In the context of the exploratory analysis of several datasets, the method called ComDim has raised considerable interest among practitioners. An extension of this method to the analysis of K+1 datasets is developed; the properties of this method, called P-ComDim, are studied and compared with those of multiblock PLS. Finally, for the analysis of datasets depending on several factors, a new approach based on PLS regression is proposed.
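As an illustration of the continuum idea, the sketch below divides each variable by its standard deviation raised to a power alpha in [0, 1] before PCA, so alpha = 0 recovers the unstandardized (covariance) analysis and alpha = 1 the fully standardized (correlation) one; the exact formulation used in the thesis may differ.

```python
# Continuum standardization: divide each centered variable by std^alpha,
# then run PCA; intermediate alphas trace a path between covariance
# and correlation PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 6)) * np.array([1, 2, 4, 8, 16, 32.0])

def continuum_standardize(X, alpha):
    Xc = X - X.mean(axis=0)
    return Xc / X.std(axis=0, ddof=1) ** alpha

for alpha in (0.0, 0.5, 1.0):
    pca = PCA(n_components=2).fit(continuum_standardize(X, alpha))
    print(f"alpha={alpha}: explained variance ratio "
          f"{pca.explained_variance_ratio_.round(2)}")
```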
Kwadjane, Jean-Marc. "Apport de la connaissance a priori de la position de l'émetteur sur les algorithmes MIMO adaptatifs en environnement tunnel pour les métros". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10208/document.
Texto completoThis thesis focuses on the design of adaptive algorithms for multiple-input multiple-output (MIMO) wireless communications in subway tunnel environments. MIMO systems meet the requirements of high capacity and robustness; however, their performance degrades due to the spatial correlation in tunnels. In this thesis, we studied MIMO precoding algorithms that use channel state information (CSI) at the transmitter. Generally, these algorithms require feedback from the receiver. To minimize the loss of spectral efficiency due to this reverse link, we selected from the literature precoders that reduce the feedback. We conducted complete and realistic system simulations to evaluate the performance of these precoders, taking into account several levels of quantity and quality of the CSI, using both theoretical and measured channels. We also assessed the impact of impulsive noise measured in the railway environment: assuming a Cauchy law, we propose a receiver and a theoretical upper bound on the error probability of the max-dmin precoder in uncorrelated environments. Finally, we proposed a precoder based on knowledge of the correlation matrix and studied the possibility of removing the return link thanks to knowledge of the channel statistics based on the position in the tunnel.
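To illustrate the Cauchy assumption, here is a minimal sketch of maximum-likelihood detection under impulsive noise for a toy 2x2 MIMO link with QPSK, replacing the squared Euclidean metric by the Cauchy log-likelihood. The simplified noise model and brute-force search are illustrative assumptions, not the thesis's receiver.

```python
# ML detection under Cauchy noise: for y = Hx + n with i.i.d. Cauchy noise
# of scale gamma, the ML metric is sum(log(1 + |y - Hx|^2 / gamma^2))
# instead of the usual squared Euclidean distance.
import itertools
import numpy as np

rng = np.random.default_rng(4)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
cands = np.array(list(itertools.product(qpsk, repeat=2)))   # all symbol pairs

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = cands[rng.integers(len(cands))]
gamma = 0.1
# simplified impulsive noise: independent Cauchy real and imaginary parts
noise = gamma * (rng.standard_cauchy(2) + 1j * rng.standard_cauchy(2))
y = H @ x + noise

def detect(y, H, metric):
    costs = [metric(y - H @ c) for c in cands]
    return cands[int(np.argmin(costs))]

x_cauchy = detect(y, H, lambda r: np.sum(np.log1p(np.abs(r) ** 2 / gamma**2)))
x_gauss = detect(y, H, lambda r: np.sum(np.abs(r) ** 2))
print("true:", x, "\ncauchy-ML:", x_cauchy, "\ngaussian-ML:", x_gauss)
```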
Corrente, Salvatore. "Hierarchy and interaction of criteria in robust ordinal regression". Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1312.
Texto completoClouvel, Laura. "Uncertainty quantification of the fast flux calculation for a PWR vessel". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS414.
Texto completoNeutron irradiation of the vessel, a component that cannot be replaced, is one of the limiting factors for pressurized water reactor (PWR) lifetime. Surveillance programmes are therefore necessary for safety assessment and for verifying the vessel's structural integrity. The quality of radiation damage prediction depends in part on the calculation of the fast neutron flux, so a lack of knowledge of this flux requires larger safety margins on plant lifetime, affecting operating conditions and the cost of nuclear installations. To make sound decisions on plant lifetime and safety margins for PWRs, it is therefore essential to assess the uncertainty in vessel flux calculations. Most past studies of flux uncertainty quantification are based on the method of moments, which assumes a linear output variation; this method was most commonly used because limited computing capabilities ruled out more accurate approaches. In a non-linear case, the first-order hypothesis is insufficient for an accurate prediction of the output variance. An alternative is the Total Monte Carlo (TMC) approach, which consists in randomly sampling the input data and propagating the perturbations through the calculation chain. Its advantage is that it makes no assumption of linearity or of small input changes: it considers the probability distributions of the input parameters and thus provides a more precise description of input uncertainties. It is within this context that this thesis was conducted. It performs a new uncertainty assessment of the fast flux calculation for the PWR vessel using data from recent international nuclear data libraries. The special feature of this work lies in the large number of uncertain parameters, which are closely correlated with each other. The uncertainty on the fast flux, considering all the uncertain parameters, is finally estimated for the vessel hot spot. More generally, in this context of sensitivity analysis, we show the importance of taking covariance matrices into account when propagating input uncertainties, and of analyzing the contribution of each input to a physical model; the Shapley and Johnson indices are used in particular in a multicollinearity context between the inputs and the output.
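A minimal sketch of the TMC idea, under toy assumptions (a three-input stand-in model and an invented covariance matrix): sample the inputs jointly with their full covariance, push each draw through the model, and compare the resulting output spread with a propagation that ignores the correlations.

```python
# Total Monte Carlo-style propagation of correlated input uncertainties.
import numpy as np

rng = np.random.default_rng(5)
mean = np.array([1.0, 2.0, 0.5])
cov = np.array([[0.04, 0.03, 0.00],
                [0.03, 0.09, 0.02],
                [0.00, 0.02, 0.01]])     # correlated input uncertainties

def model(x):
    # stand-in for the flux calculation chain (nonlinear on purpose)
    return x[..., 0] * np.exp(-x[..., 1]) + x[..., 2] ** 2

draws = rng.multivariate_normal(mean, cov, size=100_000)
draws_diag = rng.multivariate_normal(mean, np.diag(np.diag(cov)), size=100_000)
print(f"output std, full covariance:      {model(draws).std():.4f}")
print(f"output std, correlations ignored: {model(draws_diag).std():.4f}")
```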
Moypemna Sembona, Cyrille Clovis. "Caractérisations des modèles multivariés de stables-Tweedie multiples". Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2071/document.
Texto completoIn the framework of natural exponential families, this thesis proposes different characterizations of multivariate multiple stable-Tweedie models under the steepness property. These models, which appeared in the literature in 2014, were first introduced and described in the restricted form of normal stable-Tweedie models before being extended to the multiple case. They are composed of a fixed univariate stable-Tweedie variable with positive domain, while the remaining variables, conditionally on the fixed one, are real independent stable-Tweedie variables, possibly of different types, with dispersion parameter equal to the value of the fixed component. The corresponding normal stable-Tweedie models have a fixed univariate stable-Tweedie component, all the others being real Gaussian variables. Through special cases such as normal, Poisson, gamma and inverse Gaussian, multiple stable-Tweedie models are very common in applied probability and statistics. We first characterize normal stable-Tweedie models through their variance functions, i.e., covariance matrices expressed in terms of the mean vector. According to the values of the power variance parameter, the nature of the polynomials associated with these models is deduced, together with their quasi-orthogonality, Lévy-Sheffer system properties, and polynomial recurrence relations. These results then allow us to characterize, by variance function, the broader class of multiple stable-Tweedie models, leading to a new classification that makes the family easier to understand. Finally, an extended characterization of normal stable-Tweedie models by generalized variance function (the determinant of the variance function) is established via their infinite divisibility and the corresponding Monge-Ampère equations. Expressed as a product of the components of the mean vector raised to real powers, the characterization of all multivariate multiple stable-Tweedie models by generalized variance function remains an open problem.
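For the univariate building blocks, the power variance function V(mu) = phi mu^p pins down the familiar special cases (p = 0 normal, p = 1 Poisson, p = 2 gamma, p = 3 inverse Gaussian); the Monte Carlo check below is a generic illustration of this fact, not code from the thesis.

```python
# Empirical check of the Tweedie power variance function Var(Y) = phi * mu^p,
# with phi = 1, for four classical special cases.
import numpy as np

rng = np.random.default_rng(6)
mu, n = 2.0, 1_000_000
cases = {
    0: rng.normal(mu, 1.0, n),            # normal: Var = mu^0 = 1
    1: rng.poisson(mu, n).astype(float),  # Poisson: Var = mu^1
    2: rng.gamma(1.0, mu, n),             # gamma (shape=1): Var = mu^2
    3: rng.wald(mu, 1.0, n),              # inverse Gaussian (lambda=1): Var = mu^3
}
for p, y in cases.items():
    print(f"p={p}: empirical var {y.var():.3f} vs mu^p = {mu**p:.3f}")
```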
Guillet, Julien. "Caractérisation et modélisation spatio-temporelles du canal de propagation radioélectrique dans le contexte MIMO". Phd thesis, INSA de Rennes, 2004. http://tel.archives-ouvertes.fr/tel-00008011.
Texto completoKurisummoottil, Thomas Christo. "Sparse Bayesian learning, beamforming techniques and asymptotic analysis for massive MIMO". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS231.
Texto completoMultiple antennas at the base station can be used to enhance the spectral efficiency and energy efficiency of next-generation wireless technologies. Indeed, massive multiple-input multiple-output (MIMO) is seen as a promising technology to bring these benefits to the fifth-generation wireless standard, commonly known as 5G New Radio (5G NR). In this monograph, we explore a range of topics in multi-user MIMO (MU-MIMO) relevant to 5G NR: sum-rate-maximizing beamforming (BF) design and its robustness to partial channel state information at the transmitter (CSIT); asymptotic analysis of the various BF techniques in massive MIMO; and Bayesian channel estimation methods using sparse Bayesian learning. One of the techniques proposed in the literature to circumvent the hardware complexity and power consumption of massive MIMO is hybrid beamforming; we propose a globally optimal analog phasor design using deterministic annealing, which won a best student paper award. Further, to analyze the large-system behaviour of massive MIMO, we used techniques from random matrix theory and obtained simplified sum-rate expressions. Finally, we addressed the Bayesian sparse signal recovery problem using sparse Bayesian learning (SBL). We proposed low-complexity SBL algorithms using a combination of approximate inference techniques such as belief propagation (BP), expectation propagation and mean-field (MF) variational Bayes, together with an optimal partitioning of the parameters in the factor graph into either MF or BP nodes based on a Fisher information matrix analysis.
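As background for the SBL part, here is a minimal sketch of classic sparse Bayesian learning with the standard evidence-maximization fixed-point updates (in the style of Tipping, 2001); the thesis's low-complexity BP/EP/MF variants are not reproduced, and the dimensions and pruning threshold below are illustrative assumptions.

```python
# Classic SBL for y = Phi w + n, with per-coefficient Gaussian priors
# w_i ~ N(0, 1/alpha_i): large alpha_i prunes coefficient w_i.
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 60, 100, 5
Phi = rng.standard_normal((n, m))
w_true = np.zeros(m)
w_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ w_true + 0.01 * rng.standard_normal(n)

alpha = np.ones(m)        # prior precisions
beta = 100.0              # noise precision
for _ in range(200):
    # posterior of w given current hyperparameters
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y
    # fixed-point hyperparameter updates
    g = 1.0 - alpha * np.diag(Sigma)
    alpha = np.minimum(g / (mu**2 + 1e-12), 1e10)   # cap for stability
    beta = (n - g.sum()) / (np.sum((y - Phi @ mu) ** 2) + 1e-12)

support = np.flatnonzero(alpha < 1e4)
print("recovered support:", np.sort(support))
print("true support:     ", np.sort(np.flatnonzero(w_true)))
```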