Dissertations / Theses on the topic 'Détection par maximum de vraisemblance'
Consult the top 50 dissertations / theses for your research on the topic 'Détection par maximum de vraisemblance.'
Ezzahar, Abdessamad. "Estimation et détection d'un signal contaminé par un bruit autorégressif." Phd thesis, Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00339831.
Zhu, Xuan. "Structuration automatique en locuteurs par approche acoustique." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00624061.
Kazem, Ali. "Particules déterministes généralisées en filtrage non-linéaire : applications défense et télécommunications." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/1638/.
Particle filters are presently among the most powerful tools to estimate Markovian dynamical systems, regardless of the nature of nonlinearities and/or noise probability distributions. The purpose of this dissertation is to show the generality of deterministic particle filtering, as opposed to the earlier random version, which avoids randomization in the prediction stage as well as in the resampling stage after Bayesian correction. This work relies on two kinds of results: the first concerns the particle filter-based maximum likelihood estimator for sequential estimation of the state variables. The second result, introducing deterministic particle filtering in the minimum variance sense, focuses on the current state marginal estimation using a resampling scheme consistent with the a posteriori distribution. This approach simultaneously delivers all modes (local maxima) of the marginal probability density function of the current state. The thesis focuses on several achievements in various fields. Communications: the proposed particle algorithm makes possible the joint estimation of the kinematic channel parameters at the receiver side and the detection of the message transmitted by a satellite. We have also proposed several techniques for the iterative estimation and decoding of the turbo-coded message compliant with the DVB-RCS standard. Target estimation for sonar: we built a passive particle receiver only listening to its target, in order to identify its kinematic parameters. The deterministic version significantly reduces the computational complexity. Radar signal processing: the first receiver, with deterministic maximum likelihood filtering, is used for the detection/tracking of steady and manoeuvring targets, when there is a very limited number of available measurements during one scan period of the radar antenna. The second receiver applies the minimum variance technique to the ARMOR radar, confirming unusually high signal-to-noise gains. The novel deterministic technique based on the minimum variance criterion can easily be extended to multitarget processing and tracking in the presence of clutter, with the incomparable complexity savings due to the deterministic technique.
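The filters described in this abstract are deterministic variants of the standard randomized particle filter. As a point of reference only, here is a minimal sketch of that randomized baseline on a classical scalar benchmark model; the model and all numerical values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, n_particles=500):
    """Bootstrap particle filter for the benchmark nonlinear model
    x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}**2) + v_t,  v_t ~ N(0, 1),
    y_t = x_t**2/20 + w_t,                                  w_t ~ N(0, 1)."""
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    estimates = []
    for obs in y:
        # Prediction: propagate each particle through the state dynamics.
        x = 0.5*x + 25*x/(1 + x**2) + rng.normal(0.0, 1.0, n_particles)
        # Bayesian correction: weight particles by the measurement likelihood.
        w = np.exp(-0.5*(obs - x**2/20)**2)
        w /= w.sum()
        estimates.append(np.sum(w*x))              # posterior-mean estimate
        # Resampling: the randomized stage that deterministic schemes replace.
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

The deterministic filters of the thesis remove the random draws from the prediction and resampling stages; the version above is only the common baseline against which they are compared.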
Keribin, Christine. "Tests de modèles par maximum de vraisemblance." Evry-Val d'Essonne, 1999. http://www.theses.fr/1999EVRY0006.
Safa, Khodor. "Embracing Nonlinearities in Future Wireless Communication Systems : Low-Resolution Analog-to-Digital Converters." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG082.
With the advent of 5G networks and the road towards 6G already being established, innovative wireless communication technologies including massive multiple-input multiple-output (mMIMO) and millimeter-wave systems are emerging to address the ever-increasing number of mobile users and to support new industry applications. However, higher frequencies and wider bandwidths lead to increased power consumption in radio-frequency (RF) circuits, necessitating more energy-efficient components. At the same time, systems are increasingly susceptible to nonlinear impairments such as phase noise, saturation, and quantization distortions. Understanding the impact of these nonlinearities on transceiver design and fundamental limits becomes essential. This thesis focuses on the nonlinear effects of low-resolution analog-to-digital converters (ADCs) at the receiver. ADC power consumption increases with both bandwidth and resolution, making low-resolution ADCs a practical solution in systems like mMIMO, where power consumption is a major constraint. The first part of this work examines data detection in quantized flat-fading MIMO channels, with various assumptions about the channel state information (CSI). Maximum likelihood (ML) detection is optimal for minimizing the error probability but is computationally expensive. While sphere-decoding (SD) algorithms are commonly used to reduce complexity in unquantized channels, they are not directly applicable to the quantized case due to the discrete nature of observations. To address this, we propose a two-step low-complexity detection algorithm for systems with one-bit comparators. This method approximates the ML metric using a Taylor series and converts the detection problem into a classical integer least-squares optimization, allowing the use of SD algorithms. Numerical results show this approach achieves near-ML performance. This method is also extended to multi-bit scenarios, converging to conventional SD as resolution increases. In more practical scenarios, where only statistical CSI at the receiver (CSIR) is available, we explore data detection under a pilot transmission scheme. The first approach frames channel estimation and detection as binary classification tasks using probit regression. Achievable mismatched rates using the generalized mutual information are then evaluated in comparison to the Bussgang linear minimum mean-square error estimators, where it is shown that the performance depends on the choice of the estimator. The second approach jointly processes data and pilot sequences, where we encounter challenges related to evaluating multivariate Gaussian probabilities and the combinatorial complexity of optimization. We address the first challenge with the Laplace method, which provides a closed-form expression, while for the complexity we adapt the SD technique from the perfect CSI case to a surrogate metric. It is of interest for future wireless systems to obtain design guidelines that can accurately explain the trade-off between spectral efficiency and energy consumption. We then investigate the capacity of quantized MIMO channels, which is difficult to characterize due to their discrete nature.
Therefore, assuming an asymptotic regime where the number of receive antennas grows large, and employing known information-theoretic results from Bayesian statistics on reference priors, the capacity scaling can be characterized for the coherent multi-bit case, providing an expression that can be used for the analysis of the spectral efficiency and power consumption balance. For the noncoherent case, we apply the same results to the unquantized channel as an upper bound for the quantized channel, and identify upper and lower bounds on the scaling for particular values of the coherence interval.
Berrim, Selma. "Tomoscintigraphie par ouverture codée, reconstruction par le maximum de vraisemblance." Paris 13, 1998. http://www.theses.fr/1998PA132033.
Zakaria, Rostom. "Transmitter and receiver design for inherent interference cancellation in MIMO filter-bank based multicarrier systems." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2012. http://tel.archives-ouvertes.fr/tel-00923184.
Kuhn, Estelle. "Estimation par maximum de vraisemblance dans des problèmes inverses non linéaires." Phd thesis, Université Paris Sud - Paris XI, 2003. http://tel.archives-ouvertes.fr/tel-00008316.
Khelifi, Bruno. "Recherche de sources gamma par une méthode de Maximum de Vraisemblance." Phd thesis, Université de Caen, 2002. http://tel.archives-ouvertes.fr/tel-00002393.
Full textGrâce à ces techniques, nous avons détecté de faibles et rapides variations de flux de Mkn 421, découvert deux nouveaux blazars IES 1959+65 et IES 1426+42.8 qui est de faible luminosité et nous avons identifié deux blazars susceptibles d'émettre au TeV. La comparaison des spectres en énergie des blazars de même redshift (Mkn 421 et Mkn 501) permet de nous affranchir de l'absorption des gamma par l'infrarouge intergalactique (IIR) : Mkn 421 semble posséder un spectre avant absorption distinct d'une loi de puissance sur au moins une nuit. La dérivation d'informations plus précises sur les blazars dépendra des futures connaissances sur l'IIR et des observations simultanées multi-longueurs d'onde.
Ayant observé des restes de supernovae contenant des plérions (IC 443, CTA 1 et CTB 80), nous avons cherché en vain une émission provenant des plérions et de l'interaction de ces restes avec des nuages moléculaires grâce au maximum de vraisemblance. Les valeurs supérieures extraites sur les plérions ont été comparées avec des modèles d'émission électromagnétique d'un spectre d'électrons accélérés. Ces comparaisons nous ont amenées à nous interroger sur les hypothèses faites dans ces modèles et sur la pertinence des plérions choisis.
Kuhn, Estelle. "Estimation par maximum de vraisemblance dans des problèmes inverses non linéaires." Paris 11, 2003. https://tel.archives-ouvertes.fr/tel-00008316.
This thesis deals with maximum likelihood estimation in inverse problems. In the first three chapters, we consider statistical models involving missing data in a parametric framework. Chapter 1 presents a version of the EM algorithm (Expectation Maximization) which combines a stochastic approximation with a Markov chain Monte Carlo method: the missing data are drawn from a well-chosen transition probability. The almost sure convergence of the sequence generated by the algorithm to a local maximum of the likelihood of the observations is proved. Some applications to deconvolution and change-point detection are presented. Chapter 2 deals with the application of the algorithm to nonlinear mixed effects models. Besides the estimation of the parameters, we estimate the likelihood of the model and the Fisher information matrix. We assess the performance of the algorithm, comparing the results obtained with other methods, on examples coming from pharmacokinetics and pharmacodynamics. Chapter 3 presents an application to geophysics. We perform a joint inversion between teleseismic times and velocity and between gravimetric data and density. Our point of view is innovative because we estimate the parameters of the model, which were generally fixed arbitrarily. Moreover, we take into account a linear relation between slowness and density. Chapter 4 deals with nonparametric density estimation in missing data problems. We propose a logspline estimator of the density of the non-observed data, which maximizes the observed likelihood in a logspline model. We apply our algorithm in this parametric model. We study the convergence of this estimator to the density of the non-observed data, when the size of the logspline model and the number of observations tend to infinity. Some applications illustrate this method.
Gilbert, Helène. "Multidimensionnalité pour la détection de gènes influençant des caractères quantitatifs : Application à l'espèce porcine." Paris, Institut national d'agronomie de Paris Grignon, 2003. http://www.theses.fr/2003INAP0007.
Saidi, Yacine. "Méthodes appliquées de détection et d'estimation de rupture dans les modèles de régression." Université Joseph Fourier (Grenoble ; 1971-2015), 1986. http://tel.archives-ouvertes.fr/tel-00319930.
Abdi, Moussa. "Détection multi-utilisateurs en mode CDMA." Paris, ENST, 2002. http://www.theses.fr/2002ENST0027.
Full textDiop, Cheikh Abdoulahat. "La structure multimodale de la distribution de probabilité de la réflectivité radar des précipitations." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/3089/.
A set of radar data gathered over various sites of the US Nexrad (Next Generation Weather Radar) S-band radar network is used to analyse the probability distribution function (pdf) of the radar reflectivity factor (Z) of precipitation, P(Z). Various storm types are studied and a comparison between them is made: 1) hailstorms at the continental site of Little Rock (Arkansas), 2) peninsular and coastal convection at Miami (Florida), 3) coastal convection and land/sea transition at Brownsville (Texas), 4) tropical maritime convection at Hawaii, 5) midlatitude maritime convection at Eureka (California), 6) snowstorms from winter frontal continental systems at New York City (New York), and 7) high-latitude maritime snowstorms at Middleton Island (Alaska). Each storm type has a specific P(Z) signature with a complex shape. It is shown that P(Z) is a mixture of Gaussian components, each of them attributable to a precipitation type. Using the EM (Expectation Maximisation) algorithm of Dempster et al. (1977), based on the maximum likelihood method, four main components are categorized in hailstorms: 1) cloud and precipitation of very low intensity or drizzle, 2) stratiform precipitation, 3) convective precipitation, and 4) hail. Each component is described by the fraction of area occupied inside P(Z) and by the two Gaussian parameters, mean and variance. The absence of a hail component in maritime and coastal storms is highlighted. For snowstorms, P(Z) has a more regular shape. The presence of several components in P(Z) is linked to differences in the dynamics and microphysics of each precipitation type. The retrieval of the mixed distribution by a linear combination of the Gaussian components gives a very satisfactory P(Z) fit. An application of the results of the split-up of P(Z) is then presented. Cloud, rain, and hail components have been isolated and each corresponding P(Z) is converted into a probability distribution of rain rate P(R), whose parameters are μR and σR², respectively the mean and the variance. It is shown in the (μR, σR²) plane that each precipitation type occupies a specific area. This suggests that the identified components are distinct. For example, the location of the points representing snowstorms indicates that snow is statistically different from rain. The coefficient of variation of P(R), CVR = σR/μR, is constant for each precipitation type. This result implies that knowing CVR and measuring only one of the P(R) parameters makes it possible to determine the other and to define the rain-rate probability distribution. The influence of the coefficients a and b of the relation Z = aR^b on P(R) is also discussed.
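The decomposition described above rests on the standard EM fit of a one-dimensional Gaussian mixture. As a point of reference, a minimal sketch of that algorithm follows; the component number k = 4 mirrors the hailstorm case, but the input data are generic samples, not the Nexrad reflectivities themselves.

```python
import numpy as np

def em_gaussian_mixture(z, k=4, n_iter=100, seed=0):
    """Fit a k-component 1-D Gaussian mixture to samples z by EM
    (Dempster et al. 1977); returns fractions, means, variances."""
    rng = np.random.default_rng(seed)
    pi = np.full(k, 1.0/k)                      # component area fractions
    mu = rng.choice(z, k)                       # initial means
    var = np.full(k, np.var(z))                 # initial variances
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each sample.
        r = pi*np.exp(-0.5*(z[:, None] - mu)**2/var)/np.sqrt(2*np.pi*var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum likelihood update of the mixture parameters.
        nk = r.sum(axis=0)
        pi = nk/len(z)
        mu = (r*z[:, None]).sum(axis=0)/nk
        var = (r*(z[:, None] - mu)**2).sum(axis=0)/nk
    return pi, mu, var
```

Each returned triple (fraction, mean, variance) plays the role of one precipitation component of P(Z) in the analysis above.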
Al-Khalidi, Khaldoun. "Reconstruction tomographique en géométrie conique par la technique du maximum de vraisemblance : optimisation et parallélisation." Besançon, 1996. http://www.theses.fr/1996BESA2009.
Gilbert, Hélène. "Multidimensionnalité pour la détection de gènes influençant des caractères quantitatifs. Application à l'espèce porcine." Phd thesis, INAPG (AgroParisTech), 2003. http://tel.archives-ouvertes.fr/tel-00005699.
Full textLes méthodologies ont été dans un premier temps caractérisées pour leurs puissances et leurs précisions d'estimation des paramètres (positions et effets des QTL) à partir de données simulées. Nous avons développé d'une part des méthodes multivariées, extrapolées de techniques décrites pour l'analyse de données issues de croisements entre populations supposées génétiquement fixées, et d'autre part des méthodes synthétiques univariées, développées à l'occasion de ce travail. Ces dernières méthodes permettent de synthétiser l'information due à la présence du (des) QTL déterminant plusieurs caractères dans une unique variable, combinaison linéaire des caractères. Le nombre de paramètres à estimer est ainsi indépendant du nombre de caractères étudiés, permettant de réduire fortement les temps de calcul par rapport aux méthodes multivariées. La stratégie retenue repose sur des techniques d'analyse discriminante. Pour chaque vecteur de positions testé, des groupes de descendants sont créés en fonction de la probabilité que les individus aient reçu l'un ou l'autre haplotype de leur père. Les matrices de (co)variance génétique et résiduelle spécifiques de la présence du (des) QTL peuvent alors être estimées. La transformation linéaire permet de maximiser le rapport de ces deux variabilités.
Les méthodes basées sur l'analyse de variables synthétiques permettent en général d'obtenir des résultats équivalents, voire meilleurs, que les stratégies multivariées. Seule l'estimation des effets des QTL et de la corrélation résiduelle entre les caractères reste inaccessible par ces méthodes. Une stratégie itérative basée sur l'analyse de variables synthétiques pour la sélection des caractères et des régions chromosomiques à analyser par les méthodes multivariées est proposée. Par ailleurs, nous avons quantité les apports des méthodologies multidimensionnelles pour la cartographie des QTL par rapport aux méthodes unidimensionnelles. Dans la majorité des cas, la puissance et la précision d'estimation des paramètres sont nettement améliorées. De plus, nous avons pu montrer qu'un QTL pléiotrope peut être discriminé de deux QTL liés, s'ils sont relativement distants.
Ces méthodologies ont été appliquées à la détection de QTL déterminant cinq caractères de composition corporelle chez le porc sur le chromosome 7. Deux groupes de QTL déterminant des types de gras différents, le gras interne et le gras externe, ont ainsi été discriminés. Pour chacun de ces groupes, les analyses multiQTL ont permis d'identifier au moins deux régions chromosomiques distinctes déterminant les caractères.
Salloum, Zahraa. "Maximum de vraisemblance empirique pour la détection de changements dans un modèle avec un nombre faible ou très grand de variables." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1008/document.
In this PhD thesis, we propose a nonparametric method based on the empirical likelihood for detecting a change in the parameters of nonlinear regression models and a change in the coefficients of linear regression models, when the number of model variables may increase as the sample size increases. Firstly, we test the null hypothesis of no change against the alternative of one change in the regression parameters. Under the null hypothesis, the consistency and the convergence rate of the regression parameter estimators are proved. The asymptotic distribution of the test statistic under the null hypothesis is obtained, which allows finding the asymptotic critical value. On the other hand, we prove that the proposed test statistic has asymptotic power equal to 1. The epidemic model, a particular case of a model with two change-points, is also studied under the alternative hypothesis. Afterwards, we use the empirical likelihood method for constructing confidence regions for the difference between the parameters of a two-phase nonlinear model with random design. We show that the empirical likelihood ratio has an asymptotic χ2 distribution. The empirical likelihood method is also used to construct confidence regions for the difference between the parameters of a two-phase nonlinear model with response variables missing at random (MAR). In order to construct the confidence regions of the parameter in question, we propose three empirical likelihood statistics: empirical likelihood based on complete-case data, weighted empirical likelihood, and empirical likelihood with imputed values. We prove that all three empirical likelihood ratios have asymptotically χ2 distributions. Another aim of this thesis is to test a change in the coefficients of linear regression models in high dimension. This amounts to testing the null hypothesis of no change against the alternative of one change in the regression coefficients. Based on the theoretical asymptotic behaviour of the empirical likelihood ratio statistic, we propose, for a deterministic design, a simpler test statistic, easier to use in practice. The asymptotic normality of the proposed test statistic under the null hypothesis is proved, a result which differs from the χ2 law for a model with a fixed number of variables. Under the alternative hypothesis, the test statistic diverges.
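The building block of these tests is the empirical likelihood ratio. A minimal sketch of the simplest instance, testing the mean of one sample, may make the construction concrete; the change-point statistics of the thesis are built from such ratios, but the code below is only the textbook one-sample case.

```python
import numpy as np
from scipy.optimize import brentq

def el_ratio_statistic(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0;
    asymptotically chi-squared with 1 degree of freedom."""
    d = x - mu0
    if d.min() >= 0 or d.max() <= 0:   # mu0 outside the convex hull of x
        return np.inf
    # The Lagrange multiplier solves sum(d/(1 + lam*d)) = 0 on this interval.
    lo, hi = -1.0/d.max(), -1.0/d.min()
    eps = 1e-10*(hi - lo)
    lam = brentq(lambda l: np.sum(d/(1.0 + l*d)), lo + eps, hi - eps)
    return 2.0*np.sum(np.log1p(lam*d))

rng = np.random.default_rng(0)
print(el_ratio_statistic(rng.normal(size=200), 0.0))  # small under H0
```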
Ranwez, Vincent. "Méthodes efficaces pour reconstruire de grandes phylogénies suivant le principe du maximum de vraisemblance." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2002. http://tel.archives-ouvertes.fr/tel-00843175.
Ferràs Font, Marc. "Utilisation des coefficients de régression linéaire par maximum de vraisemblance comme paramètres pour la reconnaissance automatique du locuteur." Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00616673.
Lepoutre, Alexandre. "Détection et poursuite en contexte Track-Before-Detect par filtrage particulaire." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S101/document.
This thesis deals with the study and the development of mono- and multitarget tracking methods in a Track-Before-Detect (TBD) context with particle filters. Contrary to the classic approach, which performs a pre-detection and extraction step before the tracking stage, the TBD approach works directly on raw data in order to jointly perform detection and tracking. Several solutions to this problem exist; however, this thesis is restricted to the particular hidden Markov models, considered in the Bayesian framework, for which the TBD problem can be solved using particle filter approximations. Initially, we consider existing monotarget particle solutions and we propose several instrumental densities that improve the performance both in detection and in estimation. Then, we propose an alternative approach to the monotarget TBD problem based on the target appearance and disappearance times. This new approach, in particular, allows gains in terms of computational resources. Secondly, we investigate the calculation of the measurement likelihood in a TBD context -- necessary for the derivation of the particle filters -- which is difficult due to the presence of the target amplitude parameters that are unknown and fluctuate over time. In particular, we extend the work of Rutten et al. on the likelihood calculation to several Swerling models and to the multitarget case. Lastly, we consider the multitarget TBD problem. By taking advantage of the specific structure of the likelihood when targets are far apart from each other, we show that it is possible to develop a particle solution that uses only one particle filter per target. Moreover, we develop a whole multitarget TBD solution able to manage target appearances and disappearances as well as crossings between targets.
Khélifi, Bruno. "Recherche de sources gamma par une méthode de maximum de vraisemblance : application aux AGN et aux sources galactiques suivis par le télescope CAT." Caen, 2002. http://www.theses.fr/2002CAEN2051.
Kengne, William Charky. "Détection des ruptures dans les processus causaux : application aux débits du bassin versant de la Sanaga au Cameroun." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2012. http://tel.archives-ouvertes.fr/tel-00695364.
Lejeune, Bernard. "Modèles à erreurs composées et hétérogénéité variable : modélisation, estimation par pseudo-maximum de vraisemblance au deuxième ordre et tests de spécification." Paris 12, 1998. http://www.theses.fr/1998PA122007.
This thesis considers some econometric issues associated with modeling the heterogeneity of individual behaviors when working with microeconomic panel data. Its purpose is twofold. First, to propose and discuss an extension of the standard error components model which allows one to take into account phenomena of variable heterogeneity. Second, to provide, for its estimation and specification testing, a set of integrated inferential procedures taking explicitly into account a possible misspecification of the assumed form of heterogeneity. The thesis is composed of four chapters. In a very general framework which includes error components models as a special case, chapter 1 establishes the conditions under which second-order pseudo-maximum likelihood estimators of a second-order semi-parametric model are robust to conditional variance misspecification, and investigates their asymptotic properties. Remaining in the same general framework as chapter 1, chapter 2 describes how, from such a robust estimator, to take advantage of the 'm-testing' approach to extensively test, with or without an explicit alternative, the specification of second-order semi-parametric models. Chapter 3 presents and discusses the proposed extension of the standard error components model and, using the general results obtained in the first two chapters, shows how to estimate and test this model in a robust way. Finally, chapter 4 illustrates the practical usefulness of the provided results. This illustration consists of production function estimation and specification testing, and is based on an incomplete panel dataset of 824 French firms observed over the period 1979-1988 (5201 observations).
Pascal, Frédéric. "Détection et Estimation en Environnement non Gaussien." Phd thesis, Université de Nanterre - Paris X, 2006. http://tel.archives-ouvertes.fr/tel-00128438.
It also describes the performance and the theoretical properties (SIRV-CFAR) of the GLRT-LQ detector built with these new estimators. In particular, the detector is shown to be invariant to the texture distribution and also to the covariance matrix governing the spectral properties of the clutter. These new detectors are then analyzed on simulated data and also tested on real ground-clutter data.
Grégoire, Marie-Claude. "Reconstruction d'images cardiaques dynamiques synchronisées à l'ECG par maximum de vraisemblance sur les signaux obtenus à faible taux de comptage avec une gamma-caméra." Compiègne, 1989. http://www.theses.fr/1989COMPD190.
To achieve a global smoothing, a processing technique for dynamic cardiac studies has been suggested, allowing the simultaneous integration of all the spatial and temporal information. This technique uses Bayesian estimation, introducing a Gibbs prior. A comparative study was done on low-count simulations, using this method and other classical methods. It has shown the advantages of using all the available temporal information, and has increased the quality of current clinical results.
Lemenuel-Diot, Annabelle. "Spécification des paramètres entrant dans les modèles de mélanges non linéaires à effets mixtes par approximation de la vraisemblance : application à la détection et à l'explication d'hétérogénéités dans le domaine de la Pharmacocinétique/Pharmacodynamie." Paris 6, 2005. http://www.theses.fr/2005PA066152.
Filstroff, Louis. "Contributions to probabilistic non-negative matrix factorization - Maximum marginal likelihood estimation and Markovian temporal models." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0143.
Non-negative matrix factorization (NMF) has become a popular dimensionality reduction technique, and has found applications in many different fields, such as audio signal processing, hyperspectral imaging, or recommender systems. In its simplest form, NMF aims at finding an approximation of a non-negative data matrix (i.e., with non-negative entries) as the product of two non-negative matrices, called the factors. One of these two matrices can be interpreted as a dictionary of characteristic patterns of the data, and the other one as activation coefficients of these patterns. This low-rank approximation is traditionally retrieved by optimizing a measure of fit between the data matrix and its approximation. As it turns out, for many choices of measures of fit, the problem can be shown to be equivalent to the joint maximum likelihood estimation of the factors under a certain statistical model describing the data. This leads us to an alternative paradigm for NMF, where the learning task revolves around probabilistic models whose observation density is parametrized by the product of non-negative factors. This general framework, coined probabilistic NMF, encompasses many well-known latent variable models of the literature, such as models for count data. In this thesis, we consider specific probabilistic NMF models in which a prior distribution is assumed on the activation coefficients, but the dictionary remains a deterministic variable. The objective is then to maximize the marginal likelihood in these semi-Bayesian NMF models, i.e., the integrated joint likelihood over the activation coefficients. This amounts to learning the dictionary only; the activation coefficients may be inferred in a second step if necessary. We proceed to study in greater depth the properties of this estimation process. In particular, two scenarios are considered. In the first one, we assume the independence of the activation coefficients sample-wise. Previous experimental work showed that dictionaries learned with this approach exhibited a tendency to automatically regularize the number of components, a favorable property which was left unexplained. In the second one, we lift this standard assumption, and consider instead Markov structures to add statistical correlation to the model, in order to better analyze temporal data.
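For background, the classical measure-of-fit formulation mentioned at the start of this abstract is typically solved with multiplicative updates. The sketch below shows the KL-divergence case, which is exactly the joint maximum likelihood estimation of the factors under a Poisson observation model; it is the baseline the thesis departs from, not the semi-Bayesian marginal-likelihood estimator it studies.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, seed=0):
    """Classical NMF with multiplicative updates for the KL-divergence
    cost: find non-negative W (dictionary) and H (activations) with
    V ~ W @ H; equivalent to joint ML under V_fn ~ Poisson([W @ H]_fn)."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + 1e-3
    H = rng.random((rank, N)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ (V/(W @ H)))/W.sum(axis=0)[:, None]
        W *= ((V/(W @ H)) @ H.T)/H.sum(axis=1)[None, :]
    return W, H
```

Maximizing the marginal likelihood instead, as in the thesis, integrates H out under its prior before optimizing the dictionary W.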
Sim, Tepmony. "Estimation du maximum de vraisemblance dans les modèles de Markov partiellement observés avec des applications aux séries temporelles de comptage." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0020/document.
Maximum likelihood estimation is a widespread method for identifying a parametrized model of a time series from a sample of observations. Under the framework of well-specified models, it is of prime interest to obtain consistency of the estimator, that is, its convergence to the true parameter as the sample size of the observations goes to infinity. For many time series models, for instance hidden Markov models (HMMs), such a "strong" consistency property can however be difficult to establish. Alternatively, one can show that the maximum likelihood estimator (MLE) is consistent in a weakened sense, that is, as the sample size goes to infinity, the MLE eventually converges to a set of parameters, all of which are associated with the same probability distribution of the observations as the true one. The consistency in this sense, which remains a preferred property in many time series applications, is referred to as equivalence-class consistency. The task of deriving such a property generally involves two important steps: 1) show that the MLE converges to the maximizing set of the asymptotic normalized log-likelihood; and 2) show that any parameter in this maximizing set yields the same distribution of the observation process as the true parameter. In this thesis, our primary attention is to establish the equivalence-class consistency for time series models that belong to the class of partially observed Markov models (PMMs), such as HMMs and observation-driven models (ODMs).
Obara, Tiphaine. "Modélisation de l’hétérogénéité tumorale par processus de branchement : cas du glioblastome." Electronic Thesis or Diss., Université de Lorraine, 2016. http://www.theses.fr/2016LORR0186.
The latest advances in cancer research are paving the way to better treatments. However, some tumors such as glioblastomas remain among the most aggressive and difficult to treat. The cause of this resistance could be a sub-population of cells with characteristics common to stem cells. Many mathematical and numerical models of tumor growth already exist, but few take into account tumor heterogeneity; it is now a real challenge. This thesis focuses on the dynamics of the different cell subpopulations in glioblastoma. It involves the development of a mathematical model of tumor growth based on a multitype, age-dependent branching process. This model makes it possible to integrate cellular heterogeneity. Numerical simulations reproduce the evolution of the different types of cells and simulate the action of several therapeutic strategies. A parameter estimation method based on the pseudo-maximum likelihood has been developed. This approach is an alternative to maximum likelihood in the case where the sample distribution is unknown. Finally, we present the biological experiments that have been carried out in order to validate the numerical model.
Jay, Sylvain. "Estimation et détection en imagerie hyperspectrale : application aux environnements côtiers." Phd thesis, Ecole centrale de Marseille, 2012. https://theses.hal.science/docs/00/78/99/45/PDF/These_jay.pdf.
This thesis deals with estimation and supervised detection issues in hyperspectral imagery, applied to coastal environments. Bathymetric models of reflectance are used for modeling the influence of the water column on the incident light. Various parameters are optically active and are responsible for distorting the reflectance spectrum (phytoplankton, colored dissolved organic matter...). We adopt a new statistical approach for estimating these parameters, which are usually retrieved by inverting physical models. Various methods, such as maximum likelihood estimation, maximum a posteriori estimation, and Cramér-Rao bound calculation, are successfully implemented on simulated and real data. Moreover, we adapt the frequently used supervised detectors to the underwater target detection context. If some parameters describing the influence of the water column are unknown, we propose a new filter, based on the generalized likelihood ratio test, that enables detection without any a priori knowledge of these parameters.
Pujol, Nicolas. "Développement d'approches régionales et multivariées pour la détection de non stationnarités d'extrêmes climatiques. Applications aux précipitations du pourtour méditerranéen français." Montpellier 2, 2008. http://www.theses.fr/2008MON20090.
In the context of global warming, vulnerability to extreme hydrological events is constantly increasing, notably in France. While the effect of climate change on maximum temperatures is established, its impact on the regime of heavy rains is not, and we have wondered for some years about the perception of an increasing number of extreme events: poorly controlled urbanization, excessive media coverage, or climate change? In this context, the objective of this thesis is twofold. First, to define statistical tools allowing the regional study of the stationarity of climatic extremes. Indeed, climate change is a large-scale phenomenon which should have an impact at the regional scale. Secondly, to study the stationarity of intense Mediterranean precipitation. Two approaches are proposed here. The first detects significant local changes at the regional scale. The second consists in looking for a regional trend common to all the stations of a same zone. This second method is based on the use of multivariate copulas, which make it possible to formally take into account the spatial dependence of the data. Maximum likelihood estimation is then carried out via genetic algorithms. In this way, 92 precipitation series of the French Mediterranean region are examined. An increase of annual maxima is observed along a meridian band going from Ariège to Corrèze, and also in the Massif Central, the Cévennes, and the mountains of Roussillon.
Zaïdi, Abdelhamid. "Séparation aveugle d'un mélange instantané de sources autorégressives gaussiennes par la méthode du maximum de vraisemblance exact." Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10233.
Dib, Stephanie. "Distribution de la non-linéarité des fonctions booléennes." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4090/document.
Among the different criteria that a Boolean function must satisfy in symmetric cryptography, we focus on nonlinearity. This notion measures the Hamming distance between a given function and the set of functions of degree at most 1. It is a natural criterion for evaluating the complexity of a cryptographic function, which must not admit a simple approximation, such as by a function of degree 1 or, more generally, a function of low degree. Hence, it is important to consider the higher-order nonlinearity, which, for a given order r, measures the distance between a given function and the set of all functions of degree at most r. This notion is equally important for multi-output Boolean functions. When the number of variables is large enough, almost all Boolean functions have nonlinearities lying in a small neighbourhood of a certain high value. We prove that this fact holds when considering the second-order nonlinearity. Our method, which consists in observing how the Hamming balls pack the hypercube of Boolean functions, led quite naturally to a theoretical decoding bound for the first-order Reed-Muller code, coinciding with the concentration point of the nonlinearity of almost all functions. This was a new approach to a result which is not entirely new. We also studied the nonlinearity of multi-output functions. We proved, with a different approach, that the asymptotic behaviour of multi-output functions is the same as that of single-output ones: a concentration of the nonlinearity around a certain large value.
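For concreteness, the first-order nonlinearity discussed here is computed from the Walsh-Hadamard spectrum via nl(f) = 2^(n-1) - (1/2) max_a |W_f(a)|. A minimal sketch:

```python
import numpy as np

def nonlinearity(truth_table):
    """First-order nonlinearity of an n-variable Boolean function given
    as a 0/1 truth table of length 2**n, computed with the fast
    Walsh-Hadamard transform of the +1/-1 sign function."""
    n = int(np.log2(len(truth_table)))
    w = 1 - 2*np.asarray(truth_table, dtype=np.int64)   # 0/1 -> +1/-1
    for i in range(n):                                  # in-place fast WHT
        h = 1 << i
        for j in range(0, len(w), 2*h):
            a, b = w[j:j+h].copy(), w[j+h:j+2*h].copy()
            w[j:j+h], w[j+h:j+2*h] = a + b, a - b
    return 2**(n - 1) - np.max(np.abs(w))//2

# Example: f(x1, x2, x3) = x1*x2 XOR x3 has nonlinearity 2.
tt = [(x1 & x2) ^ x3 for x3 in (0, 1) for x2 in (0, 1) for x1 in (0, 1)]
print(nonlinearity(tt))  # -> 2
```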
Presles, Benoit. "Caractérisation géométrique et morphométrique 3-D par analyse d'image 2-D de distributions dynamiques de particules convexes anisotropes. Application aux processus de cristallisation." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00782471.
Full textEstève, Loïc. "Etude de la violation de symétrie CP à l'aide du canal de désintégration B0→rho0rho0 dans l'expérience BABAR." Paris 6, 2008. https://tel.archives-ouvertes.fr/tel-00444148.
Full textPresles, Benoît. "Caractérisation géométrique et morphométrique 3-D par analyse d'image 2-D de distributions dynamiques de particules convexes anisotropes. Application aux processus de cristallisation." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0632/document.
Solution crystallization processes are widely used in the process industry as separation and purification operations and are expected to produce solids with desirable properties. The properties concerning size and shape are known to have a considerable impact on the final quality of products. Hence, it is of prime importance to be able to determine the granulometry of the crystals (CSD) in formation. By using an in situ camera, it is possible to visualize in real time the 2D projections of the 3D particles in the suspension. The projection of a 3D object on a 2D plane necessarily involves a loss of information. Determining the size and the shape of a 3D object from its 2D projections is therefore not easy. This is the main goal of this work: to characterize 3D objects geometrically and morphometrically from their 2D projections. First of all, a method based on the maximum likelihood estimation of the probability density functions of projected geometrical measurements has been developed to estimate the size of 3D convex objects. Then, a stereological shape descriptor based on shape diagrams has been proposed. It makes it possible to characterize the shape of a 3D convex object independently of its size, and has notably been used to estimate the values of the anisotropy factors of the 3D convex objects. At last, a combination of the two previous studies has allowed the estimation of both the size and the shape of the 3D convex objects. This method has been validated with simulated data, compared to a method from the literature, and used to estimate size distributions of ammonium oxalate particles crystallizing in water, which have been compared to other CSD methods.
Trachi, Youness. "On induction machine faults detection using advanced parametric signal processing techniques." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0103/document.
This Ph.D. thesis aims to develop reliable and cost-effective condition monitoring and fault detection architectures for induction machines. These architectures are mainly based on advanced parametric signal processing techniques. To analyze and detect faults, a parametric stator current model under stationary conditions has been considered. It is assumed to consist of multiple sinusoids with unknown parameters in noise. This model has been estimated using parametric techniques such as subspace spectral estimators and the maximum likelihood estimator. A fault severity criterion based on the estimation of the stator current frequency component amplitudes has also been proposed to determine the induction machine failure level. A novel fault detector based on hypothesis testing has also been proposed. This detector is mainly based on the generalized likelihood ratio test with unknown signal and noise parameters. The proposed parametric techniques have been evaluated using experimental stator current signals issued from induction machines under two considered faults: bearing faults and broken rotor bars. Experimental results show the effectiveness and the detection ability of the proposed parametric techniques.
Allouche, Tahar. "Epistemic Approval Voting : Applications to Crowdsourcing Data Labeling." Electronic Thesis or Diss., Université Paris sciences et lettres, 2022. http://www.theses.fr/2022UPSLD056.
In epistemic approval voting, there is a hidden ground truth, and voters select the alternatives which, according to their beliefs, can correspond to the ground truth. These votes are then aggregated to estimate it. We first focus on tracking a simple truth, where exactly one alternative is correct. We advocate using aggregation rules that assign more weight to voters who select fewer alternatives, as they tend to be more accurate. This yields novel methods backed by theoretical results and experiments on image annotation datasets. Second, we consider cases where the ground truth contains multiple alternatives (e.g., notes in a chord, objectively best applicants). The size of the output can be either prior knowledge of the number of true alternatives, or an exogenous constraint bearing on the output of the rule regardless of the true size of the ground truth. We propose suitable solution concepts for each of these two interpretations.
Queguiner, Emeline. "Analysis of the data of the EDELWEISS-LT experiment searching for low-mass WIMP." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1196/document.
Many astrophysical and cosmological observations lead to postulating the existence of an unknown matter, called dark matter. Ordinary matter can explain only 5% of the energy content of the Universe: the main components would be dark energy (70%) and dark matter (25%). The latter is invisible and manifests itself only via its gravitational effects. Several particles, grouped under the generic term WIMP (Weakly Interacting Massive Particles), could correspond to this theory and are actively searched for. Many experiments have been developed for this purpose and are based on three strategies: the production of these particles with colliders, the observation of the particles produced by their annihilation in astrophysical objects, or the direct detection of these particles via their interaction with the nuclei of the atoms constituting a detector. It is on this last method that the EDELWEISS experiment is based. It is a dark matter direct detection experiment dedicated to the search for WIMPs with masses between 1 GeV and 1 TeV. Its primary purpose is to detect nuclear recoils induced by elastic scattering of dark matter particles in detectors. Since the expected event rates (< 10 /(kg.year)) are several orders of magnitude lower than those induced by ambient radioactivity, a double measurement of ionization and heat is used to discriminate electron-induced recoils arising from β and γ interactions from WIMP-induced nuclear recoils. In addition, the experiment was placed underground to guard against cosmic radiation, which induces events in the detectors. The detectors are germanium bolometers, called FID, cooled to cryogenic temperatures (18 mK) and operating at low field (1 V/cm). Since 2015, the new strategy of the experiment consists of focusing on WIMPs with mass below 10 GeV, an interesting research area where experiments using cryogenic detectors can exploit their ability to operate with experimental thresholds well below 1 keV. The operation of the experiment has been improved to achieve this goal. The aim of this thesis is to analyze the data set recorded by EDELWEISS in 2015 and 2016, using the FID detectors subjected to a greater electric field than previously in order to improve their sensitivity. The limit on the spin-independent WIMP-nucleon cross-section extracted from these data is expected to be greatly impacted by a dominant background of so-called heat-only events, which is why they are studied in detail in this work. They are characterized by a rise in heat seen by the thermal sensors without any ionization signal on the collecting electrodes. This study resulted in a model for these events that can be used in the WIMP search analyses. Following these results, a maximum likelihood analysis was constructed. This method of analysis makes it possible to statistically subtract the background noise of the experiment by exploiting the difference between the energy spectra of signal and backgrounds. In this way, limits on the spin-independent WIMP-nucleon cross-section are obtained and compared to the results of other experiments.
Fréret, Sandy. "Essais empiriques sur l'existence d'interactions horizontales en termes de dépenses publiques." Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00354575.
Full textPapakonstantinou, Konstantinos. "Les applications du traitement du signal statistique à la localisation mobile." Paris, Télécom ParisTech, 2010. http://www.theses.fr/2010ENST0041.
In this work we attack the problem of mobile terminal (MT) location estimation in NLoS environments. Traditional localization methods are 2-step processes: in the first step, a set of location-dependent parameters (LDP) is estimated; in the second step, the MT location is estimated by finding the position that best fits the LDP estimates. For the first step we have developed a high-resolution, low-complexity LDP estimation algorithm (4D Unitary ESPRIT) for MIMO-OFDM systems, to estimate the angles of arrival (AoA), the angles of departure (AoD), the delays (ToA) and the Doppler shifts (DS) of the multipath components (MPC). As far as the second step of localization is concerned, we developed several hybrid methods applicable to NLoS environments. In the NLoS localization problem, mapping the LDP estimates to the location of the MT is not trivial. To this end, we utilize static and dynamic geometrical channel models (e.g., the SBM). The two great advantages of the SBM-based methods are identifiability even in cases where LDP estimates are available for only 2 MPC, and remarkable performance in cases where the channel is richer. Due to these advantages, we consider SBM-based methods to be an appealing solution to the NLoS localization problem. Moreover, we have developed a direct location estimation (DLE) method for MIMO-OFDM systems. In contrast to traditional methods, DLE estimates the MT location directly from the received signal. Its main advantage is the enhanced accuracy at low to medium signal-to-noise ratio (SNR) and/or with a small number of data samples, as demonstrated by our results.
Avilès, Cruz Carlos. "Analyse de texture par statistiques d'ordre superieur : caracterisation et performances." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0001.
Poss, Stèphane. "Étalonnage de l'algorithme d'étiquetage de la saveur par la mesure de sin(2β) dans LHCb." Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2009. http://tel.archives-ouvertes.fr/tel-00430205.
Full textBachoc, François. "Estimation paramétrique de la fonction de covariance dans le modèle de Krigeage par processus Gaussiens : application à la quantification des incertitudes en simulation numérique." Phd thesis, Paris 7, 2013. http://www.theses.fr/2013PA077111.
The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. First, for a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error makes it possible to considerably improve its predictions. Second, for a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown.
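The Maximum Likelihood side of this work maximizes the Gaussian log-likelihood over the covariance parameters. A minimal one-parameter sketch, where the exponential covariance family and the one-dimensional randomly perturbed design are illustrative assumptions rather than the thesis setting:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(theta, X, y):
    """Negative Gaussian log-likelihood (up to a constant) of y observed
    at points X under the covariance model k(s, t) = exp(-|s - t|/theta)."""
    R = np.exp(-np.abs(X[:, None] - X[None, :])/theta)
    L = np.linalg.cholesky(R + 1e-10*np.eye(len(X)))   # jitter for stability
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5*(y @ alpha) + np.sum(np.log(np.diag(L)))

# Toy example: recover the correlation length theta from simulated data.
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0.0, 10.0, 50))     # randomly perturbed 1-D design
R_true = np.exp(-np.abs(X[:, None] - X[None, :])/2.0)
y = np.linalg.cholesky(R_true + 1e-10*np.eye(50)) @ rng.normal(size=50)
fit = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0),
                      args=(X, y), method="bounded")
print(fit.x)  # ML estimate of theta (true value 2.0)
```

Cross Validation replaces this criterion with a leave-one-out prediction error, which the thesis shows to be more robust when the covariance family is misspecified.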
Full textRodriguez, Valcarce Willy. "Estimation de l’histoire démographique des populations à partir de génomes entièrement séquencés." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0048/document.
The rapid development of DNA sequencing technologies is expanding the horizons of population genetic studies. It is expected that genomic data will increase our ability to reconstruct the history of populations. While this increase in genetic information will likely help biologists and anthropologists to reconstruct the demographic history of populations, it also poses big challenges. In some cases, the simplicity of the model may lead to erroneous conclusions about the population under study. Recent works have shown that DNA patterns expected in individuals coming from structured populations correspond with those of unstructured populations with changes in size through time. As a consequence, it is often difficult to determine whether demographic events such as expansions or contractions (bottlenecks) inferred from genetic data are real or due to the fact that populations are structured in nature. Moreover, almost no inferential method allowing the reconstruction of past demographic size changes takes structure effects into account. In this thesis, some recent results in population genetics are presented: (i) a model choice procedure is proposed to distinguish one simple scenario of population size change from one of a structured population, based on the coalescence times of two genes, showing that for these simple cases it is possible to distinguish both models using genetic information from one single individual; (ii) by using the notion of instantaneous coalescent rate, it is demonstrated that for any scenario of a structured population, or any other one, regardless of how complex it could be, there always exists a panmictic scenario with a precise function of population size changes having exactly the same distribution for the coalescence times of two genes. This not only explains why spurious signals of bottlenecks can be found in structured populations but also predicts the demographic history that current inference methods are likely to reconstruct when applied to non-panmictic populations. Finally, (iii) a method based on a Markov process is developed for inferring past demographic events taking the structure into account. This method uses the distribution of coalescence times of two genes to detect past demographic changes in structured populations from the DNA of one single individual. Some applications of the model to genomic data are discussed.
Donnet, Sophie. "Inversion de données IRMf : estimation et sélection de modèles." Paris 11, 2006. http://www.theses.fr/2006PA112193.
This thesis is devoted to the analysis of functional Magnetic Resonance Imaging (fMRI) data. In the framework of standard convolution models, we test a model that allows for variation of the magnitudes of the hemodynamic response. To estimate the parameters of this model, we resort to the Expectation-Maximization algorithm. We test this model against the standard one -- with constant magnitudes -- on several real data sets by a likelihood ratio test. The linear model suffers from a lack of biological basis, hence we consider a physiological model. In this framework, we describe the data as the sum of a regression term, defined as the non-analytical solution of an ordinary differential equation (ODE) depending on random parameters, and a Gaussian observation noise. We develop a general method to estimate the parameters of a statistical model defined by an ODE with non-observed parameters. This method, integrating a numerical resolution of the ODE, relies on a stochastic version of the EM algorithm. The convergence of the algorithm is proved and the error induced by the numerical solving method is controlled. We apply this method to simulated and real data sets. Subsequently, we consider statistical models defined by stochastic differential equations (SDEs) depending on random parameters. We approximate the diffusion process by a numerical scheme and propose a general estimation method. Results of a pharmacokinetic mixed model study (on simulated and real data sets) illustrate the accuracy of the estimation and the relevance of the SDE approach. Finally, the identifiability of statistical models defined by SDEs with random parameters is studied.