Dissertations / Theses on the topic 'Test à données aléatoires'
Consult the top 50 dissertations / theses for your research on the topic 'Test à données aléatoires.'
Korneva, Alexandrina. "The Cubicle Fuzzy Loop : A Testing Framework for Cubicle." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG095.
The goal of this thesis is to integrate a testing technique into the Cubicle model checker. To do this, we extended Cubicle with a fuzzing loop (called the Cubicle Fuzzy Loop, CFL). This new feature serves two primary purposes. First, it acts as an oracle for Cubicle's invariant generation algorithm. The existing algorithm, which is based on a forward exploration of reachable states, was significantly limited by its heuristics when applied to highly concurrent models. CFL introduces a more efficient way to explore these models, visiting a larger number of relevant states. Its second objective is to quickly and efficiently detect issues and vulnerabilities in models of all sizes, as well as deadlocks. The integration of CFL has also enabled us to enhance the expressiveness of Cubicle's input language, including new primitives for manipulating threads (locks, semaphores, etc.). Lastly, we built a testing framework around Cubicle and CFL with an interactive interpreter, which is useful for debugging, prototyping, and step-by-step execution of models. This new system has been successfully applied in a case study of a distributed consensus algorithm for blockchains.
Segalas, Corentin. "Inférence dans les modèles à changement de pente aléatoire : application au déclin cognitif pré-démence." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0298/document.
The aim of this work was to propose inferential methods to describe the natural history of the pre-diagnosis phase of dementia. During this phase, which can last around fifteen years, cognitive decline trajectories are nonlinear and heterogeneous between subjects. Because of this heterogeneity and nonlinearity, we chose a random changepoint mixed model to describe these trajectories. A first part of this work was to propose a testing procedure to assess the existence of a random changepoint. Indeed, in some subpopulations the cognitive decline seems smooth, and the question of the existence of a changepoint itself arises. This question is methodologically challenging because identifiability issues on some parameters under the null hypothesis make standard tests useless. We proposed a supremum score test to answer this question. A second part of this work was the comparison of the temporal order of the changepoints of different markers. Dementia is a multidimensional disease in which different dimensions of cognition are affected. Hypothetical cascade models exist for describing this natural history but have not been evaluated on real data. Comparing change over time of different markers measuring different cognitive functions gives precious insight into this hypothesis. In this spirit, we propose a bivariate random changepoint model allowing proper comparison of the times of change of two cognitive markers, potentially non-Gaussian. The proposed methodologies were evaluated in simulation studies and applied to real data from two French cohorts. Finally, we discussed the limitations of the two models, which focused on the late acceleration of cognitive decline before dementia diagnosis, and we proposed an alternative model that estimates the time of differentiation between cases and non-cases.
Bonakdar, Sakhi Omid. "Segmentation of heterogeneous document images : an approach based on machine learning, connected components analysis, and texture analysis." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00912566.
Ebeido, Kebieh Amira. "Test d'hypothèses et modèles aléatoires autorégressifs." Paris 2, 1987. http://www.theses.fr/1987PA020091.
Aboa, Yapo Jean-Pascal. "Méthodes de segmentation sur un tableau de variables aléatoires." Paris 9, 2002. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2002PA090042.
Gardy, Danièle. "Bases de données, allocations aléatoires : quelques analyses de performances." Paris 11, 1989. http://www.theses.fr/1989PA112221.
This thesis is devoted to the analysis of some parameters of interest for estimating the performance of computer systems, most notably database systems. The unifying features are the description of the phenomena to be studied in terms of random allocations and the systematic use of methods from the average-case analysis of algorithms. We associate a generating function with each parameter of interest, which we use to derive an asymptotic expression of this parameter. The main problem studied in this work is the estimation of the sizes of derived relations in a relational database framework. We show that this is closely related to the so-called "occupancy problem" in urn models, a classical tool of discrete probability theory. We characterize the conditional distribution of the size of a relation derived from relations whose sizes are known, and give conditions which ensure the asymptotic normality of the limiting distribution. We next study the implementation of "logical" relations by multi-attribute or doubly chained trees, for which we give results on the complexity of a random orthogonal range query. Finally, we study some "dynamic" random allocation phenomena, such as the birthday problem, which models the occurrence of collisions in hashing, and a model of the Least Recently Used cache memory algorithm.
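The birthday problem mentioned in this abstract has a compact closed form; the sketch below (standalone Python with illustrative parameters, not code from the thesis) compares the exact collision probability with a Monte Carlo estimate.

```python
import random

def collision_probability(n_keys, n_buckets):
    """Exact probability that at least two of n_keys land in the same
    bucket, assuming uniform independent hashing (birthday problem)."""
    p_no_collision = 1.0
    for i in range(n_keys):
        p_no_collision *= (n_buckets - i) / n_buckets
    return 1.0 - p_no_collision

def simulate_collisions(n_keys, n_buckets, trials=20000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(n_keys):
            b = rng.randrange(n_buckets)
            if b in seen:
                hits += 1
                break
            seen.add(b)
    return hits / trials

exact = collision_probability(23, 365)   # classic birthday setting
approx = simulate_collisions(23, 365)
print(round(exact, 3))  # ≈ 0.507
```

With only 23 keys and 365 buckets the collision probability already exceeds one half, which is why collisions dominate the analysis of hashing.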
Caron, Maxime. "Données confidentielles : génération de jeux de données synthétisés par forêts aléatoires pour des variables catégoriques." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25935.
Confidential data are very common in statistics nowadays. One way to treat them is to create partially synthetic datasets for data sharing. We present an algorithm based on random forests to generate such datasets for categorical variables. We are interested in the formula used to make inference from multiple synthetic datasets. We show that the order of the synthesis has an impact on the estimation of the variance with this formula. We propose a variant of the algorithm inspired by differential privacy, and show that we are then not able to estimate a regression coefficient or its variance. We show the impact of synthetic datasets on structural equation modeling. One conclusion is that the synthetic dataset does not really affect the coefficients between latent variables and measured variables.
Chevalier, Cyril. "Contribution au test intégré : générateurs de vecteurs de test mixtes déterministes et pseudo-aléatoires." Montpellier 2, 1994. http://www.theses.fr/1994MON20141.
Hillali, Younès. "Analyse et modélisation des données probabilistes : capacités et lois multidimensionnelles." Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090015.
Elhadji, Ille Gado Nassara. "Méthodes aléatoires pour l’apprentissage de données en grande dimension : application à l'apprentissage partagé." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0032.
This thesis deals with the study of random methods for learning from large-scale data. First, we propose an unsupervised approach consisting in the estimation of the principal components when the sample size and the observation dimension tend towards infinity. This approach is based on random matrices and uses consistent estimators of the eigenvalues and eigenvectors of the covariance matrix. Then, in the case of supervised learning, we propose an approach which consists in reducing the dimension by an approximation of the original data matrix and then performing LDA in the reduced space. Dimension reduction is based on low-rank matrix approximation through the use of random matrices. A fast approximation algorithm for the SVD, and a modified version as a fast approximation by spectral gap, are developed. Experiments are carried out on real images and text data. Compared to other methods, the proposed approaches provide an error rate that is often optimal, with a small computation time. Finally, our contribution to transfer learning consists in the use of subspace alignment and low-rank approximation of matrices by random projections. The proposed method is applied to data derived from benchmark databases; it has the advantage of being efficient and adapted to large-scale data.
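Low-rank approximation by random projection, as mentioned in this abstract, can be sketched in a few lines; the following is a generic randomized-SVD sketch with illustrative parameters, not the thesis's own algorithm.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, seed=0):
    """Rank-`rank` approximation of A via random projection
    (a generic randomized-SVD sketch; parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega
    # Orthonormal basis of the sampled range.
    Q, _ = np.linalg.qr(Y)
    # SVD of the small projected matrix, then lift back.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# A matrix of exact rank 5: the randomized sketch recovers it almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_low_rank(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err < 1e-8)  # True: an exact-rank matrix is recovered to machine precision
```

The cost is dominated by two thin matrix products and one small SVD, which is what makes such schemes attractive for large-scale data.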
Clément, Julien. "Algorithmes, mots et textes aléatoires." Habilitation à diriger des recherches, Université de Caen, 2011. http://tel.archives-ouvertes.fr/tel-00913127.
Virazel, Arnaud. "Test intégré des circuits digitaux : analyse et génération de séquences aléatoires adjacentes." Montpellier 2, 2001. http://www.theses.fr/2001MON20094.
Dhayni, Achraf. "Test intégré pseudo aléatoire pour les composants microsystèmes." Grenoble INPG, 2006. https://tel.archives-ouvertes.fr/tel-00135916.
The growing use of MEMS in life-critical applications has accelerated the need for robust test methods. MEMS have complex failure mechanisms and device dynamics that are most often poorly understood. This is due to their multi-domain nature, which makes them inherently complex for both design and test. Manufacturing is further complicated by the need for new fabrication steps, in particular when System-in-Package (SiP) techniques are used. These packaging techniques make it possible to build a module that contains highly heterogeneous IP blocks or chips, giving important benefits in terms of time-to-market shortening and miniaturization. However, this poses many test problems. In this area, BIST techniques for analog and mixed-signal circuits have attracted considerable industrial interest for helping reduce increasing test-related difficulties. In this thesis we propose a pseudorandom (PR) functional BIST for MEMS. Since the test control is necessarily electrical, electrical test sequences must be converted to the energy domain required by the MEMS. Thus, we propose the use of pseudorandom electrical pulses, which have the advantage of being easily generated on-chip; the conversion to the actual energy domain has been demonstrated for different types of MEMS. We show how different types of PR sequences can be exploited within a BIST approach for both linear and nonlinear MEMS. In general, we show that two-level PR sequences are sufficient for testing both linear and nonlinear MEMS. In addition, while two-level PR sequences are sufficient for characterizing linear MEMS, we describe how the use of multilevel PR sequences is necessary for the characterization of nonlinear MEMS. The number of levels needed depends on the order of nonlinearity of the MEMS under test.
The output test response is digitized using an existing on-chip self-testable ADC, and a digital circuit performs some simple digital signal processing to extract Impulse Response (IR) samples for linear MEMS, or Volterra kernel samples for nonlinear MEMS. Next, these samples (called the test signature) are compared with their tolerance ranges and a pass/fail signal is generated by the BIST. We use Monte Carlo simulations to derive the test signature tolerance ranges from the specification tolerance ranges. Monte Carlo simulations are also used to form the test signature after a sensitivity analysis, and to inject parametric variations in order to calculate the test metrics and optimize BIST design parameters, such as the length of the LFSR and the bit precision of the digital circuitry. We have applied the PR BIST to MEMS such as commercialized accelerometers and microbeams that we have fabricated. Satisfactory experimental results have been obtained.
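The LFSR mentioned in this abstract is the classic on-chip generator of such pseudorandom test stimuli; a minimal standalone sketch follows (illustrative taps and seed, not the thesis's design).

```python
def lfsr_sequence(taps, seed, length):
    """Fibonacci LFSR producing a pseudorandom bit stream, the classic
    stimulus generator in pseudorandom BIST schemes. `taps` lists the
    1-based feedback stages; [4, 3] encodes x^4 + x^3 + 1, a
    maximal-length (primitive) polynomial chosen for illustration."""
    n = max(taps)
    state = seed
    bits = []
    for _ in range(length):
        bits.append(state & 1)            # output the low bit
        fb = 0
        for t in taps:                    # XOR the tapped stages
            fb ^= (state >> (n - t)) & 1
        state = (state >> 1) | (fb << (n - 1))
    return bits

# A 4-bit maximal-length LFSR cycles through all 15 nonzero states,
# so the bit stream repeats with period 2**4 - 1 = 15.
seq = lfsr_sequence(taps=[4, 3], seed=0b1000, length=30)
print(seq[:15] == seq[15:])  # True
```

A maximal-length register of n bits yields 2^n - 1 distinct stimuli before repeating, which is why LFSR length is one of the BIST design parameters tuned in the abstract above.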
Ferrigno, Sandie. "Un test d'adéquation global pour la fonction de répartition conditionnelle." Montpellier 2, 2004. http://www.theses.fr/2004MON20110.
Operto, Grégory. "Analyse structurelle surfacique de données fonctionnelles cérébrales." Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX30060.
Functional data acquired by magnetic resonance contain a measure of the activity at every location of the brain. While many methods exist, the automatic analysis of these data remains an open problem. In particular, the huge majority of these methods consider these data in a volume-based fashion, in the 3D acquisition space. However, most of the activity is generated within the cortex, which can be considered as a surface. Considering the data on the cortical surface has many advantages: on the one hand, its geometry can be taken into account in every processing step; on the other hand, considering the whole volume reduces the detection power of the usually employed statistical tests. This thesis hence proposes an extension of the application field of volume-based methods to the surface-based domain by addressing problems such as projecting data onto the surface, performing surface-based multi-subject analyses, and estimating the validity of results.
Alès, de Corbet Jean-Pierre d'. "Approximation linéaire et non linéaire de fonctions aléatoires : application à la compression des images numériques." Paris 9, 1996. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1996PA090025.
Poiret, Aurélien. "Équations de Schrödinger à données aléatoires : construction de solutions globales pour des équations sur-critiques." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00771354.
Do, Huy Vu. "Conception testable et test de logiciels flots de données." Grenoble INPG, 2006. http://www.theses.fr/2006INPG0107.
This work concerns the testability analysis of data-flow designs of reactive systems developed using the two development environments SCADE and SIMULINK. Testability, which is used to estimate how easy a system is to test, is a combination of two measures: controllability and observability. We use the SATAN technology, which is based on information theory, to model the transfer of information in the system. The testability measures are computed from the loss of information in the system, where each operator contributes to this loss. The loss of information of an operator can be evaluated either exhaustively, based on the "truth table" of the operator's function, or statistically, based on simulation results of the operator. Our approach is integrated in a tool allowing an automatic testability analysis of graphical data-flow designs of reactive systems.
Poirier, Régis. "Compression de données pour le test des circuits intégrés." Montpellier 2, 2004. http://www.theses.fr/2004MON20119.
Bonis, Thomas. "Algorithmes d'apprentissage statistique pour l'analyse géométrique et topologique de données." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS459/document.
In this thesis, we study data analysis algorithms using random walks on neighborhood graphs, or random geometric graphs. It is known that random walks on such graphs approximate continuous objects called diffusion processes. In the first part of this thesis, we use this approximation result to propose a new soft clustering algorithm based on the mode-seeking framework. For our algorithm, we want to define clusters using the properties of a diffusion process. Since we do not have access to this continuous process, our algorithm uses a random walk on a random geometric graph instead. After proving the consistency of our algorithm, we evaluate its efficiency on both real and synthetic data. We then tackle the issue of the convergence of invariant measures of random walks on random geometric graphs. As these random walks converge to a diffusion process, we can expect their invariant measures to converge to the invariant measure of this diffusion process. Using an approach based on Stein's method, we manage to quantify this convergence. Moreover, the method we use is more general and can be used to obtain other results, such as convergence rates for the Central Limit Theorem. In the last part of this thesis, we use persistent homology, a concept from algebraic topology, to improve the pooling step of the bag-of-words approach for 3D shapes.
Gregorutti, Baptiste. "Forêts aléatoires et sélection de variables : analyse des données des enregistreurs de vol pour la sécurité aérienne." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066045/document.
New recommendations require airlines to establish a safety management strategy to keep reducing the number of accidents. Flight data recorders have to be systematically analysed in order to identify, measure and monitor risk evolution. The aim of this thesis is to propose methodological tools to answer the issue of flight data analysis. Our work revolves around two statistical topics: variable selection in supervised learning and functional data analysis. Random forests are used as they implement importance measures which can be embedded in selection procedures. First, we study the permutation importance measure when the variables are correlated. This criterion is extended to groups of variables, and a new selection algorithm for functional variables is introduced. These methods are applied to the risks of long landing and hard landing, which are two important questions for airlines. Finally, we present the integration of the proposed methods in the FlightScanner software implemented by Safety Line. This new solution in air transport helps safety managers to monitor risks and identify contributing factors.
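The permutation importance measure studied in this abstract can be sketched model-agnostically: shuffle one column and record the increase in prediction error. The linear least-squares model below is only an illustration, not the random-forest setting of the thesis.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=20, seed=0):
    """Model-agnostic permutation importance: the average increase in
    mean squared error when one column of X is shuffled (a sketch of
    the criterion; any fitted `predict` function can be plugged in)."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the link between feature j and y
            scores[j] += np.mean((predict(Xp) - y) ** 2) - base
    return scores / n_repeats

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 2]          # feature 1 is irrelevant
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
imp = permutation_importance(lambda A: A @ beta, X, y)
print(imp.argmax())  # 0: the strongest predictor dominates
```

Correlated features complicate this picture, which is exactly the situation the thesis analyses: shuffling one of two correlated predictors leaves much of its signal reachable through the other.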
Mirauta, Bogdan. "Etude du transcriptome à partir de données de comptages issues de séquençage haut débit." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066424.
In this thesis we address the problem of reconstructing the transcription profile from RNA-Seq reads in cases where the reference genome is available but without making use of existing annotation. The first two chapters consist of an introduction to the biological context, high-throughput sequencing and the statistical methods that can be used in the analysis of series of counts. We then present our contributions: the RNA-Seq read count model, the inference of the transcription profile by using Particle Gibbs, and the reconstruction of differentially expressed (DE) regions. The analysis of several data sets proved that using Negative Binomial distributions to model the read count emission is not generally valid. We develop a mechanistic model which accounts for the randomness generated within all RNA-Seq protocol steps. Such a model is particularly important for the assessment of the credibility intervals associated with the transcription level and coverage changes. Next, we describe a State Space Model with the read count profile as observations and the transcription profile as the latent variable. For the transition kernel we design a mixture model combining the possibility of making, between two adjacent positions, no move, a drift move or a shift move. We detail our approach for the reconstruction of the transcription profile and the estimation of parameters using the Particle Gibbs algorithm. In the fifth chapter we complete the results by presenting an approach for analysing differences in expression without making use of existing annotation. The proposed method first approximates these differences for each base pair and then aggregates continuous DE regions.
Lenain, Jean-François. "Comportement asymptotique des estimateurs à noyau de la densité, avec des données discrétisées, pour des suites et des chanmps aléatoires dépendants et non-stationnaires." Limoges, 1999. http://www.theses.fr/1999LIMO0034.
El, Haj Abir. "Stochastics blockmodels, classifications and applications." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2300.
This PhD thesis focuses on the analysis of weighted networks, where each edge is associated with a weight representing its strength. We introduce an extension of the binary stochastic block model (SBM), called the binomial stochastic block model (bSBM). This question is motivated by the study of co-citation networks in a context of text mining where data is represented by a graph. Nodes are words and each edge joining two words is weighted by the number of documents in the corpus simultaneously citing this pair of words. We develop an inference method based on a variational expectation-maximization (VEM) algorithm to estimate the parameters of the model as well as to classify the words of the network. Then, we adopt a method based on maximizing an integrated classification likelihood (ICL) criterion to select the optimal model and the number of clusters. In addition, we develop a variational approach to analyze the given network, and we compare the two approaches. Applications based on real data are presented to show the effectiveness of the two methods as well as to compare them. Finally, we develop an SBM model with several attributes to deal with node-weighted networks. We motivate this approach by an application that aims at developing a tool to help the specification of the different cognitive treatments performed by the brain during the preparation of writing.
Lumbroso, Jérémie. "Probabilistic algorithms for data streaming and random generation." Paris 6, 2012. http://www.theses.fr/2012PA066618.
This thesis examines two types of problems — analyzing large quantities of real data, and the complementary problem of creating large quantities of (random) data — using a set of common tools: analytic combinatorics (and generating functions), enumeration, probabilities, probabilistic algorithms, and in particular the Boltzmann method for random generation. First, we study several data streaming algorithms: algorithms which are able to extract information from large streams of data using very limited resources (in particular, memory and processing time per element of the stream). One of our main contributions is to provide a full analysis of an optimal algorithm to estimate the number of distinct elements in a stream, a problem which has garnered a lot of research in the past. Our second contribution, joint work with researchers from UPC in Barcelona, is to introduce a completely novel type of estimator for the number of distinct elements, which uses statistics on permutations. The second part focuses on the random generation of both laws and combinatorial objects. We introduce the first optimal algorithm for the random generation of the discrete uniform law, which is one of the most widely used building blocks in computational simulations. We also, with Olivier Bodini, introduce an extension of the Boltzmann method to randomly generate a new kind of object belonging to multiplicative combinatorics, an underexplored part of combinatorics with ties to analytic number theory. Finally, we present ongoing work with Olivier Bodini on improving the practicality of the Boltzmann method.
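Distinct-element estimation of the kind analysed in this abstract can be illustrated with a k-minimum-values sketch — a simple relative of such estimators, not the thesis's own algorithm: only the k smallest hash values are kept, whatever the stream length.

```python
import hashlib

def _hash01(item):
    """Hash an item to a float in [0, 1)."""
    h = hashlib.sha256(str(item).encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def kmv_estimate(stream, k=256):
    """K-minimum-values cardinality sketch: keep the k smallest distinct
    hash values and estimate the number of distinct elements as
    (k - 1) / (k-th smallest hash). Memory is O(k), independent of the
    stream length; relative error is roughly 1/sqrt(k)."""
    smallest = set()
    threshold = 1.0
    for item in stream:
        h = _hash01(item)
        if h < threshold:
            smallest.add(h)
            if len(smallest) > k:
                smallest.discard(max(smallest))
                threshold = max(smallest)
    if len(smallest) < k:
        return len(smallest)              # small streams: exact count
    return int((k - 1) / max(smallest))

# 10 000 distinct items, each seen twice; the sketch stores only 512 hashes.
est = kmv_estimate([i % 10000 for i in range(20000)], k=512)
```

Duplicates hash to the same value and are absorbed by the set, which is what makes the estimator insensitive to repetitions in the stream.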
Yáñez-Godoy, Humberto. "Mise à jour de variables aléatoires à partir des données d'instrumentations pour le calcul en fiabilité de structures portuaires." Nantes, 2008. http://www.theses.fr/2008NANT2122.
This research deals with the reliability assessment of harbour structures. The structures considered are pile-supported wharfs. The behaviour of these structures presents several hazards, in particular because of the difficult conditions of building and extreme loadings (storms). This last point is approached in a classical way from existing models. This dissertation concentrates primarily on the first point. We propose to resort to monitoring data from these structures. A state of the art in monitoring of harbour structures was carried out and led to an original strategy of instrumentation of two similar wharfs in order to analyse their behaviour under horizontal loading. Tie-rods were therefore instrumented and piezometric sensors were installed. Measurements of trajectories of stochastic fields of loads obtained from monitoring are used to model both the embankment loadings and the tie-rod stiffness; a comparative statistical analysis of the loads in the operational phase of the two wharfs is then carried out. A probabilistic modelling is then proposed and an inverse analysis is carried out on the basis of mechanical models. In this phase, the probabilistic approach is based both on the identification of parameters of classical laws of probability and on the identification of parameters on polynomial chaos. An assessment of the probability of failure, considering a limit-state performance criterion, can then be carried out either classically by a Monte Carlo method or by a non-intrusive stochastic finite element method. The reliability computation considers the combination of winter storm loading and high-coefficient tides.
Bessac, Julie. "Sur la construction de générateurs aléatoires de conditions de vent au large de la Bretagne." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S067/document.
This work is aimed at constructing stochastic weather generators. These models make it possible to simulate artificial weather data whose statistical properties are consistent with observed meteorology and climate. Outputs of these models are generally used in impact studies in agriculture or in ecology.
Biard, Lucie. "Test des effets centre en épidémiologie clinique." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC302.
Centre effects modelling within the framework of survival data often relies on the estimation of Cox mixed effects models. Testing for a centre effect consists in testing to zero the variance component of the corresponding random effect. In this framework, the identification of the null distribution of the usual test statistics is not always straightforward. Permutation procedures have been proposed as an alternative for generalised linear mixed models. The objective was to develop a permutation test procedure for random effects in a Cox mixed effects model, for the test of centre effects. We first developed and evaluated permutation procedures for the test of a single centre effect on the baseline risk. The test was used to investigate a centre effect in a clinical trial of induction chemotherapy for patients with acute myeloid leukaemia. The second part consisted in extending the procedure to the test of multiple random effects in survival models. The aim was to be able to examine both centre effects on the baseline risk and centre effects on the effect of covariates. The procedure was illustrated on two cohorts of acute leukaemia patients. In a third part, the permutation approach was applied to a cohort of critically ill patients with hematologic malignancies, to investigate centre effects on hospital mortality. The proposed permutation procedures appear to be robust approaches, easily implemented for the test of random centre effects in routine practice. They are an appropriate tool for the analysis of centre effects in clinical epidemiology, with the purpose of understanding their sources.
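The permutation idea behind this abstract — recompute a statistic after randomly reassigning centre labels — can be sketched generically. This toy version uses a between-centre variance statistic on raw outcomes, not the Cox mixed-model machinery developed in the thesis.

```python
import random
import statistics

def permutation_test_centre_effect(values, centres, n_perm=2000, seed=0):
    """Permutation test for a centre effect: the variance of centre means
    is compared with its distribution when centre labels are randomly
    reassigned (a generic sketch of the permutation principle only)."""
    def stat(vals, labs):
        groups = {}
        for v, l in zip(vals, labs):
            groups.setdefault(l, []).append(v)
        means = [statistics.fmean(g) for g in groups.values()]
        return statistics.pvariance(means)

    rng = random.Random(seed)
    observed = stat(values, centres)
    labs = list(centres)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(labs)                 # break any real centre effect
        if stat(values, labs) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)    # permutation p-value

# Strong centre effect: centre B is shifted by +3.
rng = random.Random(1)
vals = [rng.gauss(0, 1) for _ in range(40)] + [rng.gauss(3, 1) for _ in range(40)]
cents = ["A"] * 40 + ["B"] * 40
p = permutation_test_centre_effect(vals, cents)
```

The appeal, as the abstract notes, is that no null distribution needs to be derived analytically: shuffling labels generates it empirically.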
Montagner, Morancho Laurence. "Nouvelle méthode de test en rétention de données de mémoires non volatiles." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2004. http://tel.archives-ouvertes.fr/tel-00135027.
Montagner-Morancho, Laurence. "Nouvelle méthode de test en rétention de données de mémoires non volatiles." Toulouse, INPT, 2004. http://www.theses.fr/2004INPT027H.
The introduction of non-volatile memory in Smartpower circuits has made systematic 100% die data retention testing necessary. Usual tests operated on high production volumes drastically increase test time. In this work, we propose a new data retention test for non-volatile memory. In a first part, we present a state of the art of intrinsic and extrinsic NVM defects and of reliability tests. In a second part, we studied the thermal data retention behaviour of NVM on an engineering lot, from ambient temperature to 300°C over 7000 h. This study allows cell discrimination to validate a new data retention test whose duration is strongly reduced compared to the thermal one: after optimisation phases, test time will be about a few seconds, and the test can then be implemented in the production flow.
Saumard, Mathieu. "Contribution à l'analyse statistique des données fonctionnelles." Thesis, Rennes, INSA, 2013. http://www.theses.fr/2013ISAR0009/document.
In this thesis, we are interested in functional data. The problem of estimation in a model of estimating equations is studied. We derive a central limit type theorem for the considered estimator. The optimal instruments are estimated, and we obtain uniform convergence of the estimators. We are then interested in various testing problems with functional data. We study the problem of nonparametric testing for the effect of a random functional covariate on an error term which could be directly observed as a response or estimated from a functional model, like for instance the functional linear model. In order to construct the tests, we proved a dimension-reduction result which relies on projections of the functional covariate. We constructed no-effect tests by using a kernel smoothing or a nearest-neighbour smoothing. A goodness-of-fit test in the functional linear model is also proposed. All these tests are studied from theoretical and practical perspectives.
Leroux, (zinovieva) Elena. "Méthodes symboliques pour la génération de tests de systèmes réactifs comportant des données." Phd thesis, Université Rennes 1, 2004. http://tel.archives-ouvertes.fr/tel-00142441.
Full textde transitions ne permet pas de le faire. Ceci oblige à énumérer les valeurs des données avant de construire le modèle de système de transitions d'un système, ce qui peut provoquer le problème de l'explosion de l'espace d'états. Cette énumération a également pour effet d'obtenir des cas de test où toutes les données sont instanciées. Or, cela contredit la pratique industrielle où les cas de test sont de vrais programmes avec des variables et des paramètres. La génération de tels
cas de test exige de nouveaux modèles et techniques. Dans cette thèse, nous atteignons deux objectifs. D'une part, nous introduisons un modèle appelé système symbolique de transitions à entrée/sortie qui inclut explicitement toutes les données d'un système réactif. D'autre part, nous proposons et implémentons une nouvelle technique de génération de test qui traite symboliquement les données d'un système en combinant l'approche de génération de test proposée auparavant par notre groupe de recherche avec des techniques d'interprétation abstraite. Les cas de test générés automatiquement par notre technique satisfont des propriétés de correction: ils émettent toujours un verdict correct.
Gabriel, Edith. "Détection de zones de changement abrupt dans des données spatiales et application à l'agriculture de précision." Montpellier 2, 2004. http://www.theses.fr/2004MON20107.
Full textKousignian, Isabelle. "Modélisation biostatistique de données longitudinales : applications à des marqueurs immunologiques de l'infection à VIH." Paris 6, 2003. http://www.theses.fr/2003PA066173.
Full textDalmasso, Julien. "Compression de données de test pour architecture de systèmes intégrés basée sur bus ou réseaux et réduction des coûts de test." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20061/document.
Full text
As microelectronic systems become more and more complex, test costs have increased accordingly. Recent years have seen many works focused on test cost reduction through test data compression. However, these techniques only target individual digital circuits whose structural implementation (netlist) is fully known by the designer; they are therefore not suitable for testing the cores of a complete system. The goal of this PhD work was to provide a new solution for test data compression of integrated circuits that takes into account the paradigm of systems-on-chip (SoC) built from pre-synthesized functions (IPs or cores). Two system-level testing methods using compression are then proposed for two different system architectures: the first concerns SoCs with an IEEE 1500 test architecture (with a bus-based test access mechanism), the second concerns NoC-based systems. Both techniques combine test scheduling with test data compression for better exploration of the design space. The idea is to increase test parallelism at no extra hardware cost. Experimental results on system-on-chip benchmarks show that using test data compression leads to a test time reduction of about 50% at system level.
Lebouvier, Marine. "Test du modèle unitaire de dépense des ménages sur les données canadiennes de 2009." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9765.
Full text
Peyre, Julie. "Analyse statistique des données issues des biopuces à ADN." Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00012041.
Full textDans un premier chapitre, nous étudions le problème de la normalisation des données dont l'objectif est d'éliminer les variations parasites entre les échantillons des populations pour ne conserver que les variations expliquées par les phénomènes biologiques. Nous présentons plusieurs méthodes existantes pour lesquelles nous proposons des améliorations. Pour guider le choix d'une méthode de normalisation, une méthode de simulation de données de biopuces est mise au point.
Dans un deuxième chapitre, nous abordons le problème de la détection de gènes différentiellement exprimés entre deux séries d'expériences. On se ramène ici à un problème de test d'hypothèses multiples. Plusieurs approches sont envisagées : sélection de modèles et pénalisation, méthode FDR basée sur une décomposition en ondelettes des statistiques de test ou encore seuillage bayésien.
Dans le dernier chapitre, nous considérons les problèmes de classification supervisée pour les données de biopuces. Pour remédier au problème du "fléau de la dimension", nous avons développé une méthode semi-paramétrique de réduction de dimension, basée sur la maximisation d'un critère de vraisemblance locale dans les modèles linéaires généralisés en indice simple. L'étape de réduction de dimension est alors suivie d'une étape de régression par polynômes locaux pour effectuer la classification supervisée des individus considérés.
Carrière, Isabelle. "Comparaisons des méthodes d'analyse des données binaires ou ordinales corrélées. Application à l'étude longitudinale de l'incapacité des personnes âgées." Phd thesis, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00107384.
Full text
…important in epidemiology. The longitudinal study of disability in the elderly and the search for risk factors for life with disability represent a crucial public health issue. In this context, we compare marginal logistic models and random-effects models, taking disability as a binary response, in order to illustrate the following aspects: choice of the covariance structure, importance of missing data and time-dependent covariates, and interpretation of the results. The random-effects model is used to build a predictive score of disability derived from a large analysis of the risk factors available in the Epidos cohort. Mixed ordered logistic models are then described and compared, and we show how they make it possible to investigate differentiated effects of the factors on the stages of disability.
Verdie, Yannick. "Modélisation de scènes urbaines à partir de données aériennes." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4078.
Full text
Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Over the last decades, an important demand has arisen for automatic methods generating representations of urban scenes. This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labeling. In urban scene reconstruction, two steps are necessary: identifying the different elements of the scene, and representing these elements with 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold: first, this brings awareness of the scene for better understanding; second, different reconstruction strategies can be adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed, using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented for reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on complex urban structures illustrate the robustness and scalability of our systems.
Vrac, Mathieu. "Analyse et modélisation de données probabilistes par décomposition de mélange de copules et application à une base de données climatologiques." Phd thesis, Université Paris Dauphine - Paris IX, 2002. http://tel.archives-ouvertes.fr/tel-00002386.
Full textClaeys, Emmanuelle. "Clusterisation incrémentale, multicritères de données hétérogènes pour la personnalisation d’expérience utilisateur." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD039.
Full text
In many activity sectors (health, online sales, ...), designing an optimal solution for a given problem from scratch (finding a protocol that increases the cure rate, designing a web page that promotes the purchase of one or more products, ...) is often very difficult or even impossible. To face this difficulty, designers (doctors, web designers, production engineers, ...) often work incrementally, by successive improvements of an existing solution. However, defining the most relevant changes remains a difficult problem. A solution adopted more and more frequently is therefore to constructively compare different alternatives (also called variations) in order to determine the best one through an A/B test. The idea is to implement these alternatives and compare the results obtained, i.e. the respective rewards obtained by each variation. To identify the optimal variation in the shortest possible time, many test methods use an automated dynamic allocation strategy: subjects are allocated quickly and automatically to the most efficient variation, through reinforcement learning algorithms (such as one-armed bandit methods). These methods have proved their worth in practice but also show limitations, in particular an overly long latency (i.e. the delay between the arrival of a subject to be tested and its allocation), a lack of explainability of the choices made, and difficulty integrating an evolving context describing the subject's behaviour before being tested. The overall objective of this thesis is to propose an understandable, generic A/B testing method allowing dynamic real-time allocation that takes into account both the temporal and static characteristics of the subjects.
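The dynamic allocation strategy described in this abstract can be sketched with a minimal epsilon-greedy bandit in Python (a simplified stand-in for the one-armed bandit methods mentioned; the function name, reward probabilities and parameters below are illustrative, not taken from the thesis):

```python
import random

def epsilon_greedy_ab_test(variation_rewards, n_subjects=10_000,
                           epsilon=0.1, seed=0):
    """Dynamically allocate subjects to A/B variations with epsilon-greedy.

    variation_rewards holds the true success probabilities (unknown in a
    real test; used here only to simulate subject responses).
    Returns (successes, trials) per variation.
    """
    rng = random.Random(seed)
    k = len(variation_rewards)
    successes = [0] * k
    trials = [0] * k
    for _ in range(n_subjects):
        # Explore with probability epsilon (and until every arm was tried),
        # otherwise exploit the variation with the best empirical rate.
        if rng.random() < epsilon or 0 in trials:
            arm = rng.randrange(k)
        else:
            arm = max(range(k), key=lambda i: successes[i] / trials[i])
        trials[arm] += 1
        if rng.random() < variation_rewards[arm]:
            successes[arm] += 1
    return successes, trials

# Two hypothetical variations; variation 1 converts much better.
successes, trials = epsilon_greedy_ab_test([0.2, 0.8])
```

With a clear gap between the variations, the exploitation step quickly concentrates most subjects on the better one while the epsilon fraction keeps exploring; the thesis's contribution lies in making such allocations explainable and context-aware, which this sketch does not attempt.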
Guyader, Arnaud. "Contribution aux algorithmes de décodage pour les codes graphiques." Rennes 1, 2002. http://www.theses.fr/2002REN10014.
Full text
Full textNoumon, Allini Elie. "Caractérisation, évaluation et utilisation du jitter d'horloge comme source d'aléa dans la sécurité des données." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES019.
Full text
This thesis, funded by the DGA, is motivated by the problem of evaluating TRNGs for applications with a very high level of security. As current standards such as AIS-31 are not sufficient for these types of applications, the DGA proposes a complementary procedure, validated on TRNGs using ring oscillators (RO), which aims to characterize the source of randomness of the TRNG in order to identify the electronic noises present in it. These noises manifest themselves in digital circuits as the clock jitter generated in the RO. They can be characterized by their power spectral density, related to the time Allan variance, which, unlike the standard variance still widely used, makes it possible to discriminate the different types of noise (mainly thermal and flicker). This study was used as a basis for estimating the proportion of jitter due to thermal noise used in stochastic models describing the output of the TRNG. In order to illustrate and validate the DGA certification approach on TRNG principles other than RO, we propose a characterization of the PLL as a source of randomness. We modeled the PLL in terms of transfer functions. This modeling led to the identification of the source of noise at the output of the PLL, as well as of its nature as a function of the physical parameters of the PLL. This allowed us to propose recommendations on the choice of parameters to ensure maximum entropy. In order to help in the design of this type of TRNG, we also propose a tool to search for the non-physical parameters of the generator ensuring the best compromise between security and throughput.
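As a toy illustration of why the Allan variance is useful here: for pure white (thermal-like) frequency noise it falls as 1/tau, whereas flicker noise gives a flat floor, which is what lets the two contributions be discriminated. A minimal non-overlapping Allan variance in Python (names and parameters are ours, not from the thesis):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y,
    at averaging factor m (averaging time tau = m * sampling period)."""
    n = len(y) // m
    # Average consecutive blocks of m samples.
    blocks = y[: n * m].reshape(n, m).mean(axis=1)
    # Half the mean squared difference of successive block averages.
    return 0.5 * np.mean(np.diff(blocks) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(0.0, 1.0, 2**16)   # white (thermal-like) frequency noise
a1 = allan_variance(white, 1)
a4 = allan_variance(white, 4)
# For white frequency noise the Allan variance scales as 1/tau,
# so quadrupling the averaging factor divides it by about four.
```

Plotting allan_variance over a range of m on log-log axes exposes the slope (-1 for white frequency noise, 0 for flicker) used to separate the noise types.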
Papailiopoulou, Virginia. "Test automatique de programmes Lustre / SCADE." Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00454409.
Full text
Full text
The work in this thesis addresses the improvement of the testing process, with a view to automating test data generation as well as evaluating its quality, in the framework of reactive synchronous systems specified in Lustre/SCADE. On the one hand, we present a testing methodology using the Lutess tool, which automatically generates test input data based exclusively on the description of the environment of the system under test. On the other hand, we build on the SCADE model of the program under test and define structural coverage criteria taking into account two new aspects: the use of multiple clocks, and integration testing, allowing coverage measurement of large-sized systems. These two strategies can have a positive impact on the effective testing of real-world applications. Case studies extracted from the avionics domain are used to demonstrate the applicability of these methods and to empirically evaluate their complexity.
Molinari, Isabelle. "Test de génération de thrombine sur ACL7000 (développement d'un programme de traitement des données sur Microsoft Excel et éléments d'analyse de l'intérêt du test dans les états d'hypercoagulabilité)." Bordeaux 2, 1999. http://www.theses.fr/1999BOR23102.
Full textGuedj, Mickael. "Méthodes Statistiques pour l'Analyse de Données Génétiques d'Association à Grande Echelle." Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00169411.
Full text
After an introductory description of the main issues raised by large-scale association studies, we focus on single-marker approaches, with a power study of the main association tests and of their combinations. We then consider multi-marker approaches, with the development of an analysis method based on the Local Score statistic. It makes it possible to identify statistical associations from whole genomic regions, rather than from markers taken individually. It is a simple, fast and flexible method, whose performance we evaluate on simulated and real large-scale association data. Finally, this work also deals with the multiple-testing problem, linked to the number of tests to be performed when analysing high-throughput genetic or genomic data. The method we propose based on the Local Score takes this problem into account. We also discuss the estimation of the Local False Discovery Rate through a simple Gaussian mixture model.
All the methods described in this manuscript have been implemented in three software packages available on the website of the Statistique et Génome laboratory: fueatest, LHiSA and kerfdr.
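The multiple-testing problem discussed in the abstract above is classically handled with the Benjamini-Hochberg step-up procedure; the following Python sketch illustrates FDR control in general (it is not the Local Score or local FDR method developed in the thesis, and the p-values are made up):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Boolean rejection mask controlling the false discovery rate at alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k_max = rank
    # ... and reject exactly the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values: several pass 0.05 individually, only two survive BH.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

Here only the two smallest p-values are rejected at alpha = 0.05, even though three others fall below 0.05 individually, which is the point of multiple-testing control.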
Baghi, Quentin. "Optimisation de l’analyse de données de la mission spatiale MICROSCOPE pour le test du principe d’équivalence et d’autres applications." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEO003/document.
Full text
The Equivalence Principle (EP) is a cornerstone of General Relativity, and is called into question by attempts to build more comprehensive theories in fundamental physics, such as string theories. The MICROSCOPE space mission aims at testing this principle through the universality of free fall, with a target precision of 10^-15, two orders of magnitude better than current on-ground experiments. The satellite carries two electrostatic accelerometers on board, each one including two test-masses. The masses of the test accelerometer are made of different materials, whereas the masses of the reference accelerometer have the same composition. The objective is to monitor the free fall of the test-masses in the gravitational field of the Earth by measuring their differential accelerations, with an expected precision of 10^-12 m s^-2 Hz^-1/2 in the bandwidth of interest. An EP violation would result in a characteristic periodic difference between the two accelerations. However, various perturbations are also measured because of the high sensitivity of the instrument. Some of them are well defined, e.g. gravitational and inertial gradient disturbances, but others are unmodeled, such as random noise and acceleration peaks due to the satellite environment, which can lead to saturations in the measurement or to data gaps. This experimental context requires us to develop suitable tools for the data analysis, applicable in the general framework of linear regression analysis of time series. We first study the statistical detection and estimation of unknown harmonic disturbances in a least-squares framework, in the presence of a colored noise of unknown PSD. We show that with this technique the projection of the harmonic disturbances onto the WEP violation signal can be rejected. Secondly, we analyze the impact of data unavailability on the performance of the EP test.
We show that with the worst-case before-flight hypothesis (almost 300 gaps of 0.5 second per orbit), the uncertainty of ordinary least squares is increased by a factor of 35 to 60. To counterbalance this effect, a linear regression method based on an autoregressive estimation of the noise is developed, which allows a proper decorrelation of the available observations without direct computation and inversion of the covariance matrix. The variance of the constructed estimator is close to the optimal value, allowing us to perform the EP test at the expected level even in case of very frequent data interruptions. In addition, we implement a method to characterize the noise PSD more accurately when data are missing, with no prior model on the noise. The approach is based on a modified expectation-maximization (EM) algorithm with a smoothness assumption on the PSD, and uses a statistical imputation of the missing data. We obtain a PSD estimate with an error of less than 10^-12 m s^-2 Hz^-1/2. Finally, we widen the applications of the data analysis by studying the feasibility of measuring the Earth's gravitational gradient with MICROSCOPE data. We assess the ability of this set-up to decipher the large-scale geometry of the geopotential. By simulating the signals obtained from different models of the Earth's deep mantle, and comparing them to the expected noise level, we show that their features can be distinguished.
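The decorrelation idea above (whitening the observations with an estimated autoregressive noise model instead of inverting the full covariance matrix) can be sketched on a toy AR(1) example; the frequency, AR coefficient and gap rate below are arbitrary illustrations, not MICROSCOPE values:

```python
import numpy as np

def ar1_whitened_fit(t, y, phi, mask, freq=0.05):
    """Least-squares amplitude of a known-frequency sine in y, after
    whitening the colored noise with an assumed AR(1) coefficient phi.

    mask is True where an observation is available; whitened pairs that
    straddle a gap are simply dropped rather than re-deriving a covariance.
    """
    x = np.sin(2 * np.pi * freq * t)       # known-frequency regressor
    # Differencing against phi * previous sample turns AR(1) noise white.
    yw = y[1:] - phi * y[:-1]
    xw = x[1:] - phi * x[:-1]
    ok = mask[1:] & mask[:-1]              # both samples of the pair present
    return np.sum(xw[ok] * yw[ok]) / np.sum(xw[ok] ** 2)

rng = np.random.default_rng(2)
t = np.arange(4000.0)
noise = np.zeros_like(t)
for k in range(1, len(t)):                 # simulate AR(1) noise, phi = 0.8
    noise[k] = 0.8 * noise[k - 1] + rng.normal(0.0, 0.5)
y = 3.0 * np.sin(2 * np.pi * 0.05 * t) + noise
mask = rng.random(len(t)) > 0.05           # about 5% of the samples missing
amp = ar1_whitened_fit(t, y, 0.8, mask)    # close to the true amplitude 3.0
```

Because the whitened residuals are (approximately) uncorrelated, ordinary least squares on the surviving pairs is close to the optimal generalized least-squares estimate, which is the spirit of the method described in the abstract.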
Fouchez, Dominique. "Etude de canaux de physique non-standard au LHC : analyse des données de test d'un calorimètre plomb/fibres scintillantes." Aix-Marseille 2, 1993. http://www.theses.fr/1993AIX22003.
Full text