Dissertations on the topic "Parfums – Méthodes statistiques – Analyse"
Consult the top 50 dissertations for research on the topic "Parfums – Méthodes statistiques – Analyse".
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Jallat, Jérôme. "Mise au point d'une méthode sensorielle d'analyse descriptive des odeurs de parfumerie." Ecole nationale du génie rural, des eaux et des forêts (Paris ; Nancy ; 1968-2006), 1995. http://www.theses.fr/1995ENGR0006.
The creation of a perfume is an artisanal process: a creator, the perfumer, composes "a juice" using a greater or lesser number of natural or synthetic raw materials. The process is artistic in the sense that the perfumer uses his or her know-how and personal taste to create an original work of art. But perfume is also a product intended for consumption, which must please and be sold to the greatest number. Perfumers therefore need a tool for communicating with consumers, in order to create a juice as close as possible to market trends. Dialogue between perfumers and consumers is generally ensured by the marketing departments of perfume companies. But the lack of any real method of descriptive odor analysis leads these departments to talk about everything but the fragrance itself. It therefore appears necessary to devise a method for descriptive odor analysis to enable effective dialogue between the various players in a perfume company. Our approach is in line with this need. We carried out our work as part of a CIFRE contract, within the sensory analysis laboratory of the Bourjois company. This company was one of the first in its field to develop sensory analysis as a tool for research into fragrance perception.
Loustau, Sébastien. "Performances statistiques de méthodes à noyaux." Phd thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00343377.
Regularization methods have proven their value for solving classification problems. The Support Vector Machine (SVM) algorithm is today their most popular representative. This thesis first studies the statistical performance of this algorithm and addresses the problem of adaptivity to the margin and to complexity. These results are then extended to a new penalized empirical risk minimization procedure over Besov spaces. The last part focuses on a new model selection procedure: risk hull minimization (RHM). Introduced by L. Cavalier and Y. Golubev in the context of inverse problems, we seek to apply it to the classification setting.
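The margin/complexity trade-off studied in this thesis is easy to probe numerically: in a standard SVM implementation, the regularization constant C moves the fit between a wide margin and a complex decision boundary. A minimal sketch with assumed synthetic data, not taken from the thesis:

```python
# Sketch: effect of the SVM regularization constant C (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
for C in (0.01, 1.0, 100.0):   # small C = wide margin, large C = complex fit
    score = cross_val_score(SVC(C=C, kernel="rbf"), X, y, cv=5).mean()
    print(f"C={C:>6}: accuracy={score:.3f}")
```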
Pavoine, Sandrine. "Méthodes statistiques pour la mesure de la biodiversité." Lyon 1, 2005. http://www.theses.fr/2005LYO10230.
Allain, Guillaume. "Prévision et analyse du trafic routier par des méthodes statistiques." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/351/.
The industrial partner of this work is Mediamobile/V-trafic, a company which processes and broadcasts live road-traffic information. The goal of our work is to enhance traffic information with forecasting and spatial extension. Our approach is sometimes inspired by physical modelling of traffic dynamics, but it mainly uses statistical methods in order to propose self-organising and modular models suitable for industrial constraints. In the first part of this work, we describe a method to forecast traffic speed within a time frame of a few minutes up to several hours. Our method is based on the assumption that traffic on a road network can be summarized by a few typical profiles. Those profiles are linked to the users' periodical behaviours. We therefore make the assumption that observed speed curves at each point of the network stem from a probabilistic mixture model. The following parts of our work present how we can refine the general method. Medium-term forecasting uses variables built from the calendar; the mixture model still stands. Additionally, we use a functional regression model to forecast speed curves. We then introduce a local regression model in order to simulate short-term traffic dynamics. The kernel function is built from real speed observations and we integrate some knowledge about traffic dynamics. The last part of our work focuses on the analysis of speed data from vehicles in traffic. These observations are gathered sporadically in time and along the road segment. The resulting data is completed and smoothed by local polynomial regression.
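The "few typical profiles" idea can be sketched with an off-the-shelf mixture model: each mixture component mean is one typical daily speed curve. The data shapes and the two synthetic profiles below are hypothetical illustrations, not the thesis code:

```python
# Sketch: clustering daily speed curves into typical profiles with a mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 96)                                 # quarter-hour grid
free_flow = 90 - 5 * np.sin(t / 24 * 2 * np.pi)            # quiet-day profile
commuting = (90 - 35 * np.exp(-((t - 8) ** 2) / 2)         # morning rush
                - 30 * np.exp(-((t - 18) ** 2) / 2))       # evening rush
days = np.vstack([free_flow + rng.normal(0, 3, (100, 96)),
                  commuting + rng.normal(0, 3, (100, 96))])

gm = GaussianMixture(n_components=2, covariance_type="diag",
                     random_state=0).fit(days)
profiles = gm.means_            # (2, 96): the typical speed profiles
labels = gm.predict(days)      # which profile each day follows
print(profiles.shape, np.bincount(labels))
```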
Mestre, Olivier. "Méthodes statistiques pour l'homogénéisation de longues séries climatiques." Toulouse 3, 2000. http://www.theses.fr/2000TOU30165.
Gu, Co Weila Vila. "Méthodes statistiques et informatiques pour le traitement des données manquantes." Phd thesis, Conservatoire national des arts et metiers - CNAM, 1997. http://tel.archives-ouvertes.fr/tel-00808585.
Frikha, Mohamed. "Analyse économétrique de la dette extérieure de la Tunisie." Paris 2, 1991. http://www.theses.fr/1991PA020045.
The objective of this thesis is to propose an econometric model that allows the determination of the mechanisms connecting external financing to the Tunisian economy. Its purpose is to describe the relationship between foreign capital and the macroeconomic variables in order to conceive economic policies that may be used for a better management of the foreign debt. This model results from a confrontation between theories and empirical observation. The analysis of the facts and of the existing models, as well as their underlying theoretical bases, has enabled us to determine the theoretical structure of the model. This is a structural dynamic model based on simultaneous equations. The estimates and first conclusions of the model demonstrate to us that the policy of indebtedness in Tunisia must fit in with a global choice of economic policy based on promoting exports, making the most of savings, the profitability of projects and balanced trade.
Colin, Igor. "Adaptation des méthodes d’apprentissage aux U-statistiques." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0070/document.
With the increasing availability of large amounts of data, computational complexity has become a keystone of many machine learning algorithms. Stochastic optimization algorithms and distributed/decentralized methods have been widely studied over the last decade and provide increased scalability for optimizing an empirical risk that is separable in the data sample. Yet, in a wide range of statistical learning problems, the risk is accurately estimated by U-statistics, i.e., functionals of the training data with low variance that take the form of averages over d-tuples. We first tackle the problem of sampling for the empirical risk minimization problem. We show that empirical risks can be replaced by drastically computationally simpler Monte Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the learning rate. We establish uniform deviation results, and numerical examples show that such an approach surpasses more naive subsampling techniques. We then focus on the decentralized estimation topic, where the data sample is distributed over a connected network. We introduce new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds with explicit data- and network-dependent terms. Finally, we deal with the decentralized optimization of functions that depend on pairs of observations. Similarly to the estimation case, we introduce a method based on concurrent local updates and data propagation. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term. Our simulations illustrate the practical interest of our approach.
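The incomplete U-statistic idea is simple to illustrate: replace the average over all O(n²) pairs by a Monte Carlo average over O(n) randomly sampled pairs. A rough sketch on a toy kernel (the Gini mean difference), not the thesis implementation:

```python
# Sketch: complete vs incomplete U-statistic of degree 2 (illustrative only).
import numpy as np

def complete_u_stat(x, kernel):
    n = len(x)
    i, j = np.triu_indices(n, k=1)          # all n(n-1)/2 pairs
    return kernel(x[i], x[j]).mean()

def incomplete_u_stat(x, kernel, n_terms, rng):
    n = len(x)
    i = rng.integers(0, n, n_terms)         # O(n) randomly drawn pairs
    j = rng.integers(0, n, n_terms)
    keep = i != j                           # discard degenerate pairs
    return kernel(x[i[keep]], x[j[keep]]).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
gini_kernel = lambda a, b: np.abs(a - b)    # kernel of the Gini mean difference

print(complete_u_stat(x, gini_kernel))              # ~2e6 pairs
print(incomplete_u_stat(x, gini_kernel, len(x), rng))  # ~2e3 pairs
```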
Ansaldi, Nadine. "Contributions des méthodes statistiques à la quantification de l'agrément de conduite." Université de Marne-la-Vallée, 2002. http://www.theses.fr/2002MARN0137.
Quantifying a customers' drivability requirement consists in determining measurable representation criteria of the customers' feel of drivability, as well as ranges of optimal values (tolerance intervals) for each criterion. In this document, a methodology for quantifying customers' requirements is presented. All steps, from planning the experiments to validating the representation criteria and their tolerance intervals, are described. The customers' feel is the result of the stimuli they receive, which can be recorded by physical measurements. We suggest extracting from the measurements an as comprehensive as possible list of potential criteria, by means of an automatic tool for generation and extraction. The quantification process also requires sensory assessment of the customers' feel of drivability. If a paired-comparisons method is used, the preference analysis can be carried out by Multi-Dimensional Scaling models. In order to determine the criteria (quantitative data) which are representative of customers' feel (qualitative datum), a discriminant analysis based on the L1 criterion and level grouping is proposed. Once the representation criteria are identified, a tolerance computation tool is proposed for determining the ranges of values for each criterion in order to obtain the best possible customers' feel. Finally, results of this methodology are presented for quantifying the customers' feel of "take-off" for a robotized gearbox powertrain.
Guedj, Mickaël. "Méthodes Statistiques pour l’analyse de données génétiques d’association à grande échelle." Evry-Val d'Essonne, 2007. http://www.biblio.univ-evry.fr/theses/2007/2007EVRY0015.pdf.
The increasing availability of dense Single Nucleotide Polymorphism (SNP) maps, due to rapid improvements in molecular biology and genotyping technologies, has recently led geneticists towards genome-wide association studies, with hopes of encouraging results concerning our understanding of the genetic basis of complex diseases. The analysis of such high-throughput data implies new statistical and computational problems to face, which constitute the main topic of this thesis. After a brief description of the main questions raised by genome-wide association studies, we deal with single-marker approaches through a power study of the main association tests. We then consider the use of multi-marker approaches by focusing on the method we developed, which relies on the local score. Finally, this thesis also deals with the multiple-testing problem: our local-score-based approach circumvents this problem by reducing the number of tests; in parallel, we present an estimation of the local False Discovery Rate by a simple Gaussian mixture model.
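The closing idea, estimating a local False Discovery Rate from a simple Gaussian mixture on test statistics, can be sketched as follows; the z-scores and the two-component fit are purely illustrative assumptions, not the thesis procedure:

```python
# Sketch: local FDR from a two-component Gaussian mixture on z-scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1, 9000),      # null markers
                    rng.normal(3, 1, 1000)])     # truly associated markers

gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
null = int(np.argmin(np.abs(gm.means_.ravel())))          # component nearest 0
local_fdr = gm.predict_proba(z.reshape(-1, 1))[:, null]   # P(null | z)
print((local_fdr < 0.2).sum(), "markers called at local FDR < 0.2")
```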
Skalli, Housseini Abdelhalim. "Validation de l'analyse discriminante barycentrique : codage en analyse des données : contribution à l'économie, à l'écologie, à la neurophysiologie et à la cytologie." Paris 6, 1987. http://www.theses.fr/1987PA066627.
Marchaland, Catherine. "Analyse statistique d'un tableau de notes : comparaisons d'analyses factorielles." Paris 5, 1987. http://www.theses.fr/1987PA05H123.
Bourbousson, Pascal-Mousselard Hélène. "Etude de l'arôme de Mourvèdre : essai de classification par les méthodes statistiques multidimensionnelles." Aix-Marseille 3, 1992. http://www.theses.fr/1992AIX30001.
Vallee, Polneau Sandrine. "Applications des outils de statistiques classiques et exactes en diabétologie." Paris 5, 2003. http://www.theses.fr/2003PA05P640.
Statistics is essential in biology to analyse data. There are many statistical analysis tools, but the difficulty is to choose the one best suited to the question asked. Developments in computing have made calculations easier, and it has become possible to perform exact statistical tests. In the first part, about exact statistics, we calculated the errors due to the approximation of discrete laws by the Gaussian law in different situations and compared the results in two contexts: approximated and exact. Our results show a non-linear rise in precision with the sample size. The difference in precision between approximated and exact methods is low. The aim of the second part was to analyse age-related hemoglobin A1c usual values and the influence of body mass index on HbA1c values in impaired glucose tolerance subjects. We show that age, but not body mass index, is one of the factors that can explain hemoglobin A1c variations in the population.
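The approximated-versus-exact contrast discussed in the first part can be reproduced on a toy case. A sketch comparing an exact binomial p-value with its Gaussian approximation (sample size and counts are hypothetical):

```python
# Sketch: exact binomial test vs its normal approximation (illustrative only).
import numpy as np
from scipy import stats

n, k, p0 = 40, 28, 0.5
# Exact one-sided p-value P(X >= k) under the binomial law.
p_exact = stats.binom.sf(k - 1, n, p0)
# Gaussian approximation with continuity correction.
mu, sigma = n * p0, np.sqrt(n * p0 * (1 - p0))
p_approx = stats.norm.sf((k - 0.5 - mu) / sigma)
print(f"exact={p_exact:.4f}  approx={p_approx:.4f}  gap={abs(p_exact - p_approx):.4f}")
```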
Poulin, Leboeuf Laurence. "Analyse statistique des facteurs climatiques et géomorphologiques associés aux mouvements de terrain dans les argiles des mers post-glaciaires au Québec méridional." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/66412.
Thomas, Frédéric. "Analyse méthodologique de la rente touristique." Nice, 2003. http://www.theses.fr/2003NICE0060.
The general objective of this research is to develop a generalizable methodology for evaluating the distribution of the rent of a tourist event, based mainly on bank deposits and credits of the Bank of France, and also to show its sectoral, geographical and temporal spillovers. This research falls within the theoretical framework of the sustainable development concept; this is why we stick to the notion of rent and not to that of impact. To achieve this, it appears necessary to recall the social, technological and industrial (concentration, segmentation, specialization) evolutions of the tourist phenomenon, i.e., its aspects at public and private levels. This not only highlights the past and present weak awareness of the value of the factorial endowment of a qualitative nature within political and entrepreneurial decisions, but also in the field of economic evaluation. A review of the various economic models of tourism underlines the difficulties of integrating these aspects into the models, and likewise of measuring the distribution of the tourist rent. If the safeguarding of natural and cultural assets can possibly slow down the speed of returns on investment, it remains a major factor of development. In a holistic approach, recommended within the framework of the analysis of tourism activity, an inter- and multi-disciplinary methodology represents the most suitable method to study the sustainability of the tourist activity, its repercussions and its tangible and intangible specificities.
Royer, Jean-Jacques. "Analyse multivariable et filtrage des données régionalisées." Vandoeuvre-les-Nancy, INPL, 1988. http://www.theses.fr/1988NAN10312.
Urieli, Assaf. "Analyse syntaxique robuste du français : concilier méthodes statistiques et connaissances linguistiques dans l'outil Talismane." Phd thesis, Université Toulouse le Mirail - Toulouse II, 2013. http://tel.archives-ouvertes.fr/tel-00979681.
Niang, Ibrahima. "Quantification et méthodes statistiques pour le risque de modèle." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1015/document.
In finance, model risk is the risk of loss resulting from using models. It is a complex risk which covers many different situations, especially estimation risk and risk of model misspecification. This thesis focuses on model risk inherent in yield and credit curve construction methods, and on the analysis of the consistency of Sobol indices with respect to stochastic ordering of model parameters. It is divided into three chapters. Chapter 1 focuses on model risk embedded in yield and credit curve construction methods. We analyse in particular the uncertainty associated with the construction of yield curves or credit curves. In this context, we derive arbitrage-free bounds for discount factors and survival probabilities at the most liquid maturities. In Chapter 2 of this thesis, we quantify the impact of parameter risk through global sensitivity analysis and stochastic orders theory. We analyse in particular how Sobol indices are transformed following an increase of parameter uncertainty with respect to the dispersive or excess wealth orders. Chapter 3 of the thesis focuses on the contrast quantile index. We link the latter with the risk measure CTE, and then analyse in which circumstances an increase of parameter uncertainty in the sense of the dispersive or excess wealth orders implies an increase of the contrast quantile index. We finally propose an estimation procedure for this index. We prove under some conditions that our estimator is consistent and asymptotically normal.
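First-order Sobol indices such as those studied in Chapter 2 can be estimated by the classical pick-freeze Monte Carlo scheme. A sketch on a toy three-parameter function, not one of the financial models of the thesis:

```python
# Sketch: first-order Sobol indices by pick-freeze Monte Carlo (illustrative only).
import numpy as np

def model(x):
    # Hypothetical Ishigami-type test function of three parameters.
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
a = rng.uniform(-np.pi, np.pi, (n, d))
b = rng.uniform(-np.pi, np.pi, (n, d))

ya = model(a)
var_y = ya.var()
for i in range(d):
    mixed = b.copy()
    mixed[:, i] = a[:, i]                  # "freeze" coordinate i, resample the rest
    v_i = np.mean(ya * model(mixed)) - ya.mean() ** 2
    print(f"S_{i + 1} ~ {v_i / var_y:.3f}")
```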
Archimbaud, Aurore. "Méthodes statistiques de détection d’observations atypiques pour des données en grande dimension." Thesis, Toulouse 1, 2018. http://www.theses.fr/2018TOU10001/document.
The unsupervised detection of outliers is a crucial issue in statistics. More specifically, in the industrial context of fault detection, this task is of great importance for ensuring high-quality production. With the exponential increase in the number of measurements on electronic components, the concern of high-dimensional data arises in the identification of outlying observations. The ippon innovation company, an expert in industrial statistics and anomaly detection, wanted to deal with this new situation, and so collaborated with the TSE-R research laboratory by financing this thesis work. The first chapter presents the quality-control context and the different procedures mainly used in the automotive industry of semiconductors. However, these practices do not meet the new expectations required in dealing with high-dimensional data, so other solutions need to be considered. The remainder of the chapter summarizes unsupervised multivariate methods for outlier detection, with a particular emphasis on those dealing with high-dimensional data. Chapter 2 demonstrates that the well-known Mahalanobis distance presents some difficulties in detecting outlying observations that lie in a smaller subspace when the number of variables is large. In this context, the Invariant Coordinate Selection (ICS) method is introduced as an interesting alternative for highlighting the structure of outlierness. A methodology for selecting only the relevant components is proposed. A simulation study provides a comparison with benchmark methods. The performance of our proposal is also evaluated on real industrial data sets. This new procedure has been implemented in an R package, ICSOutlier, presented in Chapter 3, and in an R Shiny application (package ICSShiny) that makes it more user-friendly. When the number of dimensions increases, the multivariate scatter matrices turn out to be singular as soon as some variables are collinear or if their number exceeds the number of individuals. However, in the presentation of ICS by Tyler et al. (2009), the scatter estimators are defined as positive definite matrices. Chapter 4 proposes three different ways of adapting the ICS method to singular scatter matrices and theoretically investigates their properties. The question of affine invariance is analyzed in particular. Finally, the last chapter is dedicated to the algorithm developed for the company. Although the algorithm is confidential, the chapter presents the main ideas and the challenges, mostly numerical, encountered during its development.
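The Mahalanobis-distance baseline that ICS is compared against can be sketched in a few lines; the data, contamination and chi-square cutoff below are illustrative assumptions, not the thesis procedure or the ICSOutlier package:

```python
# Sketch: multivariate outlier flagging via robust Mahalanobis distances.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:10] += 4                                   # 10 shifted observations = outliers

mcd = MinCovDet(random_state=0).fit(X)        # robust location and scatter
d2 = mcd.mahalanobis(X)                       # squared Mahalanobis distances
cutoff = chi2.ppf(0.975, df=X.shape[1])       # chi-square quantile threshold
print(np.flatnonzero(d2 > cutoff)[:15])       # indices of flagged observations
```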
Liu, Zhanhao. "Méthodes statistiques et variationnelles de modélisation préalable au contrôle de procédés industriels." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0086.
***Part 1 of the thesis is subject to unlimited confidentiality*** In this thesis, we aim to propose a methodology for analyzing industrial processes based on a large quantity of process data, with statistical and variational methods. The objective is to identify the key factors which ensure the good functioning of industrial processes. It is a preliminary step towards the automatic generation of a process control law from the collected data. In the first part of this thesis, we presented the statistical analysis of a process of Saint-Gobain. At first, we applied some classical statistical tools (principal component analysis, clustering and so on) to the process data, and linked the obtained information with the functioning of the process. Then, we analyzed a product quality measure (called the target) together with the collected process parameters. The target is weakly correlated with the parameters, so the hypothesis of a linear model is rejected. A restricted list of parameters which contribute to the explanation of the target was identified by the statistical methods and validated by our industrial interlocutors. Afterwards, we tested a non-linear modelling method: the generalized additive model (GAM). The introduction of the non-linear terms improved the performance of our model, but it remained insufficient for the intended applications. Following the intuition of the process engineers and operators, we focused on a noisy signal, tracked regularly in the plants, characterizing the good functioning of the process; restoring the missing information in this signal may improve the model. In the second part of this thesis, we developed a total variation restoration method with an automatic choice of the hyper-parameter. Our proposed hyper-parameter performs similarly to the existing methods, and our joint estimation of hyper-parameter and restoration is well suited to the real-time processing of a large quantity of data.
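Total-variation restoration itself is a standard building block. A minimal sketch on a synthetic piecewise-constant signal, using scikit-image's Chambolle solver with a hand-picked weight (the thesis's contribution is precisely an automatic choice of this hyper-parameter):

```python
# Sketch: TV restoration of a noisy 1-D signal (illustrative, fixed weight).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3, 0.8], 250)    # piecewise-constant "process signal"
noisy = clean + rng.normal(0, 0.15, clean.size)

restored = denoise_tv_chambolle(noisy, weight=0.3)
print(f"RMSE noisy={np.sqrt(np.mean((noisy - clean) ** 2)):.3f}  "
      f"restored={np.sqrt(np.mean((restored - clean) ** 2)):.3f}")
```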
Achouch, Ayman. "Analyse économétrique des prix des métaux : une application multivariée des méthodes statistiques aux séries temporelles." Montpellier 1, 1998. http://www.theses.fr/1998MON10025.
Datcu, Octaviana. "Méthodes de chiffrement/déchiffrement utilisant des systèmes chaotiques : Analyse basée sur des méthodes statistiques et sur la théorie du contrôle des systèmes." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00802659.
Varet, Suzanne. "Développement de méthodes statistiques pour la prédiction d'un gabarit de signature infrarouge." Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00511385.
Arbache, Chafik. "Méthodes statistiques et informatiques d'aides à la décision alternative à la régression et à la discrimination." Paris 6, 1986. http://www.theses.fr/1986PA066008.
Paris, Adéline. "Contrôle de qualité des anodes de carbone à partir de méthodes statistiques multivariées." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/40352.
Primary aluminum is produced through the Hall-Héroult process. Carbon anodes are used in this electrolytic process to provide the carbon source for the reaction and to distribute electrical current across the cells. Anode quality influences cell performance. However, increasing raw material variability has rendered the production of high-quality anodes more difficult. The objective of this project is to improve carbon anode quality control before baking by using anode electrical resistivity measurements. Multivariate statistical methods were applied to create two types of models: predictive and explanatory. For a given aggregate, the optimum pitch demand (OPD) is the amount of pitch that yields the best anode properties. High raw material variability causes the OPD to change more frequently, which makes it difficult to add the correct amount of pitch. This can lead to post-baking sticking problems when the optimum is exceeded. A soft sensor was developed based on principal component analysis (PCA). The integrity of the correlation structure, as measured by the Squared Prediction Error (SPE), appears to break down during high-risk periods for anode sticking. The soft sensor was also tested on data collected during pitch optimization experiments. A sequential multi-block PLS model (SMB-PLS) was developed to determine which parameters influence anode resistivity. Raw material properties, anode formulation and process parameters collectively explain 54 % of the variability in the anode resistivity measurements. The model shows that coke and pitch properties have the greatest impact on green anode electrical resistivity. In addition, the main relationships between process variables implied by the model agree with the relevant literature and process knowledge.
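The SPE-based soft sensor idea can be sketched with an ordinary PCA: observations whose residual after projection on the retained components is large break the learned correlation structure. The data below are synthetic stand-ins for the anode measurements, not the project's data or model:

```python
# Sketch: PCA soft sensor with a Squared Prediction Error control limit.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical training data: 300 anodes x 8 correlated process variables.
latent = rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(300, 8))

pca = PCA(n_components=2).fit(X)

def spe(x):
    # Squared prediction error = squared residual after PCA reconstruction.
    r = x - pca.inverse_transform(pca.transform(x))
    return np.sum(r ** 2, axis=1)

limit = np.quantile(spe(X), 0.99)        # empirical 99 % control limit
x_new = rng.normal(size=(5, 8))          # new anodes breaking the correlation
print(spe(x_new) > limit)                # True = abnormal correlation structure
```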
Baumont, Catherine. "Contribution à l'analyse des espaces urbains multicentriques : la localisation résidentielle : étude théorique et empirique." Dijon, 1990. http://www.theses.fr/1990DIJOE004.
The thesis is divided into three parts. The first part is devoted to the integration of multicenter urban spaces into spatial analysis. Both fuzzy and non-fuzzy characteristics of these spaces are taken into account. In the second part we try to solve the problem of the spatial equilibrium of the household in a multicenter urban pattern, and we construct two models: a standard model and a fuzzy model. Then in the third part we present an econometric study based on the models described in the second part. Dijon is the urban area chosen for the test. The fuzzy approach allows us to bring an interesting economic solution to household location in multicenter urban spaces.
Zeng, Shan. "Comparaison et analyse statistique des propriétés nuageuses dérivées des instruments POLDER et MODIS dans le cadre de l’expérience spatiale A-Train." Thesis, Lille 1, 2011. http://www.theses.fr/2011LIL10067/document.
The A-Train observations provide an unprecedented opportunity for synchronous monitoring of the entire atmosphere, including clouds, at the global scale. In this study we present a statistical analysis and comparisons of cloud cover, thermodynamic phase and cloud optical thickness derived mainly from the coincident POLDER (POLarization and Directionality of the Earth's Reflectances) and MODIS (MODerate Resolution Imaging Spectroradiometer) sensors in the A-Train constellation. We first present the results of an extensive study of the regional and seasonal variations of cloud cover from POLDER and MODIS and discuss the possible factors leading to differences between them, among which are the spatial resolution, aerosols, cirrus and particular surfaces. Cloud-top phase products are then compared and discussed in view of the cloud vertical structure and optical properties derived simultaneously from collocated CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization, another A-Train member) observations, which allow us to identify and qualify potential biases present in the three considered datasets. Among those, we discuss the impact of viewing geometries, thin cirrus, aerosols, snow/ice surfaces, multilayer and fractional cloud cover on global statistics of cloud phase derived from POLDER and MODIS passive measurements. Based on these analyses we selected cloud retrievals of high confidence to study the global and regional vertical ice-water transition and the variations of this transition with cloud formation and development regimes, particularly the impact of large-scale dynamics and cloud microphysics. Cloud optical thicknesses were finally studied. The impacts of spatial resolution, cloud microphysics and heterogeneity are discussed mainly for the understanding of the significant biases on optical thickness from the two sensors.
Pellay, François-Xavier. "Méthodes d'estimation statistique de la qualité et méta-analyse de données transcriptomiques pour la recherche biomédicale." Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10058/document.
To understand the biological phenomena taking place in a cell under physiological or pathological conditions, it is essential to know the genes that it expresses. Measuring genetic expression can be done with DNA chip technology, on which are set out thousands of probes that can measure the relative abundance of the genes expressed in the cell. The microarrays called pangenomic are supposed to cover all existing protein-coding genes, that is to say currently around thirty thousand for human beings. The measurement, analysis and interpretation of such data pose a number of problems, and the analytical methods used will determine the reliability and accuracy of the information obtained with microarray technology. The aim of this thesis is to define methods to control measurements, improve the analysis and deepen the interpretation of microarrays in order to optimize their utilization, and to apply these methods to the transcriptome analysis of juvenile myelomonocytic leukemia patients, to improve diagnosis and understand the biological mechanisms behind this rare disease. We thereby developed and validated, through several independent studies, a quality-control program for microarrays, ace.map QC, a software tool that improves biological interpretation of microarray data based on gene ontologies, and a visualization tool for the global analysis of signaling pathways. Finally, combining the different approaches described, we developed a method to obtain reliable biological signatures for diagnostic purposes.
Vo-Van, Claudine. "Analyse de données pharmacocinétiques fragmentaires : intégration dans le développement de nouvelles molécules." Paris 5, 1994. http://www.theses.fr/1994PA05P044.
Abrard, Frédéric. "Méthodes de séparation aveugle de sources et applications : des statistiques d'ordre supérieur à l'analyse temps-fréquence." Toulouse 3, 2003. http://www.theses.fr/2003TOU30137.
Bailleul, Marc. "Analyse statistique implicative : variables modales et contribution des sujets : application a la modelisation de l'enseignant dans le systeme didactique." Rennes 1, 1994. http://www.theses.fr/1994REN10061.
Pires, Sandrine. "Application des méthodes multi-échelles aux effets de lentille gravitationnelles faibles : reconstruction et analyse des cartes de matière noire." Paris 11, 2008. http://www.theses.fr/2008PA112256.
Modern cosmology refers to the physical study of the Universe and is based on a cosmological model that deals with the structure of the Universe, its origins and its evolution. Over the last decades, observations have provided evidence for both dark matter and dark energy. The Universe has been found to be dominated by these two components, whose composition remains a mystery. Weak gravitational lensing provides a direct way to probe dark matter and can be used to map the dark matter distribution. Furthermore, the weak lensing effect is believed to be the most promising tool to understand the nature of dark matter and dark energy and thus to constrain the cosmological model. New weak lensing surveys, more and more accurate, are already planned that will cover a large fraction of the sky. But a significant effort should be made to improve the current analyses. In this thesis, in order to improve weak lensing data processing, we suggest using new methods of analysis: multiscale methods, which make it possible to transform a signal in a way that facilitates its analysis. We have been interested in the reconstruction and analysis of dark matter mass maps. First, we developed a new method to deal with missing data. Second, we suggested a new filtering method that improves the dark matter mass map reconstruction. Last, we introduced a new statistic to set tighter constraints on the cosmological model.
Arciniegas, Mosquera Andrés Felipe. "Analyse de méthodes statistiques en traitement du signal pour les tomographies acoustique et ultrasonore des arbres sur pied." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4746/document.
Acoustic tomography is an imaging technique used to produce two-dimensional mappings of the radial-transverse plane of trees, based on the velocity (or the slowness) of low-frequency elastic waves (<20 kHz). The images currently obtained with commercial devices have a low spatial resolution (of the order of a few centimeters) and are difficult to interpret. This resolution is limited by the use of low-frequency waves, the low number of sensors and the fact that the properties of wood (anisotropy, heterogeneity) are not taken into account. To date, there are no field devices using ultrasound specially adapted for standing-tree imaging. Taking into account the limitations mentioned previously, we present two studies that aim to improve the quality of acoustic and ultrasonic tomography images. In the first part of this work we compare signal-processing methods for the measurement of the propagation time and we specify experimental limits of validity. The approach developed made it possible to choose the signal-processing methods by characterizing their systematic and random errors associated with the noise level. In the second part of this work, a numerical study of the robustness of reconstruction algorithms is proposed. Two new reconstruction algorithms are presented and compared to two conventional algorithms used in commercial devices. This comparison is based on criteria related to experimental constraints (low number of sensors and noisy measurements) and technical requirements (low computation time) for use in the field.
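Among classical propagation-time estimators, cross-correlation of the emitted and received signals is a natural baseline. A toy sketch, where the sampling rate, pulse shape and noise level are assumptions rather than the thesis settings:

```python
# Sketch: time-of-flight estimation by cross-correlation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000                                  # 1 MHz sampling (hypothetical)
t = np.arange(2000) / fs
pulse = np.sin(2 * np.pi * 50_000 * t) * np.exp(-((t - 2e-4) / 5e-5) ** 2)

delay_true = 120                                # true time of flight, in samples
emitted = pulse + 0.05 * rng.normal(size=t.size)
received = np.roll(pulse, delay_true) + 0.05 * rng.normal(size=t.size)

xc = np.correlate(received, emitted, mode="full")
delay_est = np.argmax(xc) - (t.size - 1)        # lag of the correlation peak
print(delay_est, "samples ->", delay_est / fs * 1e6, "µs")
```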
Wehrlé, Pascal. "Aspects des analyses multifactorielles et des plans d'expériences appliqués à l'optimisation et à la validation de formes et de procédés galéniques : étude de la lubrification d'un comprimé soluble, étude du procédé de granulation humide." Paris 11, 1990. http://www.theses.fr/1990PA114827.
Boudin, Florian. "Exploration d'approches statistiques pour le résumé automatique de texte." Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00419469.
We propose a first approach for producing summaries in the specialized domain of organic chemistry. A prototype named YACHS was developed to demonstrate the viability of our approach. This system consists of two modules: the first applies a specific linguistic pre-processing in order to take into account the specificity of organic chemistry documents, while the second selects and assembles sentences on the basis of statistical criteria, some of which are domain-specific. We then propose an approach addressing the problem of topic-oriented multi-document summarization. We detail the adaptations made to the generic summarization system Cortex, as well as the results observed on the data of the DUC evaluation campaigns. The results obtained by the LIA submissions to the DUC 2006 and DUC 2007 evaluation campaigns are discussed. We finally propose two methods for generating update summaries. The first, a so-called maximization-minimization approach, was evaluated through participation in the DUC 2007 pilot task. The second method is inspired by Maximal Marginal Relevance (MMR); it was evaluated through several submissions to the TAC 2008 campaign.
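The MMR criterion that inspires the second method greedily trades query relevance against redundancy with already-selected sentences. A toy sketch, in which the sentences, query and λ = 0.7 weighting are illustrative assumptions, not the LIA system:

```python
# Sketch: Maximal Marginal Relevance sentence selection on toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Dark matter dominates the mass of galaxies.",
    "Galaxy masses are dominated by dark matter.",      # redundant with the first
    "Weak lensing maps the dark matter distribution.",
    "The conference takes place in Avignon.",
]
query = "dark matter mapping"

tfidf = TfidfVectorizer().fit(sentences + [query])
S = tfidf.transform(sentences)
q = tfidf.transform([query])
rel = cosine_similarity(S, q).ravel()                   # relevance to the query

lam, selected, candidates = 0.7, [], list(range(len(sentences)))
while candidates and len(selected) < 2:
    red = (cosine_similarity(S[candidates], S[selected]).max(axis=1)
           if selected else np.zeros(len(candidates)))  # redundancy penalty
    mmr = lam * rel[candidates] - (1 - lam) * red
    selected.append(candidates.pop(int(np.argmax(mmr))))
print([sentences[i] for i in selected])
```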
Laignelet, Marion. "Associer analyse syntaxique et analyse discursive pour le repérage automatique d’informations potentiellement obsolescentes dans des documents encyclopédiques." Toulouse 2, 2009. https://tel.archives-ouvertes.fr/tel-00461579.
The question of document updating arises in many areas. It is central to the field of encyclopaedia publishing: encyclopaedias must be constantly checked in order not to put forward wrong or time-altered information. We describe the implementation of a prototype of an aid to updating. Its aim is to automatically locate zones of text in which information might be obsolescent. The method we propose takes into account various linguistic and discursive cues calling on different levels of analysis. As obsolescence is a non-linguistic phenomenon, our hypothesis is that linguistic and discursive cues must be considered in terms of combinations. Our corpus is first manually annotated by experts for zones of obsolescence. We then apply automatic tagging of a large number of linguistic, discursive and structural cues onto the annotated corpus. A machine learning system is then implemented to bring out relevant cue configurations in the obsolescent segments characterized by the experts. Both our objectives have been achieved: we propose a detailed description of obsolescence in our corpus of encyclopaedic texts as well as a prototype aid to updating. A double evaluation was carried out: by cross-validation on the corpus used for machine learning, and by experts on a test corpus. Results are encouraging. They lead us to an evolution of the definition of obsolescent segments, first on the basis of the "discoveries" emerging from our corpora, and also through interaction with the needs of the experts with respect to an aid to updating. The results also show limits in the automatic tagging of the linguistic and discursive cues.
Thirion, Bertrand. "Analyse de données d' IRM fonctionnelle : statistiques, information et dynamique." Phd thesis, Télécom ParisTech, 2003. http://tel.archives-ouvertes.fr/tel-00457460.
Jarrah, Adil. "Développement de méthodes statistiques et probabilistes en corrosion par piqûres pour l'estimation de la profondeur maximale : application à l'aluminium A5." Paris, ENSAM, 2009. http://www.theses.fr/2009ENAM0024.
Pitting corrosion is one of the most prevalent forms of corrosion. It affects all materials and takes place in a very important economic context. Pits can appear locally over a structure and lead to its deterioration, particularly in the presence of mechanical loads. The stochastic aspect of the phenomenon has led to the development of statistical methods in order to characterize it. This characterization is often done through the estimation of the maximum pit depth, with the aim of assessing the risk of perforation. For this purpose, the method of Gumbel is often used. The objective of this work is to check the conditions of application of this method, notably independence, and to compare it with approaches based on the generalized extreme value distribution and on peaks over threshold. The condition of independence is verified using spatial processes. An adaptation of spectral analysis to the context of pitting corrosion is also proposed. The comparison between the approaches is based on numerical simulations whose parameters come from experiments.
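The Gumbel method named above fits an extreme value distribution to maximum pit depths and extrapolates return levels from it. A sketch with simulated depths, where all numbers are hypothetical:

```python
# Sketch: Gumbel (block-maxima) estimation of maximum pit depth.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# Hypothetical data: maximum pit depth (µm) on each of 50 small coupons.
block_maxima = rng.exponential(20, size=(50, 200)).max(axis=1)

loc, scale = gumbel_r.fit(block_maxima)
# Depth expected to be exceeded on one coupon out of 1000 (return level).
depth_1000 = gumbel_r.ppf(1 - 1 / 1000, loc, scale)
print(f"mu={loc:.1f} µm  beta={scale:.1f} µm  d(T=1000)={depth_1000:.1f} µm")
```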
Paris, Nicolas. "Formalisation algorithmique des classements au tennis : mise en perspective longitudinale par simulation probabiliste." Bordeaux 2, 2008. http://www.theses.fr/2008BOR21603.
Viaud, Gautier. "Méthodes statistiques pour la différenciation génotypique des plantes à l’aide des modèles de croissance." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC020/document.
Plant growth models can be used to predict quantities of interest or to assess the genotypic variability of a population of plants; this dual use is emphasized throughout this work. Three plant growth models are considered (LNAS for sugar beet and wheat, GreenLab for Arabidopsis thaliana) within the mathematical framework of general state-space models. A new generic computing platform for modelling and statistical inference (ADJUSTIN') has been developed in Julia, allowing the simulation of the plant growth models considered as well as the use of state-of-the-art estimation techniques such as Markov chain Monte Carlo and sequential Monte Carlo methods. Statistical inference within plant growth models is of primary importance for concrete applications such as yield prediction; parameter and state estimation methods within general state-space models in a Bayesian framework were first studied, and several case studies for the plants considered are then investigated in the case of an individual plant. The characterization of the variability of a population of plants is envisioned through the distributions of parameters using Bayesian hierarchical models. As this approach requires the acquisition of numerous data for each individual, a segmentation-tracking algorithm for the analysis of images of Arabidopsis thaliana, obtained thanks to the Phenoscope, a high-throughput phenotyping platform of INRA Versailles, is proposed. Finally, the interest of using Bayesian hierarchical models to evidence the variability of a population of plants is discussed, first through the study of different scenarios on simulated data, and then by using the experimental data acquired via image analysis for the population of Arabidopsis thaliana comprising 48 individuals.
Persyn, Elodie. "Analyse d’association de variants génétiques rares dans une population démographiquement stable." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT1016/document.
Genome-wide association studies have identified many common risk alleles for a wide variety of complex diseases. However, these common variants explain only a very small part of the heritability. One hypothesis is the presence of rare genetic variants with stronger effects. Testing the association of those rare variants is challenging due to their low frequency in populations. Many statistical methods have been developed around the strategy of aggregating the information for a group of rare variants. This thesis aims to compare the main strategies through simulations under various genetic scenarios and application to real sequencing data. We also developed a statistical test, called DoEstRare, which can detect clustered disease-risk variants in local genetic regions by comparing position distributions between cases and controls. Moreover, it has been shown that population stratification represents a confounding factor in the interpretation of rare-variant analyses. With the recruitment of controls, in the context of projects such as French Exome and VACARME, it is necessary to assess the impact of a very fine geographical structure (France) on the different statistical strategies. The second part of this thesis consists in estimating this impact by simulating fine-scale population structures.
Philibert, Aurore. "Méthodes de méta-analyse pour l'estimation des émissions de N2O par les sols agricoles." Phd thesis, AgroParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00913760.
Latasa, Itxaro. "Enseignement de la statistique pour des géographes : analyse des problèmes et propositions de solutions : le cas de l'Université espagnole." Aix-Marseille 1, 2005. http://www.theses.fr/2005AIX10052.
Vera, Carine. "Modèles linéaires mixtes multiphasiques pour l'analyse de données longitudinales : Application à la croissance des plantes." Montpellier 2, 2004. http://www.theses.fr/2004MON20161.
Turko, Anton. "Approche comportementale des marchés financiers : options et prévisions." Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX32038.
This work aims at giving a more comprehensive understanding of the way prices are set on option markets, and prediction models are derived. To facilitate their understanding, we propose to split the price formation process into two phases: first, how an equilibrium can be reached, taking into account the way opposing operators see future market movements, and why an equilibrium may or may not be reached; second, based upon the information revealed by active operators, once enough information has accumulated over time, we show that it is possible to give probabilities of market tendencies. This leads to a behavioral financial model built in the context of utility theory and stochastic dominance, allowing a better understanding of capital markets and option markets. The purpose of the model is to permit high-probability forecasts in a few specific situations. For this, information coming from buyers as well as sellers is exploited, each operator bringing his own particular vision of future price movements.
Conreaux, Stéphane. "Modélisation de 3-variétés à base topologique : application à la géologie." Vandoeuvre-les-Nancy, INPL, 2001. http://www.theses.fr/2001INPL032N.
Volumic modeling enables real objects to be represented by computer science objects. In geology, a 3D model may be defined by a set of surfacic objects partitioning the 3D space into regions. For instance, these surfaces can be horizons or faults (geological objects). A volumic object composed of 3-cells (tetrahedra or arbitrary polyhedra) is a second way to represent a 3D model. With this kind of representation, it is possible to attach several properties to the nodes of the mesh. Thanks to a topological kernel based on G-Maps, we study the following issues: defining efficient data structures enabling the decomposition of objects into discrete elements to be represented; generating and editing meshes for surfacic and volumic objects (removing cells, splitting cells, ...); and using a multi-purpose operation called corefinement. We also present several geological applications using the corefinement operation: insertion of a gridded channel in a regular grid (the intersected cells of the channel and the grid match exactly), and Boolean operations between geological objects.
Agnaou, Youssef Joseph. "Analyse statistique de données de croissance humaine : estimation et ajustement paramétriques, non paramétriques, et par réseaux de neurones." Bordeaux 1, 2001. http://www.theses.fr/2001BOR12404.
Ollier, Sébastien. "Des outils pour l'intégration des contraintes spatiales, temporelles et évolutives en analyse des données écologiques." Lyon 1, 2004. http://www.theses.fr/2004LYO10293.
Nguegang, Kamwa Blandine. "Stratégie d'échantillonnage des mesures LIBS in situ de la teneur en or dans des échantillons miniers : optimisation par analyse statistique." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/69119.
In Quebec, 19 gold mines produce more than C$1.8 billion of gold annually. In these mines, hundreds of rock samples are collected daily and sent to the laboratory to determine their gold concentrations. Since laboratory results are only available after 24 to 48 hours, there is a direct negative impact on mining activities. Technological advances in recent years suggest that Laser-Induced Breakdown Spectroscopy (LIBS) may be a promising technology for real-time, in-situ measurement of the gold content of rock samples. Considering the size of each spot produced by the laser on a rock sample, namely 500 µm, many shots are required in order to obtain a result representative of the sample analyzed. For example, for a 50 cm long core sample with 70 to 80 % of its surface analyzed, 10,000 laser shots were fired to ensure a result representative of the sample, with an acquisition time of half a day in the laboratory, which is too long for practical application in mines. For this reason, the objective of this project is to minimize the number of LIBS shots required on a sample to be analyzed, while remaining representative of the latter, and thus obtain a reliable and accurate measurement of the gold content. For this, a descriptive statistical analysis combined with several elaborated sampling patterns was applied to the 10,000 LIBS data points. By setting a compromise between the number of shots fired on a sample and the analysis time, the Loop pattern minimizes the number of shots with an acceptable analysis time. From the latter, a sampling protocol was developed: to be representative of core samples, 1,500 shots are needed, whereas for rock samples, only 100 shots are needed. However, it would be important to be