Theses on the topic "Produit de données"
Consult the top 50 theses for your research on the topic "Produit de données".
Diallo, Thierno M. L. "Approche de diagnostic des défauts d’un produit par intégration des données de traçabilité unitaire produit/process et des connaissances expertes". Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10345.
This thesis, which is part of the Traçaverre Project, aims to optimize product recalls when the production process is not batch-based and unit traceability of produced items is available. The objective is to minimize the number of recalled items while ensuring that all defective items are recalled. We propose an efficient recall procedure that exploits the possibilities offered by unitary traceability and uses a diagnostic function. For complex industrial systems for which human expertise is not sufficient and no physical model is available, unitary traceability provides opportunities to better understand and analyse the manufacturing process by re-enacting the life of the product through the traceability data. The integration of product and process unitary traceability data represents a potential source of knowledge to be implemented and exploited. This thesis proposes a data model for the coupling of these data, based on two standards, one dedicated to production and the other dealing with traceability. We developed a data-driven diagnostic function after having identified and integrated the necessary data. This diagnostic function was built with a learning approach and integrates knowledge about the system to reduce the complexity of the learning algorithm. In the proposed recall procedure, when the equipment causing the fault is identified, the health status of this equipment in the neighbourhood of the manufacturing time of the defective product is evaluated in order to identify other products likely to present the same defect. The overall approach was applied to two case studies: the first in the glass industry, the second on the Tennessee Eastman process benchmark.
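To illustrate the neighbourhood-based recall step described in this abstract, here is a minimal, hypothetical sketch (record layout, names and time window are ours, not the thesis's implementation):

```python
from datetime import datetime, timedelta

# Hypothetical traceability records: (product_id, equipment_id, manufacturing_time)
records = [
    ("P001", "E7", datetime(2015, 3, 10, 8, 0)),
    ("P002", "E7", datetime(2015, 3, 10, 8, 5)),
    ("P003", "E2", datetime(2015, 3, 10, 8, 6)),
    ("P004", "E7", datetime(2015, 3, 10, 9, 40)),
]

def products_to_recall(records, faulty_equipment, defect_time, window_minutes=30):
    """Select products made on the faulty equipment while its health
    status was suspect, i.e. within a time window around the defect."""
    window = timedelta(minutes=window_minutes)
    return [pid for pid, eq, t in records
            if eq == faulty_equipment and abs(t - defect_time) <= window]

# Recall every item produced on E7 within 30 minutes of the defective unit.
print(products_to_recall(records, "E7", datetime(2015, 3, 10, 8, 5)))
```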
Shariat, Ghodous Parisa. "Modélisation intégrée de données de produit et de processus de conception". Lyon 1, 1996. http://www.theses.fr/1996LYO10208.
Bevilacqua, Elsa. "Etude chimique et minéralogique des peintures : analyse sur poudre par méthodes X, traitement de données". Lille 1, 1989. http://www.theses.fr/1989LIL10155.
Texto completoBenson, Marie Anne. "Pouvoir prédictif des données d'enquête sur la confiance". Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/69497.
Confidence survey data are time series containing the responses to questions aiming to measure the confidence and expectations of economic agents about future economic activity. The richness of these data and their availability in real time attract the interest of many forecasters, who see them as a way to improve traditional forecasts. In this thesis, I assess the predictive power of survey data for the future evolution of Canadian GDP, comparing the forecasting performance of the Conference Board of Canada's own confidence indices with indicators I construct using principal component analysis. Using three simple linear models, I carry out an out-of-sample forecasting experiment with rolling windows over the period 1980 to 2019. The results show that principal component analysis provides better-performing indicators than the indices produced by the Conference Board. However, the study cannot clearly show that confidence unambiguously improves forecasting once the lagged growth rate of GDP is added to the analysis.
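A minimal sketch of the kind of pipeline this abstract describes — first principal component of standardized survey responses, then one-step-ahead rolling-window forecasts — on synthetic data and with our own variable names, not the thesis's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)
surveys = rng.normal(size=(160, 12))   # hypothetical quarterly survey questions
gdp_growth = rng.normal(size=160)      # hypothetical GDP growth series

# First principal component of the standardized survey responses.
z = (surveys - surveys.mean(0)) / surveys.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
confidence_index = z @ vt[0]

# Rolling-window out-of-sample forecast: regress next-quarter growth
# on the index and lagged growth, re-estimating on each 80-quarter window.
window, forecasts = 80, []
for t in range(window, len(gdp_growth) - 1):
    X = np.column_stack([np.ones(window),
                         confidence_index[t - window:t],
                         gdp_growth[t - window:t]])
    y = gdp_growth[t - window + 1:t + 1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    forecasts.append(beta @ [1.0, confidence_index[t], gdp_growth[t]])
```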
Herve, Baptiste. "Conception de service dans les entreprises orientées produit sur la base des systèmes de valorisation de données". Thesis, Paris, ENSAM, 2016. http://www.theses.fr/2016ENAM0026/document.
In an increasingly digital industrial landscape, the opportunities for companies to innovate and answer previously inaccessible needs are increasing. In this framework, the Internet of Things appears as a high-potential technology. This innovation lever, where value creation is principally based on data, is not tangible by nature, which is why we consider it as a service in this thesis. However, the designer faces a complex universe in which a large number of areas of expertise and knowledge are engaged. This is the reason why we propose in this thesis a design methodology model organizing the service, the domain knowledge and the data discovery technologies in an optimized process to design for the Internet of Things. This model was tested at e.l.m. leblanc, a company of the Bosch group, in the development of a connected boiler and its services.
Juillard, Hélène. "Méthodes d'estimation et d'estimation de variance pour une enquête longitudinale : application aux données de l'Etude Longitudinale Française depuis l'Enfance (Elfe)". Thesis, Toulouse 1, 2016. http://www.theses.fr/2016TOU10026/document.
In this document, we are interested in estimation under a design-based framework, where the randomness arises from the sample selection. Each sampling design leads to a sampling variance; after the survey, the estimation of this variance serves as a measure of precision (or uncertainty) for the estimators of the parameters under study. The 2011 ELFE cohort comprises more than 18,000 children whose parents consented to their inclusion. In each of the selected maternity units, babies born during four specific periods representing the four seasons of 2011 were selected. ELFE is the first longitudinal study of its kind in France, tracking children from birth to adulthood and examining every aspect of these children's lives from the perspectives of health, social sciences and environmental health. The ELFE cohort was selected through a non-standard sampling design called cross-classified sampling, with independent selections of the sample of maternity units and of the sample of days. In this work, we propose unbiased variance estimators to handle this type of sampling design, and we derive specific variance estimators adapted to the ELFE case. Tracking of the babies starts when they are just a few days old and still at the maternity unit; the parents are then contacted for telephone interviews when the children reach the ages of two months, one year, two years, three and a half years and five and a half years. The survey is therefore longitudinal. The first chapter of this thesis introduces concepts from survey sampling theory and presents the ELFE survey (French Longitudinal Study from Childhood), whose data illustrate the theoretical results derived in this thesis. The second chapter focuses on the cross-classified design and provides unbiased estimators and simplified variance estimators for this design in a general theoretical framework; it is also shown that this design is generally less efficient than the conventional two-stage sampling design. Chapter three continues the previous one: for the cross-classified sampling design, five unbiased Yates-Grundy-like variance estimators are obtained from five different possible decompositions of the variance. Chapter four is an article allowing the reader to distinguish the cross-classified sampling design from the two-stage sampling design, and to implement the sampling and estimation steps in R, SAS and Stata. Chapter five is devoted to variance computation and variance estimation for a cohort survey with monotone non-response. Chapter six is a methodological report for users in which the appropriate variance estimation for the ELFE design is explained and implemented in R, SAS and Stata. All the simulation results presented in this document are reproducible, the code being provided in the annex.
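To fix ideas on the cross-classified design mentioned above (the notation here is ours, not the thesis's): two independent samples S_I (maternity units, inclusion probabilities π_i) and S_J (days, π_j) are drawn, and the observed births are their crossing. The Horvitz-Thompson estimator of a total and its variance then take the form

\[
\hat{t}_y = \sum_{i \in S_I}\sum_{j \in S_J} \frac{y_{ij}}{\pi_i \pi_j},
\qquad
V(\hat{t}_y) = \sum_{i,i'} D^{I}_{ii'}\, t_{i\cdot}\, t_{i'\cdot}
\;+\; \sum_{j,j'} D^{J}_{jj'}\, t_{\cdot j}\, t_{\cdot j'}
\;+\; \sum_{i,i'}\sum_{j,j'} D^{I}_{ii'} D^{J}_{jj'}\, y_{ij}\, y_{i'j'},
\]

where \(D^{I}_{ii'} = \operatorname{Cov}(\mathbb{1}_{i\in S_I},\mathbb{1}_{i'\in S_I})/(\pi_i\pi_{i'})\), \(t_{i\cdot} = \sum_j y_{ij}\), and similarly for \(D^{J}\) and \(t_{\cdot j}\); the cross term arises because the two samples are independent. The five Yates-Grundy-type estimators mentioned in the abstract correspond to different ways of decomposing and estimating these terms.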
Tursi, Angela. "Ontology-based approach for product-driven interoperability of enterprise production systems". Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10086/document.
In recent years, the interoperability of enterprise applications has become the leitmotiv of developers and designers in systems engineering. Most approaches to interoperability in the enterprise have as their primary objective the adjustment and adaptation of the types and data structures necessary for implementing collaboration between companies. In the manufacturing field, the product is a central component. Scientific works propose solutions taking into account information systems derived from product technical data throughout the product life cycle, but this information is often uncorrelated. Product data management (PDM) is commonly implemented to manage all information concerning products throughout their life cycle, and the modelling of management and manufacturing processes is widely applied to both physical products and services. However, these models are generally independent "islands" that ignore the problem of interoperability between the applications that support them. The objective of this thesis is to study the problem of interoperability applied to applications used in the manufacturing environment, and to define an ontological model of the knowledge that enterprises hold about the products they manufacture, based on technical data, ensuring the interoperability of enterprise systems. The outcome of this research is the formalization of a methodology for identifying a product-centric information system in the form of an ontology, for the interoperability of applications in manufacturing companies, based on existing standards such as ISO 10303 and IEC 62264.
Kubler, Sylvain. "Premiers travaux relatifs au concept de matière communicante : Processus de dissémination des informations relatives au produit". Electronic Thesis or Diss., Université de Lorraine, 2012. http://www.theses.fr/2012LORR0130.
Over the last decade, communities involved with intelligent-manufacturing systems (IMS - Intelligent Manufacturing Systems, HMS - Holonic Manufacturing Systems) have demonstrated that systems integrating intelligent products can be more efficient, flexible and adaptable. Intelligent products may prove beneficial economically, for dealing with product traceability and for information sharing along the product lifecycle. Nevertheless, open questions remain, such as the specification of what information should be gathered, stored and distributed, and how it should be managed during the lifecycle of the product. The contribution of this thesis is to define a process for disseminating information related to the product over its lifecycle. This process is combined with a new paradigm, which drastically changes the way we view the material: the concept aims to make the material intrinsically and wholly "communicating". The data dissemination process allows users to store context-sensitive information on the communicating product. In addition, this thesis gives insight into the technological and scientific research fields inherent to the concept of "communicating material" that remain to be explored.
Minel, Stéphanie. "Démarche de conception collaborative et proposition d'outils de transfert de données métier : application à un produit mécanique "le siège d'automobile"". Paris, ENSAM, 2003. http://www.theses.fr/2003ENAM0030.
Texto completoAhmed-Nacer, Mohamed. "Un modèle de gestion et d'évolution de schéma pour les bases de données de génie logiciel". Grenoble INPG, 1994. http://www.theses.fr/1994INPG0067.
We first review existing work on schema evolution and on the evolution of software process models; we define evolution criteria and show that the main approaches do not meet software engineering needs.
We then present our model: it allows the simultaneous existence of several viewpoints on the object base, the composition of schemas and, finally, the expression of different evolution policies for these schemas, with each application able to define its own evolution policy.
Viewpoint management is based on versioning of the metabase. The consistency of the object base and of the overall schema management and evolution system is ensured by expressing constraints at the metabase level. Schema composition uses a software configuration technique applied to types, and the definition of evolution policies uses the active-database capabilities of the Adèle system.
Mony, Charles. "Un modèle d'intégration des fonctions conception-fabrication dans l'ingénierie de produit : définition d'un système mécanique en base de données objet". Châtenay-Malabry, Ecole centrale de Paris, 1992. http://www.theses.fr/1992ECAP0232.
Texto completoBriard, Tristan. "Des données captées aux créations de valeurs : Proposition d’une méthode outillée pour structurer la conception amont des produits intelligents et connectés". Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAE095.
Recent advances in information and communication technologies and data science have led to the development of smart connected products. These products have new capabilities that create value for both the user and the manufacturer. These developments are leading to a major paradigm shift. To take full advantage of the potential of smart connected products, manufacturers need to adopt new processes, especially in the design phase. Indeed, the choice of data that can be captured by the product at the design stage defines the potential value creation for the rest of its life cycle. The aim of this thesis is therefore to formalise a methodological structure to guide designers in the integration of value creation based on captured data. In order to propose a relevant and effective method, we first explore the challenges related to design and captured data. Based on the challenges identified, a method is then constructed. It is structured in two phases, each supported by a dedicated tool. The first phase is a creativity phase that systematically generates potential value creation based on captured data. The second phase is a decision-making phase in which the previously generated value creations are systematically ranked according to sustainability criteria. Experiments have validated the relevance and effectiveness of the proposed methodology. Through the method and its tools, this thesis contributes to scientific and industrial research supporting the paradigm shift brought by digital technologies in product design.
Chambolle, Frédéric. "Un modèle produit piloté par les processus d'élaboration : application au secteur automobile dans l'environnement STEP". Châtenay-Malabry, Ecole centrale de Paris, 1999. http://www.theses.fr/1999ECAP0623.
Texto completoLefebvre, Valérie. "Risque chimique dans les laboratoires de biologie moléculaire : de l'approche théorique aux données de l'observation". Bordeaux 2, 1999. http://www.theses.fr/1999BOR23070.
Texto completoCorbière, François de. "L'amélioration de la qualité des données par l'électronisation des échanges à l'épreuve des fiches produit dans le secteur de la grande distribution". Nantes, 2008. http://www.theses.fr/2008NANT4020.
Our research question concerns the influence of the organization of electronic exchanges on data quality improvement. Product information is the set of data that identify and describe a manufacturer's product. The organization of electronic exchanges involves the sending information systems, the receiving information systems and their interconnection. A qualitative, case-study-based research design is used to understand how the organization of electronic exchanges is perceived to contribute to data quality improvement from manufacturers' and retailers' points of view. Our results show that sending, receiving and interconnection architectures, exchange automation and exchange standardization all influence the perceived improvement of some data quality dimensions. Taking a processing view of exchanges, our main theoretical contribution is to show that this set of factors can be conceptualized through interdependence. We define interdependence at three levels: technical, informational and organizational. At each of these levels, we propose that interdependence types can be positioned between two extremes: dyadic interdependence, which refers to multiple sequential interdependencies between two firms, and sector interdependence, which refers to a pooled interdependency among all the firms.
Diop, Mamadou. "Décomposition booléenne des tableaux multi-dimensionnels de données binaires : une approche par modèle de mélange post non-linéaire". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0222/document.
This work is dedicated to the study of boolean decompositions of binary multidimensional arrays using a post nonlinear mixture model. In the first part, we introduce a new approach for the boolean factorization of binary matrices (BFBM) based on a post nonlinear mixture model. Unlike existing binary matrix factorization methods, the proposed method is equivalent to the boolean factorization model when the matrices are strictly binary, and thus gives more interpretable results in the case of correlated sources and lower-rank matrix approximations compared to other state-of-the-art algorithms. A necessary and sufficient condition for the uniqueness of the BFBM is also provided. Two algorithms based on multiplicative update rules are proposed and tested in numerical simulations, as well as on a real dataset. The generalization of this approach to binary multidimensional arrays (tensors) leads to the boolean factorization of binary tensors (BFBT). The proof of the necessary and sufficient condition for the boolean decomposition of binary tensors is based on a notion of boolean independence of binary vectors. The multiplicative algorithm based on the post nonlinear mixture model is extended to the multidimensional case. We also propose a new algorithm based on an AO-ADMM (Alternating Optimization-ADMM) strategy. These algorithms are compared to state-of-the-art algorithms on simulated and real data.
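To make the model concrete: the boolean product replaces the sum of the ordinary matrix product with a logical OR, and on strictly binary factors it coincides with a post nonlinear (saturating) reading of the bilinear mixture. A small sketch of this equivalence (illustrative only, not the authors' algorithms):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(6, 3))   # binary factors, hypothetical sizes
B = rng.integers(0, 2, size=(3, 8))

# Boolean matrix product: x_ij = OR_k (a_ik AND b_kj).
X_bool = (A @ B > 0).astype(int)

# Post nonlinear reading of the same operation: the saturating map
# x_ij = 1 - prod_k (1 - a_ik * b_kj) applied to the bilinear mixture
# coincides with the boolean product whenever A and B are strictly binary.
X_pnl = 1 - np.prod(1 - A[:, :, None] * B[None, :, :], axis=1)

assert np.array_equal(X_bool, X_pnl.astype(int))
```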
El, Amraoui Yassine. "Faciliter l'inclusion humaine dans le processus de science des données : de la capture des exigences métier à la conception d'un workflow d'apprentissage automatique opérationnel". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4017.
When data scientists need to create machine learning workflows to solve a problem, they first seek to understand the business needs, analyze the data, and then experiment to find a solution. They judge the success of each attempt using metrics such as accuracy, recall, and F-score. If these metrics meet expectations on the test data, the attempt is a success; otherwise, it is considered a failure. However, they often do not pinpoint why a workflow fails before trying a new one. This trial-and-error process can involve many attempts, because it is not guided and relies on the preferences and knowledge of the data scientist. This intuitive method leads to varying trial counts among data scientists. Moreover, evaluating solutions on a test set does not guarantee performance on real-world data, so additional monitoring is needed once models are deployed; if a workflow performs poorly, the whole process might need to be restarted with adjustments based on new data. Furthermore, each data scientist learns from their own experiences without sharing knowledge. This lack of collaboration can lead to repeated mistakes and oversights, and the interpretation of similarity between use cases can vary among practitioners, making the process even more subjective. Overall, the process lacks structure and depends heavily on the individual knowledge and decisions of the data scientists involved. In this work, we present how to mutualize data science knowledge related to anomaly detection in time series in order to help data scientists generate machine learning workflows by guiding them along the phases of the process. To this end, we make three main contributions. Contribution 1: integrating data, business requirements, and solution components in ML workflow design. While automatic approaches focus on the data, our approach considers the dependencies between the data, the business requirements, and the solution components; this holistic approach ensures a more comprehensive understanding of the problem and guides the development of appropriate solutions. Contribution 2: customizing workflows for tailored solutions by leveraging partial and modular configurations. Our approach assists data scientists in customizing workflows for their specific problems, by employing various variability models and a constraint system. This setup enables users to receive feedback based on their data and business requirements, possibly only partially identified. We also showed that users can access previous experiments based on problem settings or create entirely new ones. Contribution 3: enhancing software product line knowledge through the exploitation of new products. We propose a practice-driven approach to building an SPL as a first step toward designing generic solutions for detecting anomalies in time series, while capturing new knowledge and capitalizing on existing knowledge when dealing with new experiments or use cases. The incremental acquisition of knowledge and the instability of the domain are supported by the SPL through its structuring and the exploitation of partial configurations associated with past use cases. As far as we know, this is the first application of the SPL paradigm in such a context and with a knowledge-acquisition objective. By capturing practices in partial descriptions of the problems and descriptions of the solutions implemented, we obtain the abstractions needed to reason about datasets, solutions, and business requirements. The SPL is then used to produce new solutions, compare them to past solutions, and identify knowledge that was not explicit. The growing abstraction supported by the SPL also brings other benefits: in knowledge sharing, we have observed a shift in the approach to creating ML workflows, focusing on analyzing problems before looking for similar applications.
Moalla, Néjib. "Amélioration de la qualité des données du produit dans le contexte du cycle de vie d’un vaccin : une approche d’interopérabilité dirigée par les modèles". Lyon 2, 2007. http://theses.univ-lyon2.fr/sdx/theses/lyon2/2007/moalla_n.
To reach industrial excellence, data quality is one of the essential pillars to handle in any improvement or optimization approach; it is a paramount need to ensure that the product meets customer requirements. In the pharmaceutical industry, and more particularly in the vaccine industry, the definition of the vaccine product is very complex given its molecular structure. Data quality proves to be a priority across the many definitions of the product (biological, pharmaceutical, industrial, etc.), and especially in the face of the many restrictions and regulatory recommendations imposed by customers such as health authorities. In this context, and faced with a multitude of business activities supported by disconnected information systems, ensuring interoperability between these heterogeneous systems makes it possible to handle the specifications of the various business scopes during information exchanges. The deployment of model-driven architecture enables a functional description of processes to be transformed into data models expressed on various platforms. Within the logistic perimeter of the vaccine industry, we are interested in ensuring the quality of some critical data in our ERP through the deployment of the concepts of model-driven interoperability. The definition of various levels of reference frames enables us to structure the models thus generated and to share them with the actors of the logistic perimeter. Ultimately, our approach aims at reducing the cost of the product.
Durupt, Alexandre. "Définition d'un processus de rétro-conception de produit par intégration des connaissances de son style de vie". Troyes, 2010. http://www.theses.fr/2010TROY0009.
This thesis concerns the reverse engineering (RE) of mechanical objects. This activity consists in generating a CAD model of an object from the 3D point cloud obtained by digitizing it. The state of the art on geometry recognition in point clouds suggests approaches that yield a CAD model that is almost unusable for redesign (geometrical parameters without design intents). This thesis defines a RE methodology that provides a parameterised CAD model including design intents such as manufacturing and functional aspects, so that redesign can be accelerated because the design intents have been brought to light. In the product design domain, solutions such as Knowledge-Based Engineering ensure the management of design intents; this thesis suggests adapting these solutions to RE. A method called Knowledge-Based Reverse Engineering (KBRE) was created. It allows analysing the mechanical object according to design intents embodied by geometrical features whose parameters are extracted from the 3D point cloud. A CAD model including design intents (manufacturing, functional requirements) can then be created. This work is illustrated by industrial examples and implemented in a viewer called the KBRE system.
Roumiguie, Antoine. "Développement et validation d’un indice de production des prairies basé sur l’utilisation de séries temporelles de données satellitaires : application à un produit d’assurance en France". Thesis, Toulouse, INPT, 2016. http://www.theses.fr/2016INPT0030/document.
An index-based insurance is proposed in response to the increasing number of droughts impacting grasslands. It is based on a forage production index (FPI) retrieved from medium-resolution remote sensing images to estimate the impact of a hazard in a specific geographical area. The main issue in the development of such an insurance is obtaining an accurate estimation of losses. This study focuses on two objectives: validating the FPI and improving it. A validation protocol is defined to limit the problems attached to the use of medium-resolution products and scaling issues in the comparison process. The FPI is validated against different data: ground production measurements (R² = 0.81; R² = 0.71), high-resolution remote sensing images (R² = 0.78 - 0.84) and modelled data (R² = 0.68). This study also points out areas of improvement for the FPI chain. A new index, based on semi-empirical modelling combining remote sensing data with exogenous data describing climatic conditions and grassland phenology, improves the accuracy of production estimation by 18.6%. The results of this study open several new research perspectives on FPI development and its potential practical application.
Lambert, Thomas. "On the Effect of Replication of Input Files on the Efficiency and the Robustness of a Set of Computations". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0656/document.
The increasing importance of High Performance Computing (HPC) and Big Data applications creates new issues in parallel computing. One of them is communication, the data transferred from one processor to another. Such data movements have an impact on computational time, inducing delays and increased energy consumption. If replication, of either tasks or files, generates communication, it is also an important tool to improve resiliency and parallelism. In this thesis, we focus on the impact of the replication of input files on the overall amount of communication, concentrating on two practical problems. The first is parallel matrix multiplication, where the goal is to induce as few replications as possible in order to decrease the amount of communication. The second is the scheduling of the "Map" phase in the MapReduce framework; in this case, replication is an input of the problem, and the goal is to use it in the best possible way. In addition to the replication issue, this thesis also compares static and dynamic approaches to scheduling: static approaches compute schedules before the computation starts, while dynamic approaches compute them during the computation itself. We design hybrid strategies in order to take advantage of the strengths of both. First, we relate communication-avoiding matrix multiplication to a square partitioning problem in which load balancing is given as an input: the goal is to split a square into zones (whose areas depend on the relative speed of the resources) while minimizing the sum of their half-perimeters. We improve the existing results in the literature for this problem with two additional approximation algorithms, and we propose an alternative model using a cube partitioning problem, for which we prove the NP-completeness of the associated decision problem and design two approximation algorithms. Finally, we implement the algorithms for both problems, relying on the StarPU library, in order to compare the resulting schedules for matrix multiplication. Second, in the Map phase of MapReduce scheduling, the input files are replicated and distributed among the processors. For this problem we propose two metrics. In the first, we forbid non-local tasks (tasks processed on a processor that does not own their input files) and, under this constraint, aim at minimizing the makespan. In the second, we allow non-local tasks and aim at minimizing their number while also minimizing the makespan. For the theoretical study, we focus on tasks with homogeneous computation times. First, we relate a greedy algorithm for the makespan metric to a "balls-into-bins" process, proving that this algorithm produces solutions with expected overhead (the difference between the number of tasks on the most loaded processor and the number of tasks in a perfect distribution) equal to O(m log m), where m denotes the number of processors. Second, we relate this scheduling problem (with non-local tasks forbidden) to a graph orientation problem, and therefore prove, using results from the literature, that with high probability there exists a near-perfect assignment (whose overhead is at most 1), together with polynomial-time optimal algorithms. For the communication metric, we provide new algorithms based on a graph model close to matching problems in bipartite graphs. We prove that these algorithms are optimal for both the communication and makespan metrics. Finally, we provide simulations based on traces from a MapReduce cluster to test our strategies in realistic settings, and show that the proposed algorithms perform very well in the case of low or medium variance of the computation times of the different tasks of a job.
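A minimal sketch of the greedy, locality-constrained Map-phase assignment that this abstract relates to a balls-into-bins process (toy instance and names are ours, not the thesis's implementation):

```python
import random

def greedy_map_schedule(task_replicas, num_procs):
    """Greedy Map-phase scheduling with non-local tasks forbidden:
    each task goes to the least-loaded processor among those that
    hold a replica of its input file (a balls-into-bins-style process)."""
    load = [0] * num_procs
    assignment = {}
    for task, owners in task_replicas.items():
        proc = min(owners, key=lambda p: load[p])
        assignment[task] = proc
        load[proc] += 1
    overhead = max(load) - -(-sum(load) // num_procs)  # max load minus ceil(mean load)
    return assignment, overhead

# Hypothetical instance: 1000 tasks, 10 processors, each input replicated twice.
random.seed(0)
tasks = {t: random.sample(range(10), 2) for t in range(1000)}
_, overhead = greedy_map_schedule(tasks, 10)
print("overhead:", overhead)
```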
Melhem, Mariam. "Développement des méthodes génériques d'analyses multi-variées pour la surveillance de la qualité du produit". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0543.
The microelectronics industry is a highly competitive field, constantly confronted with several challenges. To evaluate the manufacturing steps, quality tests are applied during and at the end of production. As these tests are discontinuous, a defect or failure of the equipment can cause a deterioration in product quality and a loss of manufacturing yield. Alarms are set off to indicate problems, but periodic alarms can be triggered, resulting in alarm floods. On the other hand, a large quantity of equipment data obtained from sensors is available. Alarm management, interpolation of quality measurements and reduction of correlated equipment data are therefore required. In this work, we aim to develop generic multi-variate analysis methods allowing all the available information (equipment health indicators, alarms) to be aggregated in order to predict product quality, taking into account the quality of the various manufacturing steps. Based on the pattern recognition principle, data from the degradation trajectory are compared with health indices for failing equipment. The objective is to predict the remaining number of products before the performance related to customer specifications is lost, and to isolate the equipment responsible for degradation. In addition, regression-based methods are used to predict product quality while taking into account the correlation and dependency relationships existing in the process. A model for alarm management is constructed in which criticality and similarity indices are proposed; alarm data are then used to predict product scrap. An application to industrial data from STMicroelectronics is provided.
Calero, Pastor Maria. "Méthode simplifiée d'évaluation de la performance énergétique utilisable en conception et alimentée par des données issues de politiques publiques de produit : application aux systèmes de chauffage de bâtiments". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAI015/document.
The environmental performance of products largely influences the performance of systems. Moreover, systems still have an untapped energy-saving potential when environmental performance is considered at system level rather than at the level of the individual products of which they are composed. The objective of this work is to propose an approach for energy performance assessment at system level using information and data from European product policies (Ecodesign, Energy Labelling, Green Public Procurement and the EU Ecolabel). The hypothesis is that environmental product policies, which have been very useful in establishing a homogeneous rating scheme in the EU market for individual products, can also be advantageously used in a method to assess the energy performance of systems. This research work proposes a simplified method for supporting the design of well-performing heating systems using data from EU product policies that is available during the design stage. First, system modelling with a top-down approach is used so that system aspects (geographical conditions, building characteristics, etc.) are taken into account. Second, the system energy performance is calculated with a bottom-up approach, from the performance of the products and sub-systems composing the system. The method has five steps divided into two main phases: diagnosis of the initial system, and improvement. The method is supported by an original calculation tool which determines the energy parameters (energy demand, energy losses, energy consumption and low-emission energy efficiency) at system level using performance figures from EU product policies. It helps assess how good a heating system is by defining worst, benchmark and best possible systems. The method is flexible, allows different product configurations to be assessed and can hence support the design activities of heating systems. The method is tested on a real case study, the re-design of the existing heating systems of a dwelling in northern Italy, including a solar hot water system and a space heating system. The case study demonstrates the potential for improvement of the heating systems based on the results produced by the method, by helping select products currently available on the market. In addition, based on the assessment, several improved design alternatives can be proposed, combining different performances of the products which compose the heating systems. The dissertation also analyses the evolution of the different approaches of EU product policies (product, extended product and system). In particular, the package concept defined in the energy labelling regulations for heating systems is studied in detail. The package label of Regulation 811/2013 is applied to the same case study so that the results can be compared with those of the previous sections. It is shown that the package concept can also support decisions made in the building design phase, especially the choice of appropriate components based on estimates of system performance. The link between building-related product policies and the Energy Performance of Buildings Directive is also analysed, and it is concluded that they should be better aligned.
Gorand, Olivier. "Création d'une base de données informatique de toxicologie industrielle dans la centrale nucléaire du Blayais". Bordeaux 2, 1998. http://www.theses.fr/1998BOR23069.
Texto completoPinte, Sébastien. "Identification de la séquence de fixation à l'ADN et de recherche de gènes cibles du produit du gène suppresseur de tumeurs HIC1". Lille 2, 2004. http://www.theses.fr/2004LIL2S015.
Texto completoMaleki, Elaheh. "A Systems Engineering-based semantic model to support “Product-Service System” life cycle". Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0064/document.
Product-service systems (PSS) result from the integration of heterogeneous components covering both tangible and intangible aspects (mechanical, electrical, software, process, organization, etc.). The process of developing a PSS is highly collaborative, involving a wide variety of stakeholders. This interdisciplinary nature requires standardized semantic repositories to handle the multitude of business views and facilitate the integration of all heterogeneous components into a single system. This is even more complex in the case of customizable PSS in the industrial sector. Despite the many methodologies in the literature, the management of PSS development processes is still limited in the face of this complexity. In this context, Systems Engineering (SE) could be an advantageous solution given its proven qualities for the modelling and management of complex systems. This thesis aims at exploring the potential of SE as a conceptual foundation to represent the various business perspectives associated with the life cycle of a PSS. A meta-model for PSS is proposed and verified in industrial cases, and an ontological model is presented as an application of part of the meta-model to structure the common repository of the ICP4Life platform.
Assouroko, Ibrahim. "Gestion de données et dynamiques des connaissances en ingénierie numérique : contribution à l'intégration de l'ingénierie des exigences, de la conception mécanique et de la simulation numérique". Compiègne, 2012. http://www.theses.fr/2012COMP2030.
Over the last twenty years, the deep changes observed in the field of product development have led to methodological change in the field of design. These changes have benefited from the significant development of Information and Communication Technologies (ICT) (such as PLM systems dedicated to product lifecycle management) and from collaborative engineering approaches, which play a key role in the improvement of the product development process (PDP). In the current PLM market, PLM solutions from different vendors still present strong heterogeneities and rely on proprietary technologies and formats for competitiveness and profitability reasons, which does not ease communication and sharing between the various ICTs contributing to the PDP. Our research work focuses on the PDP and aims to contribute to the improvement of the integrated management of mechanical design and numerical simulation data in a PLM context. The contribution proposes an engineering knowledge capitalization solution based on a product semantic relationship management approach, organized as follows: (1) a data structuring approach driven by so-called semi-structured entities, whose structure is able to evolve along the PDP; (2) a conceptual model describing the fundamental concepts of the proposed approach; (3) a methodology that facilitates and improves the management and reuse of engineering knowledge within design projects; and (4) a knowledge capitalization approach based on the management of the semantic relationships that exist or may exist between engineering entities within the product development process.
Mokhtarian, Hossein. "Modélisation intégrée produit-process à l'aide d'une approche de métamodélisation reposant sur une représentation sous forme de graphes : Application à la fabrication additive". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI013/document.
Additive manufacturing (AM) has created a paradigm shift in the product design and manufacturing sector due to its unique capabilities. However, the integration of AM technologies in mainstream production faces the challenge of ensuring reliable production and repeatable part quality. To this end, modeling and simulation play a significant role in enhancing the understanding of the complex multi-physics nature of AM processes. In addition, a central issue in modeling AM technologies is the integration of different models and the concurrent consideration of the AM process and the part to be manufactured. Hence, the ultimate goal of this research is to present and apply a modeling approach for integrated modeling in additive manufacturing. Accordingly, the thesis considers the product development process and presents the Dimensional Analysis Conceptual Modeling (DACM) framework to model products and manufacturing processes at the design stages of the product development process. The framework aims at providing simulation capabilities and a systematic search for weaknesses and contradictions in the models for the early evaluation of solution variants. The developed methodology is applied in multiple case studies to present models integrating AM processes and the parts to be manufactured. The results show that the proposed modeling framework is not only able to model the product and the manufacturing process, but also provides the capability to model them concurrently and to integrate existing theoretical and experimental models. The DACM framework contributes to design for additive manufacturing and helps the designer anticipate the limitations of the AM process and the part design earlier in the design stage. In particular, it enables the designer to make informed decisions on potential design alterations, AM machine redesign, and optimized part designs or process parameter settings. The DACM framework shows potential to be used as a metamodeling approach for additive manufacturing.
Cortés Morales, Diego. "Large-scale Vertical Velocities in the Global Open Ocean via Linear Vorticity Balance". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS061.
At oceanic basin scales, vertical velocities are several orders of magnitude smaller than their horizontal counterparts, making their direct measurement in the real ocean a formidable challenge. Their estimation therefore requires a combination of observation-based datasets and theoretical considerations. Historically, scientists have employed various techniques to estimate vertical velocities across different scales, constrained by the observations available at the time, ranging from methods utilizing in situ horizontal current divergence to those based on intricate omega-type equations. However, the Sverdrup balance has captured researchers' attention, and ours, due to its robust and straightforward description of ocean dynamics. One of its fundamental components is the linear vorticity balance (LVB: βv = f ∂z w), which introduces a vertical dimension to the conventional Sverdrup balance by connecting vertical motion to the meridional transport above it. To advance the theoretical prospect of estimating vertical velocities, the annual- and interannual-timescale patterns governing the linear vorticity balance are first identified within an eddy-permitting OGCM simulation. This analysis is initially conducted over the North Atlantic Ocean and subsequently expanded to the entire global ocean, focusing on scales larger than 5 degrees. The analysis reveals the feasibility of computing a robust vertical velocity field beneath the mixed layer using the LVB approach across large fractions of the water column in the interior regions of tropical and subtropical gyres and within some layers of the subpolar and austral circulation. Departures from the LVB occur in the western boundary currents, strong zonal tropical flows, subpolar gyres and at smaller scales, due to nonlinearities, mixing and bathymetry-driven contributions to the vorticity budget. The extensive validity of the LVB description of the global ocean provides a relatively simple foundation for estimating vertical velocities through the indefinite depth-integrated LVB. Using an OGCM, it is demonstrated that these estimates can accurately reproduce the time-mean amplitude and interannual variability of the vertical velocity field within substantial portions of the global ocean when compared to the reference model. Here, we build the DIOLIVE (indefinite Depth-Integrated Observation-based LInear Vorticity Estimates) product by applying the observation-based geostrophic velocities from ARMOR3D to the indefinite depth-integrated LVB formalism, with wind stress data from ERA5 serving as the boundary condition at the surface. This product contains vertical velocities spanning the global ocean's thermocline at 5 degrees horizontal resolution and 40 isopycnal levels over the 1993-2018 period. A comparative analysis between the DIOLIVE product and four alternative products, including one OGCM simulation, two reanalyses and an observation-based reconstruction based on the omega equation, is conducted using various metrics assessing the multidimensional features of the ocean's vertical flow. The omega-equation-based product displays large departures from the synchronicity and baroclinicity reproduced by the validation ensemble. However, in regions where the LVB holds as a valid assumption, the DIOLIVE product demonstrates a remarkable ability to replicate the baroclinic structure of the ocean, exhibiting satisfactory spatial consistency and notable agreement in terms of temporal variability when compared to the two reanalyses and the OGCM simulation.
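Restating the balance quoted above in equation form (standard notation; the choice of integration constant is ours for illustration), the linear vorticity balance and its indefinite depth integral read

\[
\beta v = f\,\frac{\partial w}{\partial z}
\quad\Longrightarrow\quad
w(z) = w(z_0) + \frac{\beta}{f}\int_{z_0}^{z} v\,dz',
\]

where \(f\) is the Coriolis parameter, \(\beta = df/dy\) its meridional gradient, and \(v\) the meridional velocity (geostrophic, from ARMOR3D, in the DIOLIVE product). Per the abstract, the integration is anchored at the surface with a boundary condition derived from ERA5 wind stress; a standard choice for such a condition is the Ekman pumping \(w_e = \hat{z}\cdot\nabla\times\left(\boldsymbol{\tau}/\rho_0 f\right)\).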
Huynh, Ngoc Tho. "A development process for building adaptative software architectures". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0026/document.
Adaptive software is a class of software able to modify its own internal structure, and hence its behavior, at runtime in response to changes in its operating environment. Adaptive software development has been an emerging research area of software engineering in the last decade. Many existing approaches use techniques derived from software product lines (SPLs) to develop adaptive software architectures; they propose tools, frameworks or languages to build such architectures but do not guide developers in using them. Moreover, they assume that all elements specified in the SPL are available in the architecture for adaptation, so the adaptive software architecture may embed unnecessary elements (components that will never be used), limiting the possible deployment targets. On the other hand, component replacement at runtime remains a complex task, since it must ensure the validity of the new version in addition to preserving the correct completion of ongoing activities. To cope with these issues, this thesis proposes an adaptive software development process in which tasks, roles, and associated artifacts are explicit. The process aims at specifying the information necessary for building adaptive software architectures, and its result is an adaptive software architecture that contains only the elements necessary for adaptation. An adaptation mechanism based on transaction management is also proposed to ensure consistent dynamic adaptation: the adaptation must preserve the system state and ensure the correct completion of ongoing transactions. In particular, transactional dependencies are specified at design time in the variability model; based on these dependencies, components in the architecture include the mechanisms necessary to manage transactions consistently at runtime.
Cheballah, Kamal. "Aides à la gestion des données techniques des produits industriels". Ecully, Ecole centrale de Lyon, 1992. http://www.theses.fr/1992ECDL0003.
Texto completoCarbonneaux, Yves. "Conception et réalisation d'un environnement informatique sur la manipulation directe d'objets mathématiques, l'exemple de Cabri-graphes". Phd thesis, Université Joseph Fourier (Grenoble), 1998. http://tel.archives-ouvertes.fr/tel-00004882.
Texto completoAit, el mahjoub Youssef. "Performance evaluation of green IT networks". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG011.
Energy saving in telecommunication networks is a major objective in reducing overall consumption. The IT sector already contributes strongly to this increase: many techniques for reducing consumption in other industries or services result in more IT and telecommunications (the "Green by IT" approach) and therefore in increased consumption in IT domains. It is therefore important, from an economic point of view, to reduce the energy consumption per transmitted or computed bit (the "Green IT" concept). In the network domain, energy optimization is mainly based on adapting the architecture and the resources employed to the traffic flows to be carried and the promised quality of service. We therefore seek to adapt resources to demand, which results in dynamic dimensioning adapted to the load, by nature different from the worst-case dimensioning commonly used. In terms of technology, this requires network equipment to have "sleep", "deep sleep" or "hibernate" modes (terminology varies among suppliers), all associated with the same concept: putting the equipment to sleep to reduce its energy consumption. For the performance/energy trade-off to be relevant, it seems important to use energy consumption formulas derived from network resource utilization. The approaches we propose are based on the theory of queueing networks, Markov chain analysis (analytically, by proposing new product forms, and numerically, by suggesting new resolution algorithms) and the theory of stochastic comparison. At the application level, we have addressed various issues: DVFS mechanisms with changing processor speeds, task migration between physical servers in a data center (load balancing, consolidation), optical networks with efficient filling of optical containers, and intermittent energy distribution in sensor networks (and LoRa networks), including a new model of Energy Packet Networks (EPNs).
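As a toy illustration of the Markov-chain style of energy/performance analysis mentioned here — not a model from the thesis — consider a single device with sleep, idle and busy states and illustrative rates:

```python
import numpy as np

# Toy continuous-time Markov chain for one device: states
# 0 = sleep, 1 = idle, 2 = busy. All rates and powers are made up.
wake, fall_asleep, arrival, service = 2.0, 0.5, 1.0, 4.0
Q = np.array([
    [-wake,          wake,                  0.0    ],   # sleep -> idle
    [fall_asleep, -(fall_asleep + arrival), arrival],   # idle  -> sleep/busy
    [0.0,            service,              -service],   # busy  -> idle
])

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

power = np.array([5.0, 60.0, 120.0])  # watts per state (illustrative)
print("stationary distribution:", pi)
print("mean power (W):", pi @ power)
```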
Pancrace, Claire. "Nouvelles perspectives sur les produits naturels de cyanobactéries d'eau douce et leurs clusters de gènes, apportées par l'intégration de données à haut débit". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066632.
Microcystis and Planktothrix are cyanobacterial genera that commonly proliferate in freshwater ecosystems. Their blooms are associated with threats to human and animal health because of the synthesis of natural products and cyanotoxins. These compounds are of great chemical diversity and of interest for biotechnological and pharmaceutical applications. We revisited the natural product potential of Microcystis and Planktothrix. Combining molecular biology, genomics and transcriptomics investigations, we characterized natural product gene clusters and studied their distribution, evolution and transcription. This work uncovered new distribution patterns, evolutionary events and unexpected expression patterns. These insights will enable new investigations and applications for cyanobacterial natural products.
El Khalkhali, Imad. "Système intégré pour la modélisation, l'échange et le partage des données de produits". Lyon, INSA, 2002. http://theses.insa-lyon.fr/publication/2002ISAL0052/these.pdf.
Texto completoIn Virtual Enterprise and Concurrent Engineering environments, a wide variety of information is used, and a crucial issue is data communication and exchange between heterogeneous systems and distant sites. To solve this problem, the STEP project was introduced. The STandard for the Exchange of Product model data (STEP) is an evolving international standard for the representation and exchange of product data, whose objective is to provide an unambiguous, computer-interpretable representation of product data in all phases of the product's lifecycle. In collaborative product development, experts from different disciplines are involved with the product (design, manufacturing, marketing, customers...). Each of these experts has their own viewpoint on the same product, and STEP models are unable to represent these viewpoints. The objective of our research work is to propose a methodology for representing and integrating the different experts' viewpoints in the design and manufacturing phases. An information infrastructure for modelling, exchanging and sharing product data models is also proposed
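To make the multi-viewpoint idea concrete, here is a minimal sketch of a shared product master carrying discipline-specific viewpoints. All class and field names are invented for illustration; this is not the thesis's STEP-based model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Viewpoint:
    discipline: str                      # e.g. "design", "manufacturing"
    attributes: Dict[str, object] = field(default_factory=dict)

@dataclass
class ProductModel:
    part_id: str
    name: str
    viewpoints: List[Viewpoint] = field(default_factory=list)

    def view(self, discipline: str) -> Dict[str, object]:
        """Shared identity merged with one expert's attributes."""
        merged: Dict[str, object] = {"part_id": self.part_id, "name": self.name}
        for vp in self.viewpoints:
            if vp.discipline == discipline:
                merged.update(vp.attributes)
        return merged

bracket = ProductModel("P-001", "mounting bracket", [
    Viewpoint("design", {"material": "Al 6061", "tolerance_mm": 0.05}),
    Viewpoint("manufacturing", {"process": "CNC milling", "batch_size": 200}),
])
print(bracket.view("manufacturing"))
```

Each expert queries the same master record but sees only the shared identity plus their own discipline's attributes, which is the gap in plain STEP models that the abstract points to.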
Jansen, van rensburg Bianca. "Sécurisation des données 3D par insertion de données cachées et par chiffrement pour l'industrie de la mode". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS044.
Texto completoOver the last few decades, 3D objects have become an essential part of everyday life, in both private and professional contexts. These 3D objects are often stored in the cloud and transferred over networks many times during their existence, where they are susceptible to malicious attacks. Therefore, 3D object security, such as encryption or data hiding, is essential. Encryption is used to protect the visual confidentiality of the 3D object's content; selective encryption schemes can also be used, where only part of a component, such as part of each vertex, is encrypted. Data hiding is generally used to protect the copyright or the authenticity of the 3D object. However, when a 3D object is encrypted, a third party such as a server may need to embed data in the confidential 3D object; in this case, data hiding in the encrypted domain is performed. In many applications, 3D objects consist of millions of vertices, so storing and sharing them online is expensive, time-consuming and not environmentally friendly; consequently, 3D object compression is essential. In this work, we present three contributions in different research areas. First, we present a new method to obtain a watermarked 3D object from high-capacity data hiding in the encrypted domain. Based on the homomorphic properties of the Paillier cryptosystem, the proposed method embeds several secret messages in the encrypted domain with high capacity; these messages can be extracted in the plaintext domain after the 3D object is decrypted. To the best of our knowledge, we are the first to propose a data hiding method in the encrypted domain where the high-capacity watermark is preserved in the plaintext domain after the 3D object is decrypted. Both the encryption and the data hiding in the encrypted domain are format-compliant and incur no size expansion, despite the use of the Paillier cryptosystem. Then, we present an evaluation metric for the visual security level of selectively encrypted 3D objects: we introduce a new dataset composed of evaluated selectively encrypted 3D objects, propose a model to determine the security parameters according to a desired security level, and detail our proposed 3DVS score, which measures the visual security level of selectively encrypted 3D objects. We also present a method to hierarchically decrypt an encrypted 3D object according to a generated ring of keys, a set of keys that allow stronger or weaker decryption of the encrypted 3D object. Each hierarchically decrypted 3D object has a different visual security level, where the 3D object is more or less visually accessible. This method is essential for preventing trade secrets from being leaked from within a company or by outside attackers; it is also more ecological and more secure than traditional selective encryption methods. Finally, we present joint security and compression methods based on Draco, Google's 3D object compression method, which is becoming the new industry standard: we integrate security steps into Draco, namely encryption, selective encryption and watermarking
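The embedding scheme rests on the additive homomorphism of the Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add a watermark payload without decrypting. A toy sketch with deliberately tiny primes, purely to show the homomorphism; real use requires large primes and a vetted cryptographic library.

```python
from math import gcd

# Toy Paillier parameters (pedagogical only).
p, q = 1789, 1867
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                       # common choice of generator

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # modular inverse, Python 3.8+

def enc(m: int, r: int) -> int:
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(41, 123), enc(1, 456)
# Multiplying ciphertexts adds plaintexts: Dec(c1 * c2) = 41 + 1 = 42.
assert dec((c1 * c2) % n2) == 42
print("homomorphic sum:", dec((c1 * c2) % n2))
```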
Toque, Carole. "Pour l'identification de modèles factoriels de séries temporelles : application aux ARMA stationnaires". Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00001966.
Texto completoHebert, Pierre-Alexandre. "Analyse de données sensorielles : une approche ordinale floue". Compiègne, 2004. http://www.theses.fr/2004COMP1542.
Texto completoSensory profile data describe the sensory perceptions of human subjects. Such data consist of scores attributed by trained sensory experts (or judges) to describe a set of products according to sensory descriptors; all assessments are repeated, usually three times. The thesis describes a new analysis method based on a fuzzy modelling of the scores. The first step of the method extracts and encodes the relevant information of each replicate into a fuzzy weak dominance relation. An aggregation procedure over the replicates then synthesizes the perception of each judge into a new fuzzy relation. In a similar way, a consensual relation is finally obtained for each descriptor by fusing the relations of the judges. To ensure the interpretability of the fused relations, fuzzy preference theory is used. A set of graphical tools is then proposed for the mono- and multidimensional analysis of the obtained relations
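A minimal numeric sketch of the two-level aggregation pipeline described above, with the arithmetic mean standing in for the fusion operator (the thesis relies on fuzzy preference machinery; the mean and all dimensions here are assumptions for illustration):

```python
import numpy as np

# Each replicate yields a fuzzy dominance relation over products,
# i.e. degrees in [0, 1] that product i dominates product j.
rng = np.random.default_rng(0)
n_judges, n_replicates, n_products = 5, 3, 4

scores = rng.random((n_judges, n_replicates, n_products, n_products))

per_judge = scores.mean(axis=1)        # synthesize the 3 replicates
consensus = per_judge.mean(axis=0)     # fuse the judges for one descriptor
np.fill_diagonal(consensus, 0.0)       # no self-dominance

print(np.round(consensus, 2))
```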
Lemaire, Sabrina. "Aide au choix des produits de construction sur la base de leurs performances environnementales et sanitaires". Lyon, INSA, 2006. http://theses.insa-lyon.fr/publication/2006ISAL0011/these.pdf.
Texto completoThis thesis develops a decision-aid tool that compares building products according to their environmental and health characteristics. The tool is intended for building actors and is based on the methodology and methods of multi-criteria analysis. The study is conducted at the scale of the building component, so that the comparison is carried out over the same technical functions. The tool uses data from EPDs in the French standard NF P01-010 format. It was applied to the "wall" component and to the comparison of six floorings. The results show that it is possible to produce a ranking of the building options for a component; this ranking may depend on the weighting and aggregation methods used and must be complemented by sensitivity analyses. The tool has strengths and weaknesses, and now needs to be tested by building actors
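As an illustration of the multi-criteria comparison, a minimal weighted-sum sketch over invented criteria, values and weights (real inputs would come from NF P01-010 EPDs; the ranking's sensitivity to the chosen weights is exactly the caveat noted above):

```python
# Toy weighted-sum comparison of flooring options. Criteria values and
# weights are invented; lower is better for all three criteria.
options = {
    "linoleum": {"gwp": 2.1, "voc": 0.3, "energy": 45.0},
    "pvc":      {"gwp": 3.4, "voc": 0.9, "energy": 60.0},
    "hardwood": {"gwp": 1.2, "voc": 0.2, "energy": 30.0},
}
weights = {"gwp": 0.5, "voc": 0.3, "energy": 0.2}  # must sum to 1

# Min-max normalisation per criterion.
lo = {c: min(o[c] for o in options.values()) for c in weights}
hi = {c: max(o[c] for o in options.values()) for c in weights}

def penalty(opt):
    return sum(w * (opt[c] - lo[c]) / (hi[c] - lo[c])
               for c, w in weights.items())

for name in sorted(options, key=lambda k: penalty(options[k])):
    print(f"{name:9s} penalty = {penalty(options[name]):.3f}")
```

Re-running with perturbed weights is the simplest form of the sensitivity analysis the abstract calls for: if the ordering survives the perturbation, the ranking is robust.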
Douziech, Patricia. "Les rétinoïdes : données actuelles et application des formes topiques au vieillissement cutané". Toulouse 3, 1995. http://www.theses.fr/1995TOU32069.
Texto completoGodot, Xavier. "Interactions Projet/Données lors de la conception de produits multi-technologiques en contexte collaboratif". Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0024/document.
Texto completoFrom an industrial point of view, product design answers firms' development needs. This activity requires much heterogeneous knowledge and many skills, which have to converge towards a common goal: describing a product that meets market needs. Consequently, there are many interactions between the firm, its market and the design activity, and a development project must take the specifications and constraints of each element into account. The goal of this PhD is to define a generic methodological framework for building and controlling a product design project according to the firm's development goals and its own resources. To this end, it is important to include technical factors (such as innovation, multi-technological products and the specificities of digital data) as well as economic and financial factors (such as a harsh competitive environment or limited financial resources). All these heterogeneous parameters call for a global approach to the problem, which is why a two-stage research approach is applied to build this framework. In the first stage, a conceptual diagram is designed using items coming from the company's goals, its market and the design activity. The interactions and behavior of all these items are deduced from this conceptual diagram. These results are formalized through a generic process, which is finally applied to several examples from SMEs working in the mechanical field
Grenet, Ingrid. "De l’utilisation des données publiques pour la prédiction de la toxicité des produits chimiques". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4050.
Texto completoCurrently, chemical safety assessment mostly relies on results obtained in in vivo studies performed on laboratory animals. However, these studies are costly in terms of time, money and animals used, and are therefore not suited to the evaluation of thousands of compounds. In order to rapidly screen compounds for their potential toxicity and prioritize them for further testing, alternative solutions are envisioned, such as in vitro assays and computational predictive models. The objective of this thesis is to evaluate how the public data from ToxCast and ToxRefDB allow the construction of such models to predict in vivo effects induced by compounds, based only on their chemical structure. To do so, after data pre-processing, we first focus on the prediction of in vitro bioactivity from chemical structure and then on the prediction of in vivo effects from in vitro bioactivity data. For the in vitro bioactivity prediction, we build and test various models based on descriptors of the compounds' chemical structure. Since the learning data are highly imbalanced in favor of non-toxic compounds, we test a data augmentation technique and show that it improves model performance. We also perform a large-scale study to predict hundreds of in vitro assays from ToxCast and show that the stacked generalization ensemble method leads to reliable models when used within their applicability domain. For the in vivo effects prediction, we evaluate the link between results from in vitro assays targeting pathways known to induce endocrine effects and in vivo effects observed in endocrine organs during long-term studies. We highlight that, unexpectedly, these assays are not predictive of the in vivo effects, which raises the crucial question of the relevance of in vitro assays. We thus hypothesize that the selection of assays able to predict in vivo effects should be based on complementary information, in particular mechanistic data
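The two modelling ingredients mentioned, class rebalancing and stacked generalization, can be sketched on synthetic data as follows. The feature set, base models and naive oversampling scheme are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for an imbalanced bioactivity dataset:
# 5% "active" compounds, 50 structural descriptors.
X, y = make_classification(n_samples=2000, n_features=50,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive augmentation: duplicate minority samples until classes balance.
minority = np.where(y_tr == 1)[0]
extra = np.random.default_rng(0).choice(
    minority, size=(y_tr == 0).sum() - minority.size)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

# Stacked generalization: a meta-learner combines base-model outputs.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_bal, y_bal)
print("held-out accuracy:", stack.score(X_te, y_te))
```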
Helbert, William. "Données sur la structure du grain d'amidon et des produits de recristallisation de l'amylose". Université Joseph Fourier (Grenoble ; 1971-2015), 1994. http://www.theses.fr/1994GRE10116.
Texto completoBahloul, Khaled. "Optimisation combinée des coûts de transport et de stockage dans un réseau logistique dyadique, multi-produits avec demande probabiliste". Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00695275.
Texto completoTrouvilliez, Benoît. "Similarités de données textuelles pour l'apprentissage de textes courts d'opinions et la recherche de produits". Thesis, Artois, 2013. http://www.theses.fr/2013ARTO0403/document.
Texto completoThis PhD thesis deals with establishing similarities between textual data in the customer relations domain. Two subjects are mainly considered: the automatic analysis of short messages written in response to satisfaction surveys, and the search for products matching criteria expressed in natural language by a human through a conversation with a program. The first subject concerns statistical information extracted from survey answers: the ideas expressed in the answers are identified, organized according to a taxonomy and quantified. The second subject concerns the transcription of product criteria into queries to be interpreted by a database management system; the range of criteria under consideration is broad, from simple criteria like material or brand to more complex criteria like color or price. The two subjects meet on the problem of establishing similarities between textual data using NLP techniques. The main difficulties come from the fact that the texts to be processed, written in natural language, are short and contain many spelling errors and negations. Establishing semantic similarities between words (synonymy, antonymy, ...) and syntactic relations between syntagms (conjunction, opposition, ...) are other issues considered in our work. We also study automatic clustering and classification methods in order to analyse answers to satisfaction surveys
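A standard baseline for the similarity problem described here is to compare TF-IDF vectors by cosine similarity; the snippet below is a generic illustration of that baseline, not the thesis's method (which must additionally handle spelling errors and negations).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Three short survey answers; the first two express the same idea.
answers = [
    "delivery was fast, very happy",
    "fast delivery, happy with the service",
    "the product broke after two days",
]
X = TfidfVectorizer().fit_transform(answers)
print(cosine_similarity(X).round(2))   # pairwise similarity matrix
```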
Fadhuile-Crépy, Adelaïde. "Concurrence et différenciation des produits sur le marché des pesticides : une analyse empirique sur données françaises". Thesis, Paris 2, 2014. http://www.theses.fr/2014PA020005.
Texto completoFollowing the “Grenelle de l’Environnement”, the French government committed to reducing pesticide use by 50% while maintaining current production levels. How can this objective be reached? Is the target sustainable? This thesis analyzes farmers' demand and its interaction with firms' supply. A disaggregated dataset is constructed to analyze the determinants of farmers' practices in relation to the characteristics of the products and of the firms that market them. The first chapter estimates a demand system assuming homogeneous products within categories of pesticides. It confirms that demand is not price-sensitive at the aggregate level, and shows that only a very high ad valorem tax would achieve the objective of the “Grenelle de l’Environnement”; this measure, however, would significantly reduce farmers' income. The thesis therefore considers simultaneous action on both supply and demand. First, a price index is constructed in the second chapter; it captures the technical and regulatory specificities of these products by exploiting the panel structure of the price series. Second, the adjusted price index is used in the third chapter, which adopts a structural econometrics framework to analyze the market equilibrium. Taking the structure of competition into account, we compute the firms' margins generated under different competitive conducts. These results are used to evaluate the effect of changes in the homologation process on margins. We find that this regulation, which sustains innovation, generated an important source of margin
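The "very high tax" finding can be made concrete with a back-of-the-envelope calculation under a constant-elasticity demand: an ad valorem tax at rate τ raises the consumer price from p to p(1+τ), so use halves when (1+τ)^ε = 1/2. The elasticity value below is assumed purely for illustration, not an estimate from the thesis.

```latex
\[
  q = A\,p^{\varepsilon},\quad \varepsilon < 0
  \qquad\Longrightarrow\qquad
  (1+\tau)^{\varepsilon} = \tfrac{1}{2}
  \;\Longleftrightarrow\;
  \tau = 2^{-1/\varepsilon} - 1 .
\]
% With an assumed weak elasticity $\varepsilon = -0.2$:
\[
  \tau = 2^{5} - 1 = 31
  \quad\text{(a 3100\% ad valorem tax)} .
\]
```

The weaker the price sensitivity, the more the required tax explodes, which is why an inelastic aggregate demand makes the Grenelle target hard to reach through taxation alone.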
SIMON, BIZOU CATHERINE y MICHEL BIZOU. "Mise en place d'une banque de données télématique sur la toxicité des produits domestiques : étude statistique et analytique des intoxications domestiques en 1987 en région Midi-Pyrénées". Toulouse 3, 1988. http://www.theses.fr/1988TOU31345.
Texto completo