Theses on the topic "Optimisation dirigée par les données"
Below are the top 50 dissertations (degree and doctoral theses) related to research on the topic "Optimisation dirigée par les données".
Bouarar, Selma. "Vers une conception logique et physique des bases de données avancées dirigée par la variabilité". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0024/document.
The evolution of computer technology has strongly impacted the database design process, which henceforth requires more time and resources to encompass the diversity of DB applications. Designers rely on their talent and knowledge, which have proven insufficient to face the increasing diversity of design choices, raising the problem of the reliability and completeness of this knowledge. This problem is well known as variability management in software engineering. While there exist some works on managing variability in the physical and conceptual phases, very few have focused on logical design. Moreover, these works address the design phases separately and thus ignore their interdependencies. In this thesis, we first present a methodology to manage the variability of the whole DB design process using the technique of software product lines, so that (i) interdependencies between design phases can be considered, (ii) a holistic vision is provided to the designer and (iii) process automation is increased. Given the scope of the study, we proceed step by step in implementing this vision, by studying a case that shows: (i) the importance of logical design variability, (ii) its impact on physical design (multi-phase management), (iii) the evaluation of logical design, and (iv) the impact of logical variability on the physical design (materialized view selection) in terms of non-functional requirements: execution time, energy consumption and storage space.
Djilani, Zouhir. "Donner une autre vie à vos besoins fonctionnels : une approche dirigée par l'entreposage et l'analyse en ligne". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0012/document.
Functional and non-functional requirements represent the first step in the design of any application, software or system. All the issues associated with requirements are analyzed in the Requirements Engineering (RE) field. The RE process consists of several steps: discovering, analyzing, validating and evolving the requirements related to the functionalities of the system. The RE community proposed a well-defined life-cycle for the requirements process that includes the following phases: elicitation, modeling, specification, validation and management. Once the requirements are validated, they are archived or stored in repositories within companies. With the continuous storage of requirements, companies accumulate an important amount of requirements information that needs to be analyzed in order to reproduce previous experiences and the acquired know-how, by reusing and exploiting these requirements for new projects. Proposing to these companies a warehouse in which all requirements are stored represents an excellent opportunity to analyze them for decision-making purposes. Recently, the Business Process Management (BPM) community expressed the same needs for processes. In this thesis, we want to exploit the success of data warehouses and replicate it for functional requirements. The issues encountered in the design of data warehouses are almost identical in the case of functional requirements. Requirements are often heterogeneous, especially in the case of large companies such as Airbus, where each partner has the freedom to use its own vocabulary and formalism to describe the requirements. To reduce this heterogeneity, using ontologies is necessary. In order to ensure the autonomy of each partner, we assume that each source has its own ontology. This requires matching efforts between ontologies to ensure the integration of functional requirements. An important feature related to the storage of requirements is that they are often expressed using semi-formal formalisms, such as UML use cases, with an important textual part. In order to stay as close as possible to our contributions in data warehousing, we proposed a pivot model factorizing three well-known semi-formalisms. This pivot model is used to define the multidimensional model of the requirements warehouse, which is then populated with the source requirements using an ETL (Extract, Transform, Load) algorithm. Using the reasoning mechanisms offered by ontologies and matching metrics, we cleaned up our requirements warehouse. Once the warehouse is deployed, it is exploited using OLAP analysis tools. Our methodology is supported by a tool covering all design phases of the requirements warehouse.
Loger, Benoit. "Modèles d’optimisation basés sur les données pour la planification des opérations dans les Supply Chain industrielles". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0389.
With the increasing complexity of supply chains, automated decision-support tools become necessary in order to apprehend the multiple sources of uncertainty that may impact them, while maintaining a high level of performance. To meet these objectives, managers rely more and more on approaches capable of improving the resilience of supply chains by proposing robust solutions that remain valid despite uncertainty, to guarantee both a quality of service and control of the costs induced by the production, storage and transportation of goods. As data collection and analysis become central to defining the strategy of companies, the proper use of this information to characterize more precisely these uncertainties and their impact on operations is becoming a major challenge for optimizing modern production and distribution systems. This thesis addresses these new challenges by developing different mathematical optimization methods based on historical data, with the aim of proposing robust solutions to several supply and production planning problems. To validate the practical relevance of these new techniques, numerical experiments on various applications compare them with several other classical approaches from the literature. The results obtained demonstrate the value of these contributions, which offer comparable average performance while reducing its variability in an uncertain context. In particular, the solutions remain satisfactory when confronted with extreme scenarios whose probability of occurrence is low. Finally, the computational time of the developed procedures remains competitive, making them suitable for industrial-scale applications.
Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020/document.
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource-usage cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill file sequentiality through a prediction algorithm. Secondly, this thesis deals with storage cost minimization of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters hosting the generated intermediate data dependencies of pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency and data size. Finally, a more generic problem is addressed in this thesis that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
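As an illustration of the kind of exact placement formulation mentioned in this abstract, the minimal sketch below assigns intermediate data items to datacenters while minimizing storage plus transfer cost. It is a hypothetical toy model: the item names, costs and the use of the PuLP solver are assumptions for illustration, not the thesis's formulation.

```python
# Hypothetical toy ILP: place intermediate data items on datacenters at minimum cost.
# Illustrative sketch only; it does not reproduce the thesis formulation.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

items = ["d1", "d2", "d3"]                  # intermediate data items
size = {"d1": 40, "d2": 10, "d3": 25}       # sizes in GB
dcs = ["dc_A", "dc_B"]                      # candidate datacenters
store_cost = {"dc_A": 0.02, "dc_B": 0.03}   # $/GB stored
transfer_cost = {("d1", "dc_B"): 0.5, ("d3", "dc_A"): 0.4}  # $/GB moved away from producer
capacity = {"dc_A": 50, "dc_B": 60}

prob = LpProblem("intermediate_data_placement", LpMinimize)
x = {(d, c): LpVariable(f"x_{d}_{c}", cat=LpBinary) for d in items for c in dcs}

# objective: storage cost + transfer cost when an item leaves its producer site
prob += lpSum(size[d] * (store_cost[c] + transfer_cost.get((d, c), 0.0)) * x[d, c]
              for d in items for c in dcs)

for d in items:                             # each item placed exactly once (unsplittable variant)
    prob += lpSum(x[d, c] for c in dcs) == 1
for c in dcs:                               # datacenter capacity
    prob += lpSum(size[d] * x[d, c] for d in items) <= capacity[c]

prob.solve()
print({d: next(c for c in dcs if x[d, c].value() == 1) for d in items})
```

The splittable variant mentioned in the abstract would correspond to relaxing the binary placement variables to fractions in [0, 1].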
Atigui, Faten. "Approche dirigée par les modèles pour l’implantation et la réduction d’entrepôts de données". Thesis, Toulouse 1, 2013. http://www.theses.fr/2013TOU10044/document.
Our work deals with decision support systems based on multidimensional Data Warehouses (DW). A Data Warehouse is a huge amount of data, often historical, used for complex and sophisticated analysis; it supports the business processes within an organization. The data relevant to the decision-making process are collected from data sources by means of software processes commonly known as ETL (Extraction-Transformation-Loading) processes. The study of existing systems and methods shows two major limits. Indeed, when building a DW, the designer deals with two major issues: the first concerns the DW's design, whereas the second addresses the design of the ETL processes. Current frameworks provide partial solutions that focus either on the multidimensional structure or on the ETL processes, yet both could benefit from each other; few studies have considered these issues in a unified framework and provided solutions to automate all of these tasks. From its creation, the DW contains a large amount of data, mainly due to historical data. Looking at decision makers' analyses over time, we can see that they are usually less interested in old data. To overcome these shortcomings, this thesis aims to formalize the development of a time-varying (with a temporal dimension) DW from its design to its physical implementation. We use Model Driven Engineering (MDE), which automates the process and thus significantly reduces development costs and improves software quality. The contributions of this thesis are summarized as follows: 1. To formalize and automate the development of a time-varying DW within a model-driven approach that provides: a set of unified (conceptual, logical and physical) metamodels describing data and transformation operations; an OCL (Object Constraint Language) extension that conceptually formalizes the transformation operations; a set of transformation rules mapping the conceptual model to logical and physical models; and a set of transformation rules generating the code. 2. To formalize and automate historical data reduction within a model-driven approach that provides: a set of (conceptual, logical and physical) metamodels describing the reduced data; a set of reduction operations; and a set of transformation rules implementing these operations at the physical level. In order to validate our proposals, we have developed a prototype composed of three parts: the first performs the transformation of models to lower-level models, the second transforms the physical model into code, and the last performs the DW reduction.
Mahéo, Yves. "Environnements pour la compilation dirigée par les données : supports d'exécution et expérimentations". Phd thesis, Université Rennes 1, 1995. http://tel.archives-ouvertes.fr/tel-00497580.
Maheo, Yves. "Environnement pour la compilation dirigée par les données : supports d'exécution et expérimentations". Rennes 1, 1995. http://www.theses.fr/1995REN10059.
Beneyton, Thomas. "Évolution dirigée et biopile enzymatique : étude de la laccase CotA et optimisation par évolution dirigée en microfluidique digitale". Strasbourg, 2011. https://publication-theses.unistra.fr/public/theses_doctorat/2011/BENEYTON_Thomas_2011.pdf.
Enzymatic biofuel cells have recently been developed to create miniature renewable electricity sources. However, this new technology is still limited in terms of power and lifetime compared to classical fuel cells. Although it has rarely been used so far, one strategy to improve these performances is to optimize the catalytic and stability properties of the enzymes. This PhD work describes the development of a droplet-based microfluidic platform for the directed evolution of the CotA laccase from Bacillus subtilis for enzymatic biofuel cell applications. This work demonstrates the possibility of using an extremophilic enzyme inside an enzymatic biofuel cell. The efficiency of CotA as a biocatalyst for O2 reduction has been evaluated for the first time by developing biocathodes and complete glucose/O2 biofuel cells. A droplet-based microfluidic high-throughput screening platform for CotA directed evolution has also been developed and validated. This platform allows the encapsulation of E. coli cells expressing the protein in aqueous droplets of a few picoliters, the incubation of the droplets, the addition of the substrate using picoinjection, and then the detection and sorting of CotA enzymatic activity at very high throughput (1 million clones in only 4 hours). The platform can be directly used for the screening of mutant libraries. Optimized selected mutants would lead to the creation of a new and more efficient generation of enzymatic biofuel cells. This universal droplet-based microfluidic screening platform is a very powerful tool for the directed evolution of proteins.
Ait, Brahim Amal. "Approche dirigée par les modèles pour l'implantation de bases de données massives sur des SGBD NoSQL". Thesis, Toulouse 1, 2018. http://www.theses.fr/2018TOU10025/document.
The English abstract was not provided by the author.
Rahmoun, Smail. "Optimisation multi-objectifs d'architectures par composition de transformation de modèles". Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0004.
In this thesis, we propose a new exploration approach to tackle design space exploration problems involving multiple conflicting non-functional properties. More precisely, we propose the use of model transformation compositions to automate the production of architectural alternatives, and multi-objective evolutionary algorithms to identify near-optimal architectural alternatives. Model transformation alternatives are mapped into evolutionary algorithms and combined with genetic operators such as mutation and crossover. Taking advantage of this contribution, we can (re)use different model transformations and thus solve different multi-objective optimization problems. In addition, model transformations can be chained together in order to ease their maintainability and reusability, and thus conceive more detailed and complex systems.
Martinez, Medina Lourdes. "Optimisation des requêtes distribuées par apprentissage". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM015.
Distributed data systems are becoming increasingly complex. They interconnect devices (e.g. smartphones, tablets, etc.) that are heterogeneous, autonomous, either static or mobile, and with physical limitations. Such devices run applications (e.g. virtual games, social networks, etc.) for the online interaction of users producing/consuming data on demand or continuously. The characteristics of these systems add new dimensions to the query optimization problem, such as multiple optimization criteria, scarce information on data and the lack of a global system view, among others. Traditional query optimization techniques focus on semi-autonomous (or not at all autonomous) systems: they rely on information about data and make strong assumptions about the system behavior, and most of them are centered on the optimization of execution time only. The difficulty of evaluating queries efficiently in today's applications motivates this work to revisit traditional query optimization techniques. This thesis faces these challenges by adapting the Case Based Reasoning (CBR) paradigm to query processing, providing a way to optimize queries when there is no prior knowledge of data. It focuses on optimizing queries using cases generated from the evaluation of similar past queries. A query case comprises: (i) the query, (ii) the query plan and (iii) the measures (computational resources consumed) of the query plan. The thesis also concerns the way the CBR process interacts with the query plan generation process. This process uses classical heuristics and makes decisions randomly (e.g. when there are no statistics for join ordering and selection of algorithms, routing protocols). It also (re)uses cases (existing query plans) for similar query parts, improving query optimization and therefore evaluation efficiency. The propositions of this thesis have been validated within the CoBRa optimizer developed in the context of the UBIQUEST project.
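The case structure described above (query, plan, measures) lends itself to a simple retrieval step. The following minimal sketch, with a hypothetical feature encoding and similarity weights that are not taken from the thesis or the CoBRa optimizer, only illustrates how a past plan could be reused for the most similar query.

```python
# Illustrative sketch of case-based plan reuse (hypothetical encoding, not the CoBRa implementation).
from dataclasses import dataclass

@dataclass
class QueryCase:
    features: dict          # e.g. {"n_joins": 2, "n_predicates": 3, "tables": {"user", "msg"}}
    plan: str               # the plan that was chosen for this query
    cost: float             # measured resources (e.g. execution time in ms)

def similarity(f1, f2):
    # crude weighted similarity: structural counts + overlap of referenced tables
    num = 1.0 / (1.0 + abs(f1["n_joins"] - f2["n_joins"]) + abs(f1["n_predicates"] - f2["n_predicates"]))
    jac = len(f1["tables"] & f2["tables"]) / max(1, len(f1["tables"] | f2["tables"]))
    return 0.5 * num + 0.5 * jac

def retrieve(case_base, new_features):
    # return the plan of the most similar past case, preferring the cheaper one on ties
    best = max(case_base, key=lambda c: (similarity(c.features, new_features), -c.cost))
    return best.plan

case_base = [
    QueryCase({"n_joins": 2, "n_predicates": 3, "tables": {"user", "msg"}}, "hash_join(user,msg)", 120.0),
    QueryCase({"n_joins": 1, "n_predicates": 1, "tables": {"user"}}, "index_scan(user)", 15.0),
]
print(retrieve(case_base, {"n_joins": 2, "n_predicates": 2, "tables": {"user", "msg"}}))
```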
Bradai, Benazouz. "Optimisation des Lois de Commande d’Éclairage Automobile par Fusion de Données". Mulhouse, 2007. http://www.theses.fr/2007MULH0863.
Night-time driving with conventional headlamps is particularly unsafe: although we drive much less at night, more than half of driving fatalities occur during this period. To reduce these figures, several automotive manufacturers and suppliers participated in the European project "Adaptive Front lighting System" (AFS). This project aims to define new lighting functions based on adapting the beam to the driving situation, and was to end in 2008 with a change in the regulation of automotive lighting allowing the realisation of all the new AFS functions. To this end, the partners explore the possible realisation of such new lighting functions and study their relevance and efficiency according to the driving situation, but also the dangers associated with using, for these lighting functions, information from the vehicle or from the environment. Since 2003, some vehicles have been equipped with bending lights, which take into account only the driver's actions on the steering wheel. These solutions improve visibility by directing the beam towards the inside of the bend. However, since the road profile (intersections, bends, etc.) is not always known to the driver, the performance of these solutions is limited. Embedded navigation systems, on the other hand, can contain information about this road profile as well as contextual information (engineering works, road type, curve radius, speed limits, etc.). This thesis aims to optimize lighting control laws based on the fusion of navigation system information with that of embedded vehicle sensors (cameras, etc.), taking their efficiency and reliability into account. This information fusion, applied here to decision making, makes it possible to define driving situations and contexts of the environment in which the vehicle evolves (motorway, city, etc.) and to choose the appropriate law among the various lighting control laws developed (code motorway lighting, town lighting, bending light). This approach makes it possible to choose in real time, and by anticipation, between these various lighting control laws, and consequently improves the robustness of the lighting system. Two points are at the origin of this improvement. First, using the navigation system information, we developed a virtual sensor for event-based electronic horizon analysis allowing an accurate determination of the various driving situations; it uses a finite state machine and thus makes it possible to mitigate the problems due to the punctual nature of navigation system information. Second, we developed a generic virtual sensor for determining driving situations based on evidence theory, using a navigation system and vision. This sensor combines confidences coming from the two sources to better distinguish between the various driving situations and contexts and to mitigate the problems of the two sources taken independently. It also makes it possible to build a confidence measure for the navigation system using some of its criteria. This generic sensor can be generalized to driver assistance systems (ADAS) other than lighting, which was shown by applying it to a speed limit detection system, SLS (Speed Limit Support). The two virtual sensors developed were applied to the optimization of the lighting system (AFS) and to the SLS system. These two systems were implemented on an experimental (demonstration) vehicle and are currently operational. They were evaluated by various types of drivers ranging from non-experts to experts, and were also shown to car manufacturers (PSA, Audi, Renault, Honda, etc.) and during different tech days. They proved their reliability during these demonstrations on open roads with various driving situations and contexts.
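Evidence-theory fusion of the kind mentioned here is usually based on Dempster's rule of combination. The short sketch below combines two hypothetical confidence assignments (navigation vs. camera) over driving contexts; the numbers and frame of discernment are invented for illustration and are not taken from the thesis.

```python
# Dempster's rule of combination on a small frame of discernment {motorway, city}.
# Mass assignments are hypothetical; frozenset() keys denote subsets of the frame.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # normalize by the non-conflicting mass (1 - K)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

MOTORWAY, CITY = frozenset({"motorway"}), frozenset({"city"})
EITHER = MOTORWAY | CITY   # ignorance: mass on the whole frame

m_nav = {MOTORWAY: 0.6, CITY: 0.1, EITHER: 0.3}   # navigation-system source
m_cam = {MOTORWAY: 0.5, CITY: 0.2, EITHER: 0.3}   # camera/vision source
print(combine(m_nav, m_cam))   # fused belief masses over driving contexts
```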
Fankam, Nguemkam Chimène. "OntoDB2 : un système flexible et efficient de base de données à base ontologique pour le web sémantique et les données techniques". Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aéronautique, 2009. https://tel.archives-ouvertes.fr/tel-00452533.
The need to represent the semantics of data in various scientific fields (medicine, geography, engineering, etc.) has resulted in the definition of data referring to ontologies, also called ontology-based data. With the proliferation of domain ontologies and the increasing volume of data to handle, the need has emerged to define systems capable of managing large volumes of ontology-based data. Such systems are called Ontology-Based DataBase (OBDB) management systems. The main limitations of existing OBDB systems are (1) their rigidity, (2) their lack of support for non-standard data (spatial, temporal, etc.) and (3) their lack of effectiveness in managing large data volumes. In this thesis, we propose a new OBDB called OntoDB2, allowing (1) the support of ontologies based on different ontology models, (2) the extension of its model to meet specific application requirements, and (3) an original management of ontology-based data facilitating scalability. OntoDB2 is based on the existence of a kernel ontology and on model-based techniques enabling a flexible extension of this kernel. We propose to represent only canonical data by transforming, under certain conditions, any given non-canonical data into its canonical representation. We propose to use the ontology query language (1) to access the non-canonical data thereby transformed and (2) to index and pre-calculate the reasoning operations by using the mechanisms of the underlying DBMS.
Mahboubi, Hadj. "Optimisation de la performance des entrepôts de données XML par fragmentation et répartition". Phd thesis, Université Lumière - Lyon II, 2008. http://tel.archives-ouvertes.fr/tel-00350301.
To achieve this objective, we propose in this dissertation to jointly address these limitations through fragmentation and then through distribution over a data grid. To this end, we first focused on the fragmentation of XML data warehouses and proposed methods that are, to our knowledge, the first contributions in this field. These methods exploit an XQuery query workload to derive a derived horizontal fragmentation schema.
We first proposed the adaptation of the most efficient techniques from the relational domain to XML data warehouses, and then an original fragmentation method based on the k-means clustering technique, which allowed us to control the number of fragments. We finally proposed an approach for distributing an XML data warehouse over a grid. These proposals led us to define a reference model for XML data warehouses that unifies and extends the models existing in the literature.
We finally chose to validate our methods experimentally. To this end, we designed and developed a benchmark for XML data warehouses: XWeB. The experimental results we obtained show that we achieved our objective of controlling the volume of XML data and the processing time of complex decision-support queries. They also show that our k-means-based fragmentation method provides a higher gain than that obtained by classical derived horizontal fragmentation methods, both in terms of performance and of algorithm overhead.
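A derived-horizontal-fragmentation step driven by k-means, as described above, can be pictured as clustering the selection predicates of the query workload. The sketch below is a purely illustrative approximation using scikit-learn, with an invented predicate encoding; it is not the thesis's actual algorithm.

```python
# Illustrative sketch: cluster a query workload to derive horizontal fragments.
# The encoding of predicates as vectors is hypothetical; it only mimics the idea of
# grouping queries that touch similar data so that each cluster yields one fragment.
import numpy as np
from sklearn.cluster import KMeans

# each row encodes one XQuery of the workload: which dimension predicates it uses
#                 [year=2007, year=2008, country=FR, country=DE, product=books]
workload = np.array([
    [1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],
])

k = 2  # desired number of fragments (the k-means formulation lets us fix this)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(workload)

fragments = {c: np.where(labels == c)[0].tolist() for c in range(k)}
print(fragments)   # queries grouped per fragment, e.g. {0: [0, 1], 1: [2, 3]}
```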
Lu, Yanping. "Optimisation par essaim de particules application au clustering des données de grandes dimensions". Thèse, Université de Sherbrooke, 2009. http://savoirs.usherbrooke.ca/handle/11143/5112.
Testo completoMenet, Ludovic. "Formalisation d'une approche d'Ingénierie Dirigée par les Modèles appliquée au domaine de la gestion des données de référence". Paris 8, 2010. http://www.theses.fr/2010PA083184.
Our research work addresses the problem of data model definition in the framework of Master Data Management. Model Driven Engineering (MDE) is a rapidly expanding theme in both the academic and the industrial world. It brings an important change in the conception of applications, taking into account the durability of know-how and productivity gains, and taking advantage of platform benefits without suffering from their side effects. The MDE architecture is based on the transformation of models, starting from business models independent of any platform, to arrive at a technical solution on a chosen platform. In this thesis, the MDE approach is applied, conceptually and technically, to the definition of pivot data models, which are the basis of Master Data Management (MDM). We use the Unified Modeling Language (UML) as a formalism to describe the platform-independent aspects (business model), and we propose a meta-model, in the form of a UML profile, to describe the platform-dependent aspects of the MDM platform. We then present our approach for moving from a business model to a platform model in order to generate the physical pivot model. The contributions of the thesis are: the study of an MDE approach in the MDM context, the definition of UML transformations towards an MDM model (based on an XML Schema structure), and a new aspect of MDE applied to MDM, namely the definition of a method for incremental model validation allowing the optimization of validation stages during model conception.
Dupuis, Sophie. "Optimisation automatique des chemins de données arithmétiques par l’utilisation des systèmes de numération redondants". Paris 6, 2009. http://www.theses.fr/2009PA066131.
Testo completoPianelo, Laurent. "Modélisation géologique contrainte par les données sismiques et dynamiques". Aix-Marseille 1, 2001. http://www.theses.fr/2001AIX11042.
Testo completoPierrillas, Philippe. "Optimisation du développement clinique de nouveaux anticancéreux par modélisation de données pharmacocinétiques et pharmacodynamiques précliniques". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1047.
Improvement of drug development is a very challenging question, even more so in the field of oncology, where the need for new medicines is crucial. In addition, the rate of approval for anticancer drugs after entry into phase I clinical trials has been reported as one of the lowest of all therapeutic areas. This process therefore has to be improved, and the use of new approaches filling the gap between preclinical and clinical settings by anticipating human pharmacokinetics and efficacy could be an interesting solution. This work focuses on building strategies based on mathematical modeling of in vivo and in vitro preclinical data to anticipate, in humans, the behavior of a new Bcl-2 inhibitor developed by Servier laboratories, in order to support clinical development. The project was elaborated in several steps. First, a semi-mechanistic relationship was established in mice to describe the mechanism of action of the compound. A PK extrapolation strategy using PBPK modeling was then performed to anticipate human concentration-time profiles. PD extrapolation strategies based on different assumptions were proposed to predict human efficacy and the doses to be tested in clinical trials. The predictions obtained were subsequently compared to clinical results from a first-in-human study, confirming the usefulness of such approaches and the superiority of mechanism-based strategies over more empirical approaches. This project therefore highlights the great interest of elaborating interspecies translational approaches during drug development and could promote their use to accelerate the development of new entities, decreasing the risks of failure and the financial costs.
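As background to the kind of PK extrapolation mentioned here, the snippet below integrates a simple one-compartment model with first-order absorption; the parameter values are invented for illustration and bear no relation to the compound studied in the thesis.

```python
# Minimal one-compartment PK model with first-order absorption (illustrative parameters only).
import numpy as np
from scipy.integrate import odeint

ka, CL, V = 1.0, 5.0, 40.0      # absorption rate (1/h), clearance (L/h), volume (L) -- hypothetical
dose = 100.0                     # mg, oral bolus into the gut compartment

def pk(y, t):
    a_gut, a_central = y
    return [-ka * a_gut,
            ka * a_gut - (CL / V) * a_central]

t = np.linspace(0, 24, 97)                        # hours
amounts = odeint(pk, [dose, 0.0], t)
conc = amounts[:, 1] / V                          # plasma concentration (mg/L)
print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {t[conc.argmax()]:.1f} h")
```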
Ghilardi, Jean-Pierre. "Optimisation de la représentation de graphes par approche hybride déterministe et stochastique". Aix-Marseille 3, 2002. http://www.theses.fr/2002AIX30032.
In the field of bibliometrics, we frequently have to process databases which constitute a quantitative information corpus that is difficult to interpret by direct reading. That is why tools based on complex mathematical treatments have been created, with which structured data banks can be processed to obtain relevant information for decision makers. The Centre de Recherche Rétrospective de Marseille has long specialized in information processing, and automatic tools based on the geometrical representation of relationships between entities have been developed there. During this research, an innovative data processing chain was defined to automatically produce an organized, easily understandable representation of graphs. The processing chain is based on two different approaches: a deterministic approach derived from graph theory, and a stochastic approach composed of a simulated annealing algorithm and a genetic algorithm, which make graph reading easier.
Oudart, David. "Application de l'ingénierie dirigée par les modèles à la conception de Smart Grids : approche par cosimulation avec FMI". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS002.
Smart Grids are cyber-physical systems that interface power grids with information and communication technologies to monitor them, automate decision making and balance production with consumption. We want to use simulation to easily evaluate and compare several solutions before deployment in a real environment. The objective of this thesis is thus to propose tools and methods to model and simulate a Smart Grid in an industrial context. We have identified two main issues: how to combine heterogeneous models of a Smart Grid to simulate it, and how to ensure consistency between the models produced by different stakeholders during the design of a Smart Grid. To address these issues, we propose a cosimulation approach using the Functional Mockup Interface (FMI) standard. Our first two contributions are the proposal of a method to allow the exchange of discrete signals between several FMUs, and an extension of the OMNeT++ telecommunications simulation software implementing this method, called fmi4omnetpp. A third contribution is the development of the Smart Grid Simulation Framework tooled environment, which automates a number of repetitive tasks in order to ensure consistency between different simulation models. Finally, a fourth contribution is the formalization of an iterative design approach for the cosimulation of a Smart Grid, and of how our Smart Grid Simulation Framework integrates into it. To do so, we explain the different steps of the approach and the role of the actors involved in the design process, and then present its application to a real case study in which we use our Smart Grid Simulation Framework.
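To picture what an FMI-style co-simulation master does, the sketch below steps two mock components and exchanges their outputs at fixed communication points. It is a schematic illustration only: the classes are invented stand-ins, not the FMI API nor the fmi4omnetpp code.

```python
# Schematic co-simulation master loop: two mock "FMU-like" units exchanging values
# at fixed communication points. This only illustrates the principle, not the FMI API.
class Battery:
    def __init__(self):
        self.soc = 0.5                           # state of charge
    def do_step(self, power_setpoint, h):
        self.soc = min(1.0, max(0.0, self.soc + power_setpoint * h / 10.0))
        return self.soc

class Controller:
    def do_step(self, soc, h):
        return 1.0 if soc < 0.8 else 0.0         # charge until 80%

battery, controller = Battery(), Controller()
h, t_end = 0.25, 3.0                             # communication step size and horizon
setpoint, t = 0.0, 0.0
while t < t_end:
    soc = battery.do_step(setpoint, h)           # advance each unit by one step,
    setpoint = controller.do_step(soc, h)        # then exchange signals
    t += h
    print(f"t={t:4.2f}  soc={soc:.3f}  setpoint={setpoint}")
```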
Brottier, Erwan. "Acquisition et analyse des exigences pour le développement logiciel : une approche dirigée par les modèles". Phd thesis, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00512174.
Testo completoAhmed, Ahmed. "Utilisation de l'ingénierie dirigée par les modèles pour l'agrégation continue de données hétérogènes : application à la supervision de réseaux de gaz". Thesis, Paris, ENSAM, 2018. http://www.theses.fr/2018ENAM0049/document.
Over the last decade, information technology and industrial infrastructures have evolved from monolithic systems to heterogeneous, autonomous and widely distributed systems. Most systems cannot coexist while completely isolated and need to share their data in order to increase business productivity. In fact, we are moving towards larger complex systems where millions of systems and applications need to be integrated. Thus, the requirement for an inexpensive and fast interoperability solution becomes an essential need. The existing solutions today impose standards or middleware to handle this issue. However, these solutions are not sufficient and often require specific ad hoc developments. This work therefore proposes the study and development of a generic, modular, agnostic and extensible interoperability architecture based on modeling principles and software engineering aspects. It aims to promote interoperability and data exchange between heterogeneous systems in real time without requiring systems to comply with specific standards or technologies. The industrial use cases for this work take place in the context of the French gas distribution network. The theoretical and empirical validation of our proposal corroborates the assumption that interoperability between heterogeneous systems can be achieved by using the aspects of separation of concerns and model-driven engineering. The cost and time needed to achieve interoperability are also reduced by promoting reusability and extensibility.
Menou, Edern. "Conception d’alliages par optimisation combinatoire multiobjectifs : thermodynamique prédictive, fouille de données, algorithmes génétiques et analyse décisionnelle". Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4011/document.
The present work revolves around the development of an integrated system combining a multi-objective genetic algorithm with CALPHAD-type computational thermodynamics (calculation of phase diagrams) and data mining techniques enabling the estimation of thermochemical and thermomechanical properties of multicomponent alloys. This integration allows the quasi-autonomous chemistry optimisation of complex alloys against antagonistic criteria such as mechanical and chemical resistance, high-temperature microstructural stability, and cost. Further alloy selection capability is provided by a multi-criteria decision analysis technique. The proposed design methodology is illustrated on two multicomponent alloy families. The first case study relates to the design of wrought, polycrystalline γ′-hardened nickel-base superalloys intended for aerospace turbine disks or tubing applications in the energy industry. The optimisation leads to the discovery of novel superalloys featuring lower cost and higher predicted strength than Inconel 740H and Haynes 282, two state-of-the-art superalloys. The second case study concerns the so-called "high-entropy alloys", whose singular metallurgy embodies typical combinatorial issues. Following the optimisation, several high-entropy alloys are produced; preliminary experimental characterisation highlights attractive properties such as an unprecedented hardness-to-density ratio.
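The multi-objective selection step in such a genetic algorithm rests on Pareto dominance. The minimal sketch below extracts the non-dominated front from a set of candidate alloys scored on two invented objectives (cost and a strength proxy); it is purely an illustration of the mechanism, not of the thesis's actual workflow.

```python
# Pareto-front extraction for a bi-objective minimisation problem (illustrative data only).
# Objectives per candidate: (cost, -strength) -- both to be minimised.
candidates = {
    "alloy_A": (12.0, -950.0),
    "alloy_B": (15.0, -1100.0),
    "alloy_C": (14.0, -900.0),   # dominated by alloy_A (more costly, weaker)
    "alloy_D": (10.0, -800.0),
}

def dominates(u, v):
    # u dominates v if it is no worse in every objective and strictly better in at least one
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

pareto_front = [name for name, obj in candidates.items()
                if not any(dominates(other, obj) for o_name, other in candidates.items()
                           if o_name != name)]
print(pareto_front)   # e.g. ['alloy_A', 'alloy_B', 'alloy_D']
```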
Dehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : base de données à base ontologique et le modèle OntoDB". Poitiers, 2007. http://www.theses.fr/2007POIT2270.
An Ontology-Based DataBase (OBDB) is a database which stores both data and the ontologies that define the data's meaning. In this thesis, we propose a new architecture model for OBDBs, called OntoDB. This model has two main original features. First, as in usual databases, each stored entity is associated with a logical schema which defines the structure of all its instances; our approach thus makes it possible to add an ontology to an existing database for semantic indexation of its content. Second, the meta-model of the ontology model is also represented in the same database, which makes it possible to support change and evolution of ontology models. The OntoDB model has been validated by a prototype. Performance evaluation of this prototype has shown that our approach allows managing very large data and supports scalability much better than previously proposed approaches.
Shahzad, Muhammad Atif. "Une approche hybride de simulation-optimisation basée sur la fouille de données pour les problèmes d'ordonnancement". Nantes, 2011. http://archive.bu.univ-nantes.fr/pollux/show.action?id=53c8638a-977a-4b85-8c12-6dc88d92f372.
A data-mining-based approach to discover previously unknown priority dispatching rules for the job shop scheduling problem is presented. This approach is based on seeking the knowledge that is assumed to be embedded in the efficient solutions provided by an optimization module built using tabu search. The objective is to discover the scheduling concepts using data mining and hence to obtain a set of rules capable of approximating the efficient solutions for a job shop scheduling problem (JSSP). A data-mining-based scheduling framework is presented and implemented for a job shop problem with maximum lateness and mean tardiness as the scheduling objectives. The results obtained are very promising.
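The idea of mining dispatching rules from good schedules can be illustrated by training a classifier to predict, from job attributes, which job a near-optimal schedule picked next. The sketch below uses scikit-learn on a tiny invented dataset and is not the framework developed in the thesis.

```python
# Illustrative sketch: learn a dispatching rule from decisions extracted from good schedules.
# Each row describes a candidate job at a decision point; the label says whether the
# (tabu-search) schedule actually dispatched it next. All values are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [processing_time, due_date_slack, remaining_work]
X = [
    [5, 2, 20], [3, 10, 15], [8, 1, 30], [2, 15, 5],
    [6, 3, 25], [4, 12, 10], [7, 0, 28], [1, 20, 4],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = job was selected next in the efficient schedule

rule = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(rule, feature_names=["p_time", "slack", "rem_work"]))

# the learned tree then acts as a dispatching rule for unseen decision points
print(rule.predict([[6, 2, 22]]))   # -> likely 1: small slack, long remaining work
```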
Hnayno, Mohamad. "Optimisation des performances énergétiques des centres de données : du composant au bâtiment". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS021.
Data centers consume vast amounts of electrical energy to power their IT equipment, cooling systems and supporting infrastructure. This high energy consumption contributes to the overall demand on the electrical grid and to the release of greenhouse gas emissions. By optimizing energy performance, data centers can reduce their electricity bills, their overall operating costs and their environmental impact. This includes implementing energy-efficient technologies, improving cooling systems and adopting efficient power management practices. Adopting new cooling solutions, such as liquid cooling and indirect evaporative cooling, offers higher energy efficiency and can significantly reduce the cooling-related energy consumption in data centres. In this work, two experimental investigations of new cooling topologies for information technology racks are conducted. In the first topology, the rack-cooling system is based on a combination of close-coupled cooling and direct-to-chip cooling. Five racks with operational servers were tested, and two temperature differences (15 K and 20 K) were validated for all the IT racks. The impact of these temperature-difference profiles on data-centre performance was analysed using three heat rejection systems under four climatic conditions for a 600 kW data centre. The impact of the water temperature profile on the partial power usage effectiveness and the water usage effectiveness of the data centre was analysed to optimise the indirect free-cooling system equipped with an evaporative cooling system, through two approaches: the rack temperature difference and an increase of the water inlet temperature of the data centre. In the second topology, an experimental investigation of a new single-phase immersion/liquid-cooling technique is conducted. The experimental setup tested the impact of three dielectric fluids, the effect of the water circuit configuration, and the server power/profile. Results suggest that the system cooling demand depends on the fluid's viscosity: as the viscosity increased from 4.6 to 9.8 mPa.s, the cooling performance decreased by approximately 6 %. Moreover, all the IT server profiles were validated at various water inlet temperatures up to 45°C and various flow rates. The energy performance of this technique and of the previous one was compared: this technique showed a reduction in the DC electrical power consumption of at least 20.7 % compared to the liquid-cooling system. The cooling performance of the air- and liquid-cooled systems and of the proposed solution was compared computationally at the server level. When using the proposed solution, the energy consumed per server was reduced by at least 20 % compared with the air-cooling system and by 7 % compared with the liquid-cooling system. In addition, a new liquid-cooling technology for 600 kW Uninterruptible Power Supply (UPS) units is presented. This cooling architecture gives more opportunities to use free cooling as the main and sole cooling system for optimal data centres (DCs). Five thermal-hydraulic tests were conducted under different thermal conditions. A 20 K temperature-difference profile was validated with safe operation for all UPS electronic equipment, resulting in a thermal efficiency of 82.27 %. The impact of decreasing the water flow rate and increasing the water and air room temperatures was also analysed. A decrease in inlet water and air temperatures from 41°C to 32°C and from 47°C to 40°C respectively increases the thermal efficiency by 8.64 %. Furthermore, an energy performance analysis comparison is made between air-cooled and water-cooled UPS units at both the UPS and infrastructure levels.
Troudi, Molka. "Optimisation du paramètre de lissage pour l'estimateur à noyau par des algorithmes itératifs : application à des données réelles". Télécom Bretagne, 2009. http://www.theses.fr/2009TELB0088.
Testo completoAit, Oubelli Lynda. "Transformations sémantiques pour l'évolution des modèles de données". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0040.
When developing a complex system, data models are the key to a successful engineering process, because they contain and organize all the information manipulated by the different functions involved in system design. The fact that data models evolve throughout the design raises problems of maintenance of the data already produced. Our work addresses the issue of evolving data models in a model-driven engineering (MDE) environment. We focus on minimizing the impact of data model evolution on the system development process, in the specific area of space engineering. In the space industry, model-driven engineering is a key approach for modeling data exchange with satellites. When preparing a space mission, the associated data models are often updated and must be compared from one version to another; as the changes accumulate, it becomes difficult to follow them. New methods and techniques to understand and represent the differences and commonalities between different versions of a model are therefore essential. Recent research deals with the evolution process between the two architectural layers (M2/M1) of MDE. In this thesis, we have explored the use of the (M1/M0) layers of the same architecture to define a set of complex operators, and their composition, that encapsulate both the evolution of the data model and the data migration. The use of these operators improves the quality of results when migrating data, ensuring the complete preservation of the information contained in the data. In the first part of this thesis, we focused on how to deal with structural differences during the evolution process. The proposed approach is based on the detection of differences and the construction of evolution operators. We then studied the performance of the model-based approach on two space missions, named PHARAO and MICROSCOPE. Next, we presented a semantic, observational approach to deal with the evolution of data models at the M1 level. The main interest of the proposed approach is the transposition of the problem of accessibility of the information in a data model into a problem of paths in a labeled directed graph (LDG). The approach proved able to capture all the evolutions of a data model in a list of logical operators instead of a non-exhaustive list of evolution operators. It is generic because, regardless of the type of the input data model, if the data model is correctly interpreted into an LDG and then projected onto a set of LTS, we can check the preservation of the information.
Vo, Xuan Thanh. "Apprentissage avec la parcimonie et sur des données incertaines par la programmation DC et DCA". Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0193.
In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems involving sparsity and robust optimization under data uncertainty. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful tools in optimization. This thesis is composed of two parts: the first part concerns sparsity, while the second part deals with uncertainty. In the first part, a unified DC approximation approach to optimization problems involving the zero-norm in the objective is thoroughly studied from both theoretical and computational aspects. We consider a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, and develop general DCA schemes that cover all standard algorithms in the field. Next, the thesis turns to the nonnegative matrix factorization (NMF) problem. We investigate the structure of the considered problem and provide appropriate DCA-based algorithms. To enhance the performance of NMF, sparse NMF formulations are proposed. Continuing this topic, we study the dictionary learning problem, where sparse representation plays a crucial role. In the second part, we exploit robust optimization techniques to deal with data uncertainty for two important problems in machine learning: feature selection in linear Support Vector Machines and clustering. In this context, each individual data point is uncertain but varies within a bounded uncertainty set. Different models (box/spherical/ellipsoidal) of the uncertain data are studied. DCA-based algorithms are developed to solve the robust problems.
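To make the DCA scheme concrete, here is a minimal sketch for sparse least squares with a capped-l1 surrogate of the zero-norm. The decomposition follows a standard DC split, but the data, parameters and inner solver are invented for illustration and are not taken from the thesis.

```python
# Minimal DCA sketch for sparse least squares with a capped-l1 penalty
#   min_x 0.5*||Ax - b||^2 + lam * sum_i min(|x_i|, theta)
# DC split: g(x) = 0.5*||Ax-b||^2 + lam*||x||_1    (convex)
#           h(x) = lam * sum_i max(|x_i| - theta, 0)  (convex)
# DCA: take y in the subdifferential of h at x, then minimise g(x) - <y, x>
# (a lasso-type subproblem, solved here by a few proximal-gradient steps).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.05 * rng.normal(size=50)

lam, theta = 0.5, 0.2
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L for the smooth part

def soft(v, t):                                   # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(20)
for _ in range(30):                               # outer DCA iterations
    y = lam * np.sign(x) * (np.abs(x) > theta)    # subgradient of h at x
    for _ in range(200):                          # inner proximal-gradient (ISTA) steps
        grad = A.T @ (A @ x - b) - y
        x = soft(x - step * grad, step * lam)

print(np.round(x, 2))                             # sparse estimate close to x_true
```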
Moalla, Néjib. "Amélioration de la qualité des données du produit dans le contexte du cycle de vie d’un vaccin : une approche d’interopérabilité dirigée par les modèles". Lyon 2, 2007. http://theses.univ-lyon2.fr/sdx/theses/lyon2/2007/moalla_n.
To reach industrial excellence, data quality is one of the essential pillars to handle in any improvement or optimization approach. Data quality is thus a paramount need to ensure that the product meets the customer requirements. In the pharmaceutical industry and, more particularly, in the vaccine industry, the definition of the vaccine product is very complex considering its molecular structure. Data quality proves to be a priority given the many product definitions (biological, pharmaceutical, industrial, etc.) and especially in the face of the many restrictions and regulatory recommendations imposed by customers such as health authorities. In this context, and faced with the multitude of business activities supported by disconnected information systems, ensuring interoperability between these heterogeneous systems makes it possible to handle the specifications of the various business scopes during information exchanges. The deployment of a model-driven architecture enables the transformation of a functional description of processes into data models expressed on various platforms. In the logistics perimeter of the vaccine industry, we are interested in ensuring the quality of some critical data in our ERP by deploying the concepts of model-driven interoperability. The definition of various levels of reference frames enables us to structure the models thus generated and to share them with the actors of the logistics perimeter. In the long run, our approach aims at reducing the cost of the product.
Fawaz, Yaser. "Composition et exécution contextualisées de services pour des environnements pervasifs : une approche dirigée par les données : "application à l'adaptation et au transfert de contenus"". Lyon, INSA, 2010. http://theses.insa-lyon.fr/publication/2010ISAL0037/these.pdf.
Infrastructure-less pervasive computing environments such as mobile ad hoc networks (MANETs) raise new challenges for the execution of data-driven applications. In this thesis, we propose a new middleware called ConAMi (Context-Aware service composition and execution Middleware) that allows devices in a MANET to collaborate with one another in order to execute data-driven applications in an efficient and reliable way. The main challenge addressed in this thesis is the determination of the optimal service composition, since several service compositions can offer the same functionality to execute a task flow. This challenge is tackled through the development of an algorithm that organizes services in what we call a service composition tree. The main criterion considered to determine the optimal service composition is the overall execution time, which includes the data transfer time and the execution time of the services. The execution of the task flow can easily fail because of the mobility of the devices involved in MANETs. To ensure a reliable execution of the task flow, the "Time-To-Leave" (TTL) of a service is considered when determining the optimal service composition. Nevertheless, the TTL cannot guarantee the absence of failures, because it is based on an estimation. In addition, the execution of the task flow can also fail because of other types of failures. Consequently, the ConAMi middleware includes original failure detection and recovery mechanisms. We developed a prototype to implement the ConAMi middleware and evaluate its performance. The experimental results show that the ConAMi middleware performs better than similar approaches. ConAMi guarantees efficiency, reliability and load balancing across devices.
Terret, Catherine. "Optimisation de la chimiothérapie du cancer colorectal métastatique par 5-FU et CPT-11 : données de pharmacocinétique, de chimiosensibilité". Toulouse 3, 2000. http://www.theses.fr/2000TOU30166.
Chen, Xiao. "Contrôle et optimisation de la perception humaine sur les vêtements virtuels par évaluation sensorielle et apprentissage de données expérimentales". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10019/document.
Under exacerbated worldwide competition, the mass customization or personalization of products is now becoming an important strategy for companies to enhance the perceived value of their products. However, the current online customization experiences are not fully satisfying for consumers, because the choices are mostly limited to colors and motifs. The sensory aspects of products, particularly the material's appearance and hand as well as the garment fit, are barely addressed. In this PhD research project, we have proposed a new collaborative design platform. It permits merchants, designers and consumers to have a new experience during the development of highly valued personalized garments without extra industrial costs. The construction of this platform consists of several parts. First, we selected, through a sensory experiment, an appropriate 3D garment CAD software in terms of rendering quality. Then we proposed an active-learning-based experimental design in order to find the most appropriate values of the fabric technical parameters, permitting to minimize the overall perceptual difference between real and virtual fabrics in static and dynamic scenarios. Afterwards, we quantitatively characterized human perception of virtual garments by using a number of normalized sensory descriptors. These descriptors involve not only the appearance and the hand of the fabric but also the garment fit. The corresponding sensory data have been collected through two sensory experiments respectively. By learning from the experimental data, two models have been established. The first model characterizes the relationship between the appearance and hand perception of virtual fabrics and the corresponding technical parameters that constitute the inputs of the 3D garment CAD software. The second model concerns the relationship between virtual garment fit perception and the pattern design parameters. These two models constitute the main components of the collaborative design platform. Using this platform, we have realized a number of garments meeting consumers' personalized perceptual requirements.
Azzabi, Zouraq Brahim. "Optimisation du procédé de contrôle non destructif par thermographie inductive pour des applications du domaine nucléaire". Thesis, Nantes, 2019. http://www.theses.fr/2019NANT4023/document.
Testo completoThe work of this thesis deals with an innovative non-destructive testing (NDT) technique and its adaptation to the civil nuclear field, relying on numerical tools for this purpose. An exhaustive presentation of the numerical models adapted to our problem is given first. These tools are then implemented and their performance compared. This made it possible to set up a fast numerical tool capable of taking into account different modeling constraints such as circuit coupling, the modeling of regions with a pronounced skin effect, as well as the modeling of thin defects. This was followed by an experimental validation of its performance. Once the tool was implemented and validated, it was exploited as part of a reliability assessment based on a MAPOD (Model-Assisted Probability Of Detection) approach. In this context, an entire system for drawing the input data and managing the output data was established. The result is a reliable and fast software tool dedicated to evaluating the sensitivity of the thermo-inductive technique
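As an illustration of the MAPOD idea mentioned in this abstract, the following sketch draws uncertain input parameters, pushes them through a hypothetical fast forward model of the thermo-inductive response, and estimates a probability of detection per defect depth; the model, parameter values and threshold are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def thermal_signal(depth_mm, lift_off_mm, conductivity):
    """Hypothetical surrogate of the thermo-inductive response (arbitrary units)."""
    return 10.0 * depth_mm / (1.0 + lift_off_mm) * conductivity

threshold = 2.0                      # assumed detection threshold on the signal
depths = np.linspace(0.1, 2.0, 20)   # defect depths to evaluate
pod = []
for d in depths:
    lift_off = np.clip(rng.normal(0.5, 0.1, 5000), 0.0, None)   # uncertain probe lift-off
    conductivity = rng.normal(1.0, 0.05, 5000)                   # uncertain material property
    signals = thermal_signal(d, lift_off, conductivity)
    pod.append(np.mean(signals > threshold))                     # fraction of draws detected

for d, p in zip(depths, pod):
    print(f"depth = {d:.2f} mm -> POD = {p:.2f}")
```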
Hodique, Yann. "Sûreté et optimisation par les systèmes de types en contexte ouvert et contraint". Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-15.pdf.
Testo completoVo, Xuan Thanh. "Apprentissage avec la parcimonie et sur des données incertaines par la programmation DC et DCA". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0193/document.
Testo completoIn this thesis, we focus on developing optimization approaches for solving some classes of problems in sparse optimization and in robust optimization under data uncertainty. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful tools in optimization. This thesis is composed of two parts: the first part deals with sparsity while the second deals with uncertainty. In the first part, a unified DC approximation approach to optimization problems involving the zero-norm in the objective is thoroughly studied from both theoretical and computational viewpoints. We consider a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, and develop general DCA schemes that cover all standard algorithms in the field. Next, the thesis turns to the nonnegative matrix factorization (NMF) problem. We investigate the structure of the considered problem and provide appropriate DCA-based algorithms. To enhance the performance of NMF, sparse NMF formulations are proposed. Continuing this topic, we study the dictionary learning problem, where sparse representation plays a crucial role. In the second part, we exploit robust optimization techniques to deal with data uncertainty for two important problems in machine learning: feature selection in linear Support Vector Machines and clustering. In this context, each individual data point is uncertain but varies within a bounded uncertainty set. Different models (box/spherical/ellipsoidal) of the uncertain data are studied. DCA-based algorithms are developed to solve the robust problems
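The following is a hedged sketch of one standard DCA scheme for a capped-l1 approximation of the zero-norm, i.e. the kind of sparse optimization the abstract describes; the DC decomposition min(|t|, theta) = |t| - max(|t| - theta, 0) and the inner proximal-gradient solver are illustrative choices, not the thesis' exact algorithms.

```python
import numpy as np

def ista(A, b, lin, lam, x0, steps=200):
    """Solve min 0.5||Ax-b||^2 + lam*||x||_1 - <lin, x> by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(steps):
        grad = A.T @ (A @ x - b) - lin
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return x

def dca_capped_l1(A, b, lam=0.1, theta=0.05, outer=20):
    """DCA iterations for f = g - h with g = 0.5||Ax-b||^2 + lam||x||_1
    and h(x) = lam * sum(max(|x_i| - theta, 0))."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        y = lam * np.sign(x) * (np.abs(x) > theta)   # subgradient of h at current x
        x = ista(A, b, y, lam, x)                    # convex subproblem: min g(x) - <y, x>
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.flatnonzero(np.abs(dca_capped_l1(A, b)) > 1e-3))   # recovered support
```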
Nemer, Fadia. "Optimisation de l'estimation du WCET par analyse inter-tâche du cache d'intructions". Toulouse 3, 2008. http://thesesups.ups-tlse.fr/188/.
Testo completoThe main characteristic of hard real-time systems is that they must guarantee a correct timing behaviour. Schedulability analysis methods are commonly used in hard real-time systems to check whether or not all task deadlines will be met. Most of them rely on the knowledge of an upper bound on the computation time of every task, named the WCET (Worst-Case Execution Time). The WCET of a program can be computed by simulation or by performing a static analysis. Dynamic analyses give the actual WCET of a program only if we can simulate all possible combinations of input data values and initial system states, which is clearly impractical due to the exponential number of simulations required. As a result, we compute an estimate of the actual WCET by performing a static analysis of the program, despite the pessimism generated by the approximations. Most of these analyses are performed at the task level, so they do not take advantage of the features of multi-tasking real-time systems, such as task chaining, which directly affects the accuracy of the WCET estimation. We propose an approach that studies the instruction cache behavior of a static task schedule for a single-processor multi-tasking real-time application, assuming that no preemption is allowed between or inside the tasks. The main goal is to replace the conservative approximations, which consider an empty or undefined cache state before the execution of a task, by an abstract cache state; the WCET estimation is thus improved. We also present a free real-time benchmark, PapaBench. This benchmark is designed to be valuable for experimental work in WCET computation and may also be useful for scheduling analysis
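To illustrate the inter-task idea described above, here is a deliberately simplified sketch (direct-mapped cache, must-analysis) in which the abstract cache state at a task's entry is obtained by joining the outgoing states of its possible predecessors in the static schedule instead of assuming an empty or unknown cache; it is an assumption-laden illustration, not the thesis' analysis.

```python
from typing import Dict, List, Optional

NUM_LINES = 4  # hypothetical cache size in lines

AbstractCache = Dict[int, Optional[str]]  # cache line -> block guaranteed present (or None)

def empty_state() -> AbstractCache:
    return {line: None for line in range(NUM_LINES)}

def access(state: AbstractCache, block: str, address: int) -> AbstractCache:
    """After executing an instruction block, its cache line surely contains it."""
    new = dict(state)
    new[address % NUM_LINES] = block
    return new

def join(states: List[AbstractCache]) -> AbstractCache:
    """Must-join: a block is guaranteed only if all predecessor states agree on it."""
    joined = empty_state()
    for line in range(NUM_LINES):
        contents = {s[line] for s in states}
        joined[line] = contents.pop() if len(contents) == 1 else None
    return joined

# Outgoing abstract states of two tasks that may precede task C in the static schedule:
out_A = access(access(empty_state(), "blk0", 0), "blk5", 1)
out_B = access(access(empty_state(), "blk0", 0), "blk9", 1)
in_C = join([out_A, out_B])   # blk0 is a guaranteed hit at C's entry, line 1 stays unknown
print(in_C)
```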
Varin, Thibault. "Développement, évaluation et utilisation de méthodes de fouille de données (classifications, pharmacophores, motifs émergents et modéles par homologie de séquence) pour le screening virtuel : application aux ligands 5-HT". Caen, 2009. http://www.theses.fr/2009CAEN4056.
Testo completoOur laboratory has been developing for many years a serotoninergic chemolibrary (ATBI program). This chemolibrary contains more than 1500 compounds tested against the most recently discovered receptors: 5-HT4R, 5-HT5R, 5-HT6R and 5-HT7R. We report here several studies carried out in the context of the analysis of the ATBI datasets. After a brief introduction, we present the most important biological aspects of the serotoninergic system (chapter II). In chapter III, we deal with the evaluation and determination of optimal clustering protocols for our internal chemolibrary. Because obtaining a good clustering classification for two of our ATBI datasets (5-HT6 and 5-HT7) proved difficult, and in order to understand the reasons, we developed a new method to extract 2D topological pharmacophores using emerging patterns (chapter IV) and built homology models to study the binding mode of 5-HT6 ligands (chapter V). Finally, we show how a single amino acid (F7.38) can explain the interspecies (human/rat) ligand selectivity of 5-HT7Rs using homology modelling and site-directed mutagenesis (chapter V)
Le, Beux Sébastien. "Un flot de conception pour applications de traitement du signal systématique implémentées sur FPGA à base d'Ingénierie Dirigée par les Modèles". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2007. http://tel.archives-ouvertes.fr/tel-00322195.
Testo completoThe second contribution is the development of a compilation flow that transforms an application modeled at a high level of abstraction (UML) into an RTL model. Depending on the available area constraints (FPGA technology), the design flow optimizes loop unrolling and task placement. The produced VHDL code is directly simulable and synthesizable on FPGA. From applications modeled in UML, we thus automatically produce VHDL code.
The proposed design flow was successfully used in the context of automotive safety: an obstacle detection algorithm was automatically generated from its UML specification.
Labatut, Patrick. "Partition de complexes guidés par les données pour la reconstruction de surface". Phd thesis, Université Paris-Diderot - Paris VII, 2009. http://tel.archives-ouvertes.fr/tel-00844020.
Testo completoHaddon, Antoine. "Mathematical Modeling and Optimization for Biogas Production". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS047.
Testo completoAnaerobic digestion is a biological process in which organic compounds are degraded by different microbial populations into biogas (carbon dioxide and methane), which can be used as a renewable energy source. This thesis works towards developing control strategies and bioreactor designs that maximize biogas production. The first part studies, in several directions, the optimal control problem of maximizing biogas production in a chemostat. We consider the single-reaction model, with the dilution rate as the controlled variable. For the finite horizon problem, we study feedback controllers similar to those used in practice, consisting in driving the reactor towards a given substrate level and maintaining it there. Our approach relies on establishing bounds on the unknown value function by considering different rewards for which the optimal solution has an explicit, time-independent optimal feedback. In particular, this technique provides explicit bounds on the sub-optimality of the studied controllers for a broad class of substrate- and biomass-dependent growth rate functions. With numerical simulations, we show that the choice of the best feedback depends on the time horizon and the initial condition. Next, we consider the problem over an infinite horizon, for averaged and discounted rewards. We show that, when the discount rate goes to 0, the value function of the discounted problem converges and that the limit is equal to the value function for the averaged reward. We identify a set of optimal solutions for the limit and averaged problems as the controls that drive the system towards a state that maximizes the biogas flow rate on a special invariant set. We then return to the problem over a fixed finite horizon and, with the Pontryagin Maximum Principle, we show that the optimal control has a bang-singular arc structure. We construct a one-parameter family of extremal controls that depend on the constant value of the Hamiltonian. Using the Hamilton-Jacobi-Bellman equation, we identify the optimal control as the extremal associated with the value of the Hamiltonian that satisfies a fixed point equation. We then propose a numerical algorithm to compute the optimal control by solving this fixed point equation. We illustrate this method with the two major types of growth functions, Monod and Haldane. In the second part, we investigate the impact of mixing the reacting medium on biogas production. For this, we introduce a model of a pilot-scale upflow fixed-bed bioreactor that offers a representation of spatial features. This model takes advantage of the reactor geometry to reduce the spatial dimension of the section containing the fixed bed; in the other sections, we consider the 3D steady-state Navier-Stokes equations for the fluid dynamics. To represent the biological activity, we use a two-step model and, for the substrates, advection-diffusion-reaction equations. We only consider the biomasses that are attached in the fixed-bed section and we model their growth with a density-dependent function. We show that this model can reproduce the spatial gradient of experimental data and helps to better understand the internal dynamics of the reactor. In particular, numerical simulations indicate that with less mixing the reactor is more efficient, removing more organic matter and producing more biogas
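A minimal simulation sketch of the chemostat setting described above, with Monod and Haldane growth functions and a simple substrate-level feedback on the dilution rate of the kind studied in the first part; the parameter values and the bang-type feedback are assumptions for illustration, not the thesis' optimal synthesis.

```python
import numpy as np

def monod(s, mu_max=1.0, K=1.0):
    return mu_max * s / (K + s)

def haldane(s, mu_max=1.5, K=1.0, K_i=4.0):
    return mu_max * s / (K + s + s**2 / K_i)

def simulate(growth, s_ref=0.8, s_in=5.0, d_max=1.2, T=30.0, dt=0.01):
    """Euler simulation of ds/dt = D(s_in - s) - mu(s) x, dx/dt = (mu(s) - D) x."""
    s, x, biogas = 2.0, 0.5, 0.0
    for _ in range(int(T / dt)):
        d = d_max if s < s_ref else 0.0   # simple feedback driving s towards s_ref
        mu = growth(s)
        ds = d * (s_in - s) - mu * x
        dx = (mu - d) * x
        s, x = s + dt * ds, x + dt * dx
        biogas += dt * mu * x             # biogas flow assumed proportional to mu(s)*x
    return biogas

print("Monod  :", round(simulate(monod), 3))
print("Haldane:", round(simulate(haldane), 3))
```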
Jaisson, Pascal. "Systèmes complexes gouvernés par des flux : schémas de volumes finis hybrides et optimisation numérique". Phd thesis, Ecole Centrale Paris, 2006. http://tel.archives-ouvertes.fr/tel-00468203.
Testo completoBelghiti, Moulay Tayeb. "Modélisation et techniques d'optimisation en bio-informatique et fouille de données". Thesis, Rouen, INSA, 2008. http://www.theses.fr/2008ISAM0002.
Testo completoThis Ph.D. thesis is mainly devoted to two types of problems: clustering and multiple sequence alignment. Our objective is to solve these global problems efficiently and to test the DC programming approach and DCA on real datasets. The thesis is divided into three parts. The first part is devoted to new approaches in nonconvex and global optimization; it presents an in-depth study of the tools used throughout this thesis, namely DC programming and the DC Algorithm (DCA). In the second part, we model the clustering problem as three nonconvex subproblems. The first two subproblems differ in the choice of the norm used (clustering via the L1 and L2 norms), while the third uses the kernel method (kernel-based clustering). The third part is devoted to bioinformatics and focuses on the modeling and resolution of two subproblems: multiple sequence alignment and RNA sequence alignment. All chapters except the first end with numerical tests
Glitia, Calin. "Optimisation des applications de traitement systématique intensives sur Systems-on-Chip". Electronic Thesis or Diss., Lille 1, 2009. http://www.theses.fr/2009LIL10070.
Testo completoIntensive signal processing applications appear in many application domains such as video processing or detection systems. These applications handle multidimensional data structures (mainly arrays) to deal with the various dimensions of the data (space, time, frequency). A specification language allowing the direct manipulation of these different dimensions at a high level of abstraction is key to handling the complexity of these applications and to benefiting from their massive potential parallelism. The Array-OL specification language is designed to do just that. In this thesis, we introduce an extension of Array-OL to express cycle dependences by way of uniform inter-repetition dependences. We show that this specification language is able to express the main computation patterns of the intensive signal processing domain. We also discuss the repetitive modeling of parallel applications, repetitive architectures and uniform mappings of the former onto the latter, using the Array-OL concepts integrated into the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile. High-level data-parallel transformations are available to adapt the application to its execution context, allowing the designer to choose the granularity of the flows and to express the mapping simply by tagging each repetition with its execution mode: data-parallel or sequential. The whole set of transformations was reviewed, extended and implemented as part of the Gaspard2 co-design environment for embedded systems. With the introduction of uniform dependences into the specification, our interest also turns to the interaction between these dependences and the high-level transformations. This is essential in order to enable the use of the refactoring tools on models with uniform dependences. Based on the high-level refactoring tools, strategies and heuristics can be designed to help explore the design space. We propose a strategy that finds good trade-offs in the use of storage and computation resources and in the exploitation of parallelism (both task and data parallelism); this strategy is illustrated on an industrial radar application
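As a toy illustration of the granularity choice mentioned above (and not of Array-OL itself), the snippet below executes the same repetition space either element by element or after a tiling transformation that groups elements into blocks; the array shape, tile size and elementary task are invented for the example.

```python
import numpy as np

data = np.arange(24).reshape(6, 4)   # a small multidimensional array to process

def process(tile):
    return tile.sum()                 # hypothetical elementary task applied to each repetition

# Fine granularity: one repetition per element (maximal data parallelism).
fine = np.array([[process(data[i:i + 1, j:j + 1]) for j in range(4)] for i in range(6)])

# Coarser granularity after a tiling transformation: one repetition per 3x2 block.
coarse = np.array([[process(data[i:i + 3, j:j + 2]) for j in range(0, 4, 2)]
                   for i in range(0, 6, 3)])

print(fine.shape, coarse.shape)       # (6, 4) repetitions vs (2, 2) repetitions
```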
Dine, Abdelhamid. "Localisation et cartographie simultanées par optimisation de graphe sur architectures hétérogènes pour l’embarqué". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS303/document.
Testo completoSimultaneous Localization And Mapping (SLAM) is the process that allows a robot to build a map of an unknown environment while simultaneously determining its position on this map. In this work, we are interested in the graph-based SLAM method, which uses a graph to represent and solve the SLAM problem. Graph optimization consists in finding the graph configuration (trajectory and map) that best matches the constraints introduced by the sensor measurements. Graph optimization is characterized by a high computational complexity that requires large computational and memory resources, particularly when exploring large areas. This limits the use of graph-based SLAM in real-time embedded systems. This thesis contributes to reducing the computational complexity of graph-based SLAM. Our approach is based on two complementary axes: data representation in memory and implementation on embedded heterogeneous architectures. On the first axis, we propose an incremental data structure to efficiently represent and then optimize the graph. On the second axis, we explore the use of recent heterogeneous architectures to speed up graph-based SLAM. We propose an efficient implementation model for embedded applications and highlight the advantages and disadvantages of the evaluated architectures, namely GPU-based and FPGA-based Systems-on-Chip
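The graph optimization step described above can be illustrated on a toy one-dimensional pose graph, where Gauss-Newton collapses to a single weighted least-squares solve; the edges and weights below are invented and the code is not the thesis implementation.

```python
import numpy as np

# Edges: (i, j, measured displacement x_j - x_i, information weight)
edges = [
    (0, 1, 1.0, 1.0),    # odometry
    (1, 2, 1.1, 1.0),    # odometry (slightly biased)
    (2, 3, 1.0, 1.0),    # odometry
    (0, 3, 2.9, 10.0),   # loop-closure constraint, trusted more
]
n = 4

# Build the weighted linear system J x ~ z, with pose 0 fixed as the gauge.
rows, rhs = [], []
for i, j, z, w in edges:
    r = np.zeros(n)
    r[j], r[i] = 1.0, -1.0
    rows.append(np.sqrt(w) * r)
    rhs.append(np.sqrt(w) * z)
rows.append(np.eye(n)[0])    # prior fixing x_0 = 0
rhs.append(0.0)

x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
print(np.round(x, 3))        # trajectory reconciling odometry with the loop closure
```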
Gilardet, Mathieu. "Étude d'algorithmes de restauration d'images sismiques par optimisation de forme non linéaire et application à la reconstruction sédimentaire". Phd thesis, Université de Pau et des Pays de l'Adour, 2013. http://tel.archives-ouvertes.fr/tel-00952964.
Testo completoMosnier, David. "Optimisation robuste multi-critères des pneumatiques en préconception". Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2011. http://www.theses.fr/2011ECDL0028.
Testo completoA global framework has been introduced in order to deal with pre-design steps. The proposed workflow has been applied to the design of tire dimensions to fit new vehicle specifications. Indeed, industrial design lead times have been progressively reduced in order to bring new products to market faster. At the same time, environmental regulations become more and more stringent, and the importance given to energy efficiency during the design phases of new products increases from day to day. Therefore, the design process has to be reconsidered in order to be able to quickly design products that must fulfil new specifications compared to their former versions. The proposed framework uses a genetic algorithm in order to solve multi-objective optimization problems. Since model calls are computationally expensive, it has been necessary to use surrogate modelling in order to speed up objective evaluations. Data mining tools using self-organizing maps have then been deployed in order to cluster solutions. This provides the designer with a limited number of solution typologies, which are easier to apprehend. Finally, uncertainties have been considered during optimization through the addition of constraints. The proposed framework has been employed for tire dimension optimization while designing a new vehicle. The design parameters and the objective functions to be taken into account have been introduced before developing the corresponding models. Furthermore, surrogate modelling is integrated into the optimization process in order to accelerate the evaluation of objective functions while keeping a good accuracy. The automatic clustering workflow is thus a real design support tool. A last step of constrained optimization can be carried out so as to refine the proposed solutions. Numerous problems with diverse specifications have been solved with the multi-criteria optimization framework presented. It has been shown that this framework is also able to deal with uncertainties while ensuring a limited influence on the performance of the optimal solutions. Several examples have been treated as a validation: tire dimensions have been designed for several specifications belonging to different vehicles with diverse properties. Indeed, tire dimensions have been proposed for a vintage vehicle as well as for electric, hybrid or downsized vehicles, which should emerge in the upcoming decade
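As a hedged, single-objective miniature of the surrogate-in-the-loop idea described above (the thesis handles multi-objective tire criteria with a genetic algorithm and self-organizing maps), the sketch below fits a cheap quadratic surrogate on a few expensive model calls and evolves candidates against it; the toy objective and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):
    """Stand-in for a costly simulation (e.g., a tire performance criterion)."""
    return (x[:, 0] - 0.3) ** 2 + 2.0 * (x[:, 1] - 0.7) ** 2

def quad_features(x):
    """Quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])

# 1. Sample a small design of experiments and fit the quadratic surrogate.
X = rng.uniform(0, 1, size=(30, 2))
y = expensive_model(X)
coeffs, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
surrogate = lambda x: quad_features(x) @ coeffs

# 2. Evolve candidates against the surrogate (mutation-only evolutionary loop),
#    so the expensive model is only re-evaluated on the final best design.
pop = rng.uniform(0, 1, size=(50, 2))
for _ in range(20):
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
    both = np.vstack([pop, children])
    pop = both[np.argsort(surrogate(both))][:50]

best = pop[0]
print("best design:", np.round(best, 3),
      "true objective:", expensive_model(best[None]).round(4))
```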