Dissertations on the topic "Optimisation sur modèles de substitution"
Consult the top 50 dissertations for research on the topic "Optimisation sur modèles de substitution".
Cordonnier, Laurie. "Optimisation des traitements de substitution aux opiacés par action conjointe sur les systèmes dopaminergique et opioïde chez la souris." Paris 5, 2006. http://www.theses.fr/2006PA05P615.
Amisulpride (a D2/D3 dopamine receptor antagonist) and RB101 (a mixed inhibitor of enkephalin catabolism) were combined in this study to constitute a novel approach to heroin substitution treatment. After a behavioral study in non-dependent mice, which revealed a potentiation of RB101-induced effects following chronic treatment with amisulpride, the underlying neurochemical mechanisms were investigated using preproenkephalin gene knockout mice and an in situ hybridization technique. The role of enkephalins in the control of emotional responses through their action on opioid receptors was then studied. Finally, the amisulpride-RB101 combination was used to block the expression of morphine-induced behavioral sensitization. These results were compared with the classical substitution treatments, buprenorphine and methadone.
Chamaret, Damien. "Plate-forme de réalité virtuelle pour l'étude de l'accessibilité et de l'extraction de lampes sur prototype virtuel automobile." Phd thesis, Université d'Angers, 2010. http://tel.archives-ouvertes.fr/tel-00540899.
Berbecea, Alexandru. "Approches multi-niveaux pour la conception systémique optimale des chaînes de traction ferroviaire." Phd thesis, Ecole Centrale de Lille, 2012. http://tel.archives-ouvertes.fr/tel-00917657.
Baudoui, Vincent. "Optimisation robuste multiobjectifs par modèles de substitution." Phd thesis, Toulouse, ISAE, 2012. http://tel.archives-ouvertes.fr/tel-00742023.
Saves, Paul. "High dimensional multidisciplinary design optimization for eco-design aircraft." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0002.
In recent years there has been significant and growing interest in improving the efficiency of vehicle design processes through the development of tools and techniques in the field of multidisciplinary design optimization (MDO). Indeed, when optimizing both aerodynamics and structures, one needs to consider the effect of the aerodynamic shape variables and the structural sizing variables on the weight, which also affects the fuel consumption. MDO arises as a powerful tool that can perform this trade-off automatically. The objective of this Ph.D. project is to propose an efficient approach for solving an aero-structural wing optimization problem at the conceptual design level. The latter is formulated as a constrained optimization problem that involves a large number of design variables (typically 700). The targeted optimization approach is based on sequential enrichment (typically efficient global optimization (EGO)) using an adaptive surrogate model. Kriging surrogate models are among the most widely used in engineering problems to substitute for time-consuming high-fidelity models. EGO is a heuristic method designed for the solution of global optimization problems that has performed well in terms of the quality of the computed solutions. However, like any other method for global optimization, EGO suffers from the curse of dimensionality: its performance is satisfactory on lower-dimensional problems but deteriorates as the dimensionality of the optimization search space increases. For realistic aircraft wing design problems, the typical number of design variables exceeds 700, so trying to solve the problem directly with EGO is ruled out. In practical test cases, high dimensional MDO problems may possess a lower intrinsic dimensionality, which can be exploited for optimization. In this context, a feature mapping can be used to map the original high dimensional design variables onto a sufficiently small design space. Most existing approaches in the literature use a random linear mapping to reduce the dimension; sometimes active learning is used to build this linear embedding. Generalizations to non-linear subspaces have also been proposed using the so-called variational autoencoder. For instance, a composition of Gaussian processes (GPs), referred to as a deep GP, can be very useful. In this Ph.D. thesis, we investigate efficient parameterization tools to significantly reduce the number of design variables by using active learning techniques. An extension of the method could also be proposed to handle mixed continuous and categorical inputs, building on previous work on low dimensional problems. Practical implementations within the OpenMDAO framework (an open-source MDO framework developed by NASA) are expected.
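As a concrete illustration of the sequential-enrichment idea, here is a minimal EGO loop on a toy 1-D function, assuming a fixed-hyperparameter RBF kriging surrogate and the expected-improvement criterion; all names and settings are illustrative, not the thesis's models or its OpenMDAO implementation:

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Kriging (GP) posterior mean and standard deviation at Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: E[max(y_best - Y, 0)].
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + x**2 - 0.7 * x      # stand-in for the expensive model
X = np.array([0.0, 0.5, 1.0]); y = f(X)
grid = np.linspace(-1.0, 1.5, 500)
for _ in range(10):                               # sequential enrichment (EGO)
    mu, sigma = gp_posterior(X, y, grid)
    x_new = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_new), np.append(y, f(x_new))
print("best x, f(x):", X[np.argmin(y)], y.min())
```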
Liu, Zhen. "Modèles d'exécutions parallèles sur des systèmes multiprocesseurs : analyse et optimisation." Paris 11, 1989. http://www.theses.fr/1989PA112011.
The main concerns of this thesis are the modeling, analysis and optimization problems arising in multiprocessor systems with concurrent tasks. Multiprocessor systems are modeled as a set of processors connected by an interconnection network, and parallel programs as directed acyclic graphs. Both exact and approximate methods are proposed for various parallel processing models. Performance measures such as program response time, system throughput and stability conditions are analyzed. Scheduling algorithms that minimize the makespan are also considered, and new heuristics are provided together with simple illustrative examples. Besides these theoretical studies, the performance evaluation software package SPEC (Software Package for Performance Evaluation of Concurrent Systems), designed and implemented by the author, is described concisely. This software package contains analytical and simulation tools.
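To make the scheduling part concrete, a minimal greedy list-scheduling heuristic for DAG makespan on identical processors might look as follows; this is an illustrative sketch, not the SPEC package or the thesis's actual heuristics:

```python
import heapq

def list_schedule(tasks, deps, durations, n_procs):
    """Greedy list scheduling of a DAG: assign each ready task to the
    processor that becomes free first. deps maps a task to the set of
    its predecessors; durations maps a task to its execution time."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succ[p].append(t)
    finish = {}
    procs = [(0.0, i) for i in range(n_procs)]   # (free time, proc id)
    heapq.heapify(procs)
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        t = ready.pop(0)
        free, i = heapq.heappop(procs)
        # A task starts when its processor is free and all predecessors done.
        start = max([free] + [finish[p] for p in deps.get(t, ())])
        finish[t] = start + durations[t]
        heapq.heappush(procs, (finish[t], i))
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())   # makespan

deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
dur = {"a": 2, "b": 3, "c": 1, "d": 2}
print(list_schedule(list(dur), deps, dur, n_procs=2))   # -> 7
```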
Bompard, Manuel. "Modèles de substitution pour l'optimisation globale de forme en aérodynamique et méthode locale sans paramétrisation." Phd thesis, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00771799.
Jung, Matthieu. "Evolution du VIH : méthodes, modèles et algorithmes." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20052/document.
Nucleotide sequence data enable the inference of phylogenetic trees, or phylogenies, describing evolutionary relationships. Combining these sequences with their sampling dates or countries of origin allows inferring the temporal or spatial localization of their common ancestors. These data and methods are widely used with viral sequences, and particularly with the human immunodeficiency virus (HIV), to trace the history of a viral epidemic over time and throughout the globe. Using sequences sampled at different points in time (heterochronous sequences) is also a means of estimating their substitution rate, which characterizes the speed of evolution. The most commonly used methods for these tasks are accurate, but they are computationally heavy since they are based on complex models, and can only handle a few hundred sequences. With an increasing number of sequences available in the databases, often several thousand for a given study, the development of fast and accurate methods becomes essential. Here, we present a new distance-based method, named Ultrametric Least Squares, which is based on the principle of least squares (very popular in phylogenetics) to estimate the substitution rate of a set of heterochronous sequences and the dates of their most recent common ancestors. We demonstrate that the criterion to be optimized is piecewise parabolic and provide an efficient algorithm to find its global minimum. Using sequences sampled at different locations also helps to trace the transmission chains of an epidemic. In this respect, we used all available sequences (~3,500) of HIV-1 subtype C, responsible for nearly 50% of global HIV-1 infections, to estimate its major migratory flows on a worldwide scale and its geographic origin. Innovative tools, based on the principle of parsimony and combined with several statistical criteria, were used to synthesize and interpret the information in a large phylogeny representing all the studied sequences. Finally, the temporal and geographical origins of HIV-1 subtype C in Senegal were further explored, more specifically for men who have sex with men.
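As a simpler stand-in for the Ultrametric Least Squares criterion, the classical least-squares root-to-tip regression below estimates a substitution rate and a tMRCA date from heterochronous data; the distances and dates are illustrative, and this is not the thesis's actual method:

```python
import numpy as np

# Sampling dates t_i (years) and root-to-tip distances d_i
# (substitutions/site) for heterochronous sequences; illustrative values.
t = np.array([1985., 1990., 1995., 2000., 2005., 2010.])
d = np.array([0.020, 0.031, 0.044, 0.052, 0.067, 0.078])

# Least squares on d_i ~ r * (t_i - t_mrca): fit d = a*t + b,
# so the rate is r = a and the MRCA date is t_mrca = -b / a.
A = np.vstack([t, np.ones_like(t)]).T
(a, b), *_ = np.linalg.lstsq(A, d, rcond=None)
print(f"substitution rate ~ {a:.4g} subst/site/year")
print(f"estimated tMRCA ~ {-b / a:.1f}")
```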
Glitia, Calin. "Optimisation des applications de traitement systématique intensives sur Systems-on-Chip." Electronic Thesis or Diss., Lille 1, 2009. http://www.theses.fr/2009LIL10070.
Intensive signal processing applications appear in many application domains such as video processing or detection systems. These applications handle multidimensional data structures (mainly arrays) to deal with the various dimensions of the data (space, time, frequency). A specification language allowing the direct manipulation of these different dimensions with a high level of abstraction is key to handling the complexity of these applications and to benefiting from their massive potential parallelism. The Array-OL specification language is designed to do just that. In this thesis, we introduce an extension of Array-OL to express cycle dependences by way of uniform inter-repetition dependences. We show that this specification language is able to express the main patterns of computation of the intensive signal processing domain. We also discuss the repetitive modeling of parallel applications, repetitive architectures and uniform mappings of the former onto the latter, using the Array-OL concepts integrated into the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile. High-level data-parallel transformations are available to adapt the application to the execution platform, allowing the granularity of the flows to be chosen and the mapping to be expressed simply by tagging each repetition with its execution mode: data-parallel or sequential. The whole set of transformations was reviewed, extended and implemented as part of the Gaspard2 co-design environment for embedded systems. With the introduction of uniform dependences into the specification, our interest also turns to the interaction between these dependences and the high-level transformations, which is essential to enable the use of the refactoring tools on models with uniform dependences. Based on the high-level refactoring tools, strategies and heuristics can be designed to help explore the design space. We propose a strategy that finds good trade-offs in the usage of storage and computation resources and in the exploitation of parallelism (both task and data parallelism); this strategy is illustrated on an industrial radar application.
Sbeity, Hoda. "Optimisation sur un modèle de comportement pour la thérapie en oncologie." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0038.
Solid tumors in humans are believed to be caused by a sequence of genetic abnormalities arising in both normal and premalignant cells. Understanding these sequences is important for improving cancer treatments, which range from chemotherapy to radiotherapy and surgery. In the past 15 years, molecular biologists and geneticists have uncovered some of the most basic mechanisms by which normal stem cells in certain tissues develop into cancerous tumors. This biological knowledge serves as a basis for various models of carcinogenesis. These biological theories can then be transformed into mathematical models, supported by relevant methods of statistical data analysis. Mathematical models allow the novel biological findings to be quantitatively tested against human data, helping researchers develop efficient diagnostic, controlling, curative and preventive strategies for cancer. These models belong to different categories, including deterministic, state-space, compartment and stochastic models. In this thesis, for the deterministic case, we summarize the mathematical models used to describe the evolution of cancer cells, as well as those used for drug delivery in chemotherapy. Chemotherapy, however, is a complex treatment mode that requires balancing the benefits of treating tumors against the adverse toxic side effects caused by the anti-cancer drugs. In reality, observations in biological settings are often presented in a fuzzy way. For this reason, we introduce probabilities, which are used in stochastic models. Among the various stochastic models able to describe biological processes such as cancer are the Moran model, the Wright-Fisher (WF) model, the Galton-Watson branching process (GWBP), Markov chain processes, and the Moolgavkar-Venzon-Knudson (MVK) model. With these models in mind, one goal of this thesis is to develop models that follow the evolution of the disease and simulate suitable chemotherapy treatments that cause the death of cancer cells. Some methods of computational optimization, genetic algorithms (GA) in particular, have proven useful in helping to strike the right balance. Another purpose of this thesis is to study how GA optimization can be used to follow the evolution of cancer and to find optimal chemotherapeutic treatments that kill cancer cells with fewer side effects. These ideas are brought together in our own strategy for optimizing chemotherapy treatment using real protocols, defined as follows: i) define the type of cancer treated and its parameters; ii) choose a real treatment protocol defined by the oncologist; iii) choose a deterministic or stochastic model that can describe the trajectory of cancer cells with and without treatment; and iv) apply the optimization method.
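A minimal sketch of step iv might look as follows, assuming a toy exponential-growth/log-kill tumor model and a basic GA with truncation selection, one-point crossover and Gaussian mutation; every parameter is an illustrative assumption, not a clinical protocol:

```python
import numpy as np
rng = np.random.default_rng(0)

def tumor_cost(doses, n0=1e9, growth=0.08, kill=0.12, tox_weight=2e7):
    """Toy deterministic model: exponential growth with a log-kill effect
    per treatment period, penalized by cumulative drug toxicity."""
    n = n0
    for u in doses:
        n *= np.exp(growth - kill * u)      # one period of growth + kill
    return n + tox_weight * doses.sum()     # final tumor burden + toxicity

def ga(n_periods=10, pop=60, gens=200, u_max=3.0):
    P = rng.uniform(0, u_max, (pop, n_periods))
    for _ in range(gens):
        fit = np.array([tumor_cost(x) for x in P])
        parents = P[np.argsort(fit)[: pop // 2]]        # truncation selection
        k = rng.integers(1, n_periods, pop // 2)        # one-point crossover
        children = np.where(np.arange(n_periods) < k[:, None],
                            parents, parents[::-1])
        children = children + rng.normal(0, 0.1, children.shape)  # mutation
        P = np.clip(np.vstack([parents, children]), 0, u_max)
    fit = np.array([tumor_cost(x) for x in P])
    return P[np.argmin(fit)], fit.min()

best, cost = ga()
print("best dose schedule:", np.round(best, 2), "cost:", f"{cost:.3g}")
```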
Glitia, Calin. "Optimisation des applications de traitement systématique intensives sur Systems-on-Chip." Thesis, Lille 1, 2009. http://www.theses.fr/2009LIL10070/document.
Intensive signal processing applications appear in many application domains such as video processing or detection systems. These applications handle multidimensional data structures (mainly arrays) to deal with the various dimensions of the data (space, time, frequency). A specification language allowing the direct manipulation of these different dimensions with a high level of abstraction is a key to handling the complexity of these applications and to benefit from their massive potential parallelism. The Array-OL specification language is designed to do just that. In this thesis, we introduce an extension of Array-OL to express cycle dependences by the way of uniform inter-repetition dependences. We show that this specification language is able to express the main patterns of computation of the intensive signal processing domain. We discuss also the repetitive modeling of parallel applications, repetitive architectures and uniform mappings of the former to the latter, using the Array-OL concepts integrated into the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile. High-level data-parallel transformations are available to adapt the application to the execution, allowing to choose the granularity of the flows and a simple expression of the mapping by tagging each repetition by its execution mode: data-parallel or sequential. The whole set of transformations was reviewed, extended and implemented as a part of the Gaspard2 co-design environment for embedded systems. With the introduction of the uniform dependences into the specification, our interest turns also on the interaction between these dependences and the high-level transformations. This is essential in order to enable the usage of the refactoring tools on the models with uniform dependences. Based on the high-level refactoring tools, strategies and heuristics can be designed to help explore the design space. We propose a strategy that allows to find good trade-offs in the usage of storage and computation resources, and in the parallelism (both task and data parallelism) exploitation, strategy illustrated on an industrial radar application
Capelle, Thomas. "Recherche sur des méthodes d'optimisation pour la mise en place de modèles intégrés de transport et usage des sols." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM008/document.
Land use and transportation integrated (LUTI) models aim at representing the complex interactions between land use and transportation offer and demand within a territory. They are principally used to assess alternative planning scenarios via the simulation of their tendential impacts on patterns of land use and travel behaviour. Setting up a LUTI model requires the estimation of several types of parameters to reproduce, as closely as possible, observations gathered on the studied area (socio-economic data, transport surveys, etc.). The vast majority of available calibration approaches are semi-automatic and estimate one subset of parameters at a time, without a global integrated estimation. In this work, we improve the calibration procedure of Tranus, one of the most widely used LUTI models, by developing tools for the automatic and simultaneous estimation of parameters. Among the proposed improvements, we replace the inner-loop estimation of endogenous parameters (known as shadow prices) by a proper optimisation procedure. To do so, we carefully inspect the mathematics and micro-economic theories involved in the computation of the various model equations. To propose an efficient optimisation solution, we decouple the entire optimisation problem into equivalent smaller problems. Our optimisation algorithm is then validated on synthetic models where the optimal set of parameters is known. Second, toward a fully integrated automatic calibration, we developed an integrated estimation scheme for the shadow prices and a subset of hard-to-calibrate parameters. The scheme is shown to outperform the calibration quality achieved by the classical approach, even when carried out by experts. We also propose a sensitivity analysis to identify influential parameters, coupled with an optimisation algorithm to improve the calibration of the selected parameters. Third, we challenge the classical viewpoint adopted by Tranus and various other LUTI models, namely that calibration should lead to model parameters for which the model output perfectly fits observed data. This carries the risk of overfitting (in Tranus, by using too many shadow price parameters), which in turn undermines the model's predictive capabilities. We thus propose a model selection scheme that aims at a good compromise between the complexity of the model (in our case, the number of shadow prices) and the goodness of fit of model outputs to observations. Our experiments show that at least two thirds of the shadow prices may be dropped from the model while still giving a near-perfect fit to observations. The contributions outlined above are demonstrated on Tranus models and data from two metropolitan areas, in the USA and in Europe.
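The complexity-versus-fit compromise can be illustrated with a generic BIC-style selection on a toy linear calibration problem; this is a hedged sketch under simplified assumptions, not Tranus's equations or the thesis's actual selection criterion:

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy "observations": model outputs driven by only 3 truly useful
# shadow prices out of 10 candidates (illustrative, not Tranus).
n_obs, n_sp = 50, 10
X = rng.normal(size=(n_obs, n_sp))
true = np.zeros(n_sp); true[:3] = [1.5, -2.0, 0.8]
y = X @ true + rng.normal(0, 0.3, n_obs)

def bic(k):
    """Fit keeping only the first k shadow prices, then trade goodness
    of fit (residual sum of squares) against complexity (k parameters)."""
    beta, *_ = np.linalg.lstsq(X[:, :k], y, rcond=None)
    rss = np.sum((y - X[:, :k] @ beta) ** 2)
    return n_obs * np.log(rss / n_obs) + k * np.log(n_obs)

scores = {k: bic(k) for k in range(1, n_sp + 1)}
print("selected number of shadow prices:", min(scores, key=scores.get))
```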
Loger, Benoit. "Modèles d’optimisation basés sur les données pour la planification des opérations dans les Supply Chain industrielles." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0389.
With the increasing complexity of supply chains, automated decision-support tools become necessary in order to handle the multiple sources of uncertainty that may impact them while maintaining a high level of performance. To meet these objectives, managers rely more and more on approaches capable of improving the resilience of supply chains by proposing robust solutions that remain valid despite uncertainty, guaranteeing both quality of service and control of the costs induced by the production, storage and transportation of goods. As data collection and analysis become central to defining company strategy, the proper use of this information to characterize these uncertainties and their impact on operations more precisely is becoming a major challenge for optimizing modern production and distribution systems. This thesis addresses these challenges by developing several mathematical optimization methods based on historical data, with the aim of proposing robust solutions to supply and production planning problems. To validate the practical relevance of these new techniques, numerical experiments on various applications compare them with several classical approaches from the literature. The results demonstrate the value of these contributions, which offer comparable average performance while reducing its variability in an uncertain context. In particular, the solutions remain satisfactory when confronted with extreme scenarios whose probability of occurrence is low. Finally, the computational times of the developed procedures remain competitive, making them suitable for industrial-scale applications.
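One simple data-driven flavor of robustness is to optimize the worst case over historical scenarios rather than the average; the toy single-item sketch below (illustrative costs and demand distribution, not the thesis's models) contrasts the two plans:

```python
import numpy as np
rng = np.random.default_rng(2)

demand_hist = rng.gamma(shape=9, scale=10, size=200)  # historical demand data
h, b = 1.0, 4.0                                       # holding / backlog unit costs

def cost(q, d):
    # Mismatch cost of planning quantity q against realized demand d.
    return h * np.maximum(q - d, 0) + b * np.maximum(d - q, 0)

qs = np.linspace(0, 250, 2501)
avg = np.array([cost(q, demand_hist).mean() for q in qs])
worst = np.array([cost(q, demand_hist).max() for q in qs])

q_avg = qs[np.argmin(avg)]     # classical plan: best average cost
q_rob = qs[np.argmin(worst)]   # robust plan: best worst case over the data
print(f"average-cost plan q={q_avg:.1f}, robust plan q={q_rob:.1f}")
```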
Aupetit, Sébastien. "Contributions aux Modèles de Markov Cachés : métaheuristiques d'apprentissage, nouveaux modèles et visualisation de dissimilarité." Phd thesis, Université François Rabelais - Tours, 2005. http://tel.archives-ouvertes.fr/tel-00168392.
The first part applies classical biomimetic metaheuristics (genetic algorithms, the API artificial-ant algorithm and particle swarm optimisation) to the problem of HMM learning. In the second part, we propose a new type of hidden Markov model, called the hidden Markov model with symbol substitutions (MMCSS). An MMCSS makes it possible to incorporate a priori knowledge into the learning and recognition process, and the first experiments with these models on images demonstrate their interest. In the third part, we propose a new dissimilarity representation method called the pseudo-Euclidean scatterplot matrix (MSPE), which gives a better understanding of the interactions between HMMs. This MSPE is built from a technique that we call indefinite-kernel principal component analysis (ACPNI). We conclude with a presentation of the HMMTK library developed during this work, which integrates parallelization mechanisms and the algorithms developed in the thesis.
Tinard, Violaine. "Modélisation et optimisation du casque de motocycliste sur critères biomécaniques : application au casque composite." Strasbourg, 2009. http://www.theses.fr/2009STRA6266.
Albarello, Nicolas. "Etudes comparatives basées sur les modèles en phase de conception d'architectures de systèmes." Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00879858.
Papastratos, Stylianos. "Modélisation, simulation dynamique et optimisation d'un procédé de fermentation éthanolique basé sur un bioréacteur à membrane : Saccharomyces cerevisiae." Châtenay-Malabry, Ecole centrale de Paris, 1996. http://www.theses.fr/1996ECAP0539.
Nguyen, Phi-Hung. "Impacts des modèles de pertes sur l’optimisation sur cycle d’un ensemble convertisseur – machine synchrone : applications aux véhicules hybrides." Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0049/document.
Almost all studies of permanent magnet synchronous machines (PMSMs) for hybrid vehicle applications consider their performance at a specific point of the vehicle's driving cycle (the base point, the high-speed point or the most frequently used point). However, these machines often operate at different torques and different speeds. This thesis therefore studies PMSM performance with a view to optimization over an entire driving cycle. The author contributed models of torque, field weakening, copper losses and iron losses, and methods for calculating these losses at no load and under load, for four PMSMs (three concentrated-flux machines and one surface-mounted PMSM) and for three driving cycles (New European Driving Cycle, Artemis-Urban and Artemis-Road). An experimental validation of these models was carried out on a test bench with two PMSM prototypes. The PMSMs were then sized to minimize the average power losses over the cycle and the RMS current at the base point; this combination is designed to increase the efficiency of the electrical machine and minimize the size of the associated voltage inverter. This multi-objective optimization problem was solved using the Non-dominated Sorting Genetic Algorithm (NSGA-II), from which a Pareto front of optimal solutions can be derived. The impacts of the loss models (at no load and under load) on the PMSM optimization over the cycle are studied and the interest of each model is presented. The models and calculation methods proposed in this thesis can be applied to other cycles, to different PMSMs and to other applications.
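The core building block of NSGA-II is fast non-dominated sorting of the candidate designs; here is a compact sketch on toy two-objective data (cycle losses versus RMS current as the two axes, with random illustrative values):

```python
import numpy as np
rng = np.random.default_rng(3)

def non_dominated_sort(F):
    """Rank objective vectors (rows of F, all to be minimized) into
    successive Pareto fronts, as in NSGA-II. Returns lists of indices."""
    n = len(F)
    dominates = lambda i, j: bool(np.all(F[i] <= F[j]) and np.any(F[i] < F[j]))
    S = [[j for j in range(n) if dominates(i, j)] for i in range(n)]
    counts = np.array([sum(dominates(j, i) for j in range(n)) for i in range(n)])
    fronts, current = [], list(np.flatnonzero(counts == 0))
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Toy designs: objective 1 = average losses over the cycle, 2 = RMS current.
F = rng.uniform(size=(40, 2))
pareto = non_dominated_sort(F)[0]
print("Pareto-optimal designs:", sorted(int(i) for i in pareto))
```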
Duchaine, Florent. "Optimisation de Forme Multi-Objectif sur Machines Parallèles avecMéta-Modèles et Coupleurs. Application aux Chambres de Combustion Aéronautiques." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2007. http://tel.archives-ouvertes.fr/tel-00362811.
The aim of this thesis work is to provide a methodology, based on considerations from multi-objective optimization, for developing an automated design tool that integrates numerical simulation codes to evaluate configurations. The studies first concern the automation of the simulation procedures, with emphasis on automatic mesh generation. Then, the problem of the turnaround times induced by the joint use of optimization techniques and computationally expensive simulation codes is addressed by proposing an algorithm based on meta-models. The final tool is built on a parallel code coupler, giving it attractive performance and flexibility characteristics. Finally, after various validation and evaluation tests, an application to an industrial combustion chamber demonstrates the method's ability to identify promising configurations.
Bouhaddou, Imane. "Vers une optimisation de la chaine logistique : proposition de modèles conceptuels basés sur le PLM (Product Lifecycle Management)." Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0026/document.
It is recognized that competition is shifting from a "firm versus firm" perspective to a "supply chain versus supply chain" perspective. The ability to optimize the supply chain is therefore becoming the critical issue for companies to win a competitive advantage. Furthermore, all members of a given supply chain must work together to respond rapidly to changes in market demand. In this context, enterprises must not only enhance their relationships with each other, but also integrate their business processes through product life cycle activities. This has led to the emergence of collaborative product lifecycle management, commonly known as PLM. The objective of this thesis is to define a methodological approach that answers the following question: how can PLM contribute to supply chain optimization? We adopt a hybrid approach combining PLM and mathematical models to optimize decisions for the simultaneous design of a product and its supply chain, and we propose conceptual models to formally resolve the compromise between PLM and mathematical models for supply chain optimization. Unlike the traditional centralized approaches used to treat the problem of integrated design of a product and its supply chain, which generate complex mathematical models, we combine centralized decisions, integrating the constraints of the different supply chain partners during product design, with decentralized decisions when it comes to locally optimizing each supply chain partner. The decentralized approach reduces the complexity of solving the mathematical models and allows the supply chain to respond quickly to the evolution of each partner's local conditions. PLM ensures the integration of the different supply chain partners: the centralization of information by the PLM makes it possible to take into account the dependences between these partners, thereby improving local optimization results.
Pétrault, Christine. "Optimisation du fonctionnement d'une régulation de climatisation sur véhicule ferroviaire - application aux remorques de T. G. V." Poitiers, 1998. http://www.theses.fr/1998POIT2253.
Canon, Louis-claude. "Outils et algorithmes pour gérer l'incertitude lors de l'ordonnancement d'application sur plateformes distribuées." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10097/document.
This thesis revisits traditional scheduling problems in computational environments by introducing uncertainty into the models. We adopt a wide definition of uncertainty that encompasses the intrinsic stochastic nature of some phenomena (e.g., processor failures that follow a Poisson distribution) and the imperfection of model characteristics (e.g., inaccuracy of the costs in a model due to a bias in measurements). We also consider uncertainties that stem from indeterminations such as user behaviors, which are uncontrolled although deterministic. Scheduling, in its general form, is the operation that assigns requests to resources in some specific way. In distributed environments, we are concerned with a workload (i.e., a set of tasks) that needs to be executed on a computational platform (i.e., a set of processors). Our objective is therefore to specify how tasks are mapped onto processors. The schedules produced can be evaluated through many different metrics (e.g., processing time of the workload, resource usage, etc.), and finding an optimal schedule relative to some metric constitutes a challenging issue. Probabilistic tools and multi-objective optimization techniques are first proposed for tackling the new metrics that arise from the uncertainty. In a second part, we study several uncertainty-related criteria such as the robustness (stability in the presence of input variations) and the reliability (probability of success) of a schedule.
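Robustness and reliability of a fixed schedule can be estimated by Monte Carlo sampling of the task durations; the sketch below assumes a toy DAG, gamma-distributed costs and unbounded processors (critical-path makespan), all of which are illustrative simplifications:

```python
import numpy as np
rng = np.random.default_rng(4)

def makespan(durations, deps):
    """Earliest-finish (critical-path) makespan of a DAG, ignoring
    resource limits for brevity; keys are assumed topologically ordered."""
    finish = {}
    for t, d in durations.items():
        start = max([finish[p] for p in deps.get(t, ())], default=0.0)
        finish[t] = start + d
    return max(finish.values())

deps = {"b": ("a",), "c": ("a",), "d": ("b", "c")}
mean = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
deadline, n_runs = 9.0, 10000
samples = np.empty(n_runs)
for k in range(n_runs):
    dur = {t: rng.gamma(4.0, m / 4.0) for t, m in mean.items()}  # noisy costs
    samples[k] = makespan(dur, deps)

print(f"robustness (std of makespan): {samples.std():.2f}")
print(f"reliability P(makespan <= {deadline}): {(samples <= deadline).mean():.3f}")
```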
Langle, Frédéric. "Contribution à l'élaboration de design de substitution en similitude indirecte sur modèles réduits : application à l'étude du comportement en collision des absorbeurs axiaux." Valenciennes, 1997. https://ged.uphf.fr/nuxeo/site/esupversions/ada00313-2944-431c-9aa0-8bbd91c168be.
Loux, Cyril. "Modélisation du fonctionnement d’un nouveau type de mélangeur : simulation des écoulements, validation sur des systèmes modèles et optimisation du procédé." Strasbourg, 2011. http://www.theses.fr/2011STRA6039.
The blending of two polymers has been the subject of fairly extensive study. This topic directly concerns the polymer processing industry, which is constantly looking for new methods to obtain materials with improved properties. These materials are generally obtained by liquid-liquid mixing, and the induced properties depend on a microstructure whose length scale is much smaller than that of the macroscopic flow. The usual mixing equipment, such as internal mixers or extruders, mainly generates shear flows and has limited efficiency, whereas tools based on elongational flows are expected to be more efficient for dispersive mixing. In this context, we developed a new type of mixer (called the RMX) based on a convergent/divergent flow unit that favours the elongational component of the flow. In this study, the physical effects created by this new mixing device are characterized through experiments and numerical simulations of creeping flows of Newtonian (of low and high viscosity), shear-thinning and viscoelastic fluids. To characterize the mix, we used a method that determines the characteristic size, shape and orientation of the microstructure by predicting the change in a local morphological measure due to the velocity or deformation gradient. This approach, called micromixing analysis, uses the area tensor to treat a local characteristic morphological measure as a field variable and can incorporate additional physics such as the kinetics of droplet break-up and coalescence. These methods have highlighted fundamental mixing mechanisms such as fluid striation in the viscoelastic cases.
Martin, Hugo. "Optimisation multi-objectifs et élicitation de préférences fondées sur des modèles décisionnels dépendants du rang et des points de référence." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS101.
This thesis falls within the research field of algorithmic decision theory, defined at the junction of decision theory, artificial intelligence and operations research. It focuses on the consideration of sophisticated behaviors in complex decision environments (multicriteria decision making, collective decision making, and decision under risk and uncertainty). We first propose methods for multi-objective optimization on implicit sets when preferences are represented by rank-dependent models (the Choquet integral, bipolar OWA, Cumulative Prospect Theory and the bipolar Choquet integral). These methods are based on mathematical programming and discrete algorithmics. Then, we present methods for the incremental parameter elicitation of rank-dependent models that take into account the presence of a reference point in the decision maker's preferences (bipolar OWA, Cumulative Prospect Theory, and the Choquet integral with capacities and bicapacities). Finally, we address the structural modification of solutions under constraints (cost, quality) in multiple-reference-point sorting methods. The different approaches proposed in this thesis have been tested, and we present the numerical results obtained to illustrate their practical efficiency.
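For reference, the Choquet integral of a profile with respect to a capacity can be computed directly from its telescoping definition; the two-criteria capacity below is a toy example chosen to exhibit complementarity:

```python
def choquet(x, mu):
    """Choquet integral of a profile x (criterion -> value) w.r.t. a
    capacity mu (frozenset of criteria -> weight), using
    C(x) = sum_k (x_(k) - x_(k-1)) * mu(A_(k)) with x_(0) = 0,
    where A_(k) is the set of criteria with value >= x_(k)."""
    order = sorted(x, key=x.get)          # criteria by increasing value
    total, prev = 0.0, 0.0
    for k, _ in enumerate(order):
        level = frozenset(order[k:])      # upper-level set A_(k)
        total += (x[order[k]] - prev) * mu[level]
        prev = x[order[k]]
    return total

# Toy capacity with complementarity: mu({1}) = mu({2}) = 0.3, mu({1,2}) = 1.
mu = {frozenset(): 0.0, frozenset({1}): 0.3, frozenset({2}): 0.3,
      frozenset({1, 2}): 1.0}
print(choquet({1: 0.4, 2: 0.9}, mu))   # 0.4*1 + (0.9-0.4)*0.3 = 0.55
```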
Palyart-Lamarche, Marc. "Une approche basée sur les modèles pour le développement d'applications de simulation numérique haute-performance." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1990/.
The development and maintenance of high-performance scientific computing software is a complex task. This complexity results from the fact that software and hardware are tightly coupled. Furthermore, current parallel programming approaches lack accessibility and lead to a mixing of concerns within the source code. In this thesis we define an approach for the development of high-performance scientific computing software that relies on model-driven engineering. In order to reduce both the duration and the cost of migration phases toward new hardware architectures, and to refocus effort on tasks with higher added value, this approach, called MDE4HPC, defines a domain-specific modeling language. This language enables applied mathematicians to describe their numerical models in a way that is both user-friendly and hardware-independent. The different concerns are separated through the use of several models and several modeling viewpoints on these models. Depending on the targeted execution platforms, these abstract models are translated into executable implementations with model transformations that can be shared among several software developments. To evaluate the effectiveness of this approach we developed a tool called ArchiMDE, and used it to develop different numerical simulation software packages to validate the design choices made for the modeling language.
Quirion, Sébastien. "Animation basée sur la physique : extrapolation de mouvements humains plausibles et réalistes par optimisation incrémentale." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27675/27675.pdf.
Turki, Sadok. "Impact du délai de livraison sur le niveau de stock : une approche basée sur la méthode IPA." Electronic Thesis or Diss., Metz, 2010. http://www.theses.fr/2010METZ029S.
In the first part of our work, we study a manufacturing system composed of a machine, a buffer with infinite capacity, and a customer. We apply to this system a continuous-flow model taking into account a constant delivery time. To evaluate the performance of the system, we rely on the method of infinitesimal perturbation analysis (IPA): simulations using an algorithm based on this method determine the optimal buffer level that minimizes the total cost, defined as the sum of the inventory cost, the backlog cost and the transportation cost. In the second part, we apply a discrete-flow model to the same system, and the IPA method is again used to determine the optimal inventory level; to the best of our knowledge, no previous work applies IPA to discrete-flow models. In the last part, we consider a manufacturing system with a random delivery time between the customer and the producer, and we study the impact of transportation costs on the optimal buffer level. The infinitesimal perturbation analysis method is applied to both types of models (discrete-flow and continuous-flow).
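A single-period simplification conveys the IPA idea: the pathwise derivative of the holding/backlog cost with respect to the stock level drives a stochastic gradient descent. The costs and demand law below are illustrative; the thesis's continuous- and discrete-flow models are richer:

```python
import numpy as np
rng = np.random.default_rng(5)

h, b = 1.0, 5.0           # holding and backlog unit costs

def ipa_gradient(s, demand):
    """Pathwise (IPA) derivative of h*(s-D)^+ + b*(D-s)^+ w.r.t. s:
    +h on sample paths with leftover stock, -b on paths with backlog."""
    return np.where(demand < s, h, -b).mean()

s = 50.0                   # initial buffer level
for _ in range(2000):      # stochastic gradient descent on the level
    batch = rng.gamma(9, 10, size=64)    # simulated demands
    s -= 0.5 * ipa_gradient(s, batch)
print(f"optimal buffer level ~ {s:.1f}")  # ~ the b/(h+b) quantile of demand
```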
Kucerova, Anna. "Identification des paramètres des modèles mécaniques non-linéaires en utilisant des méthodes basées sur intelligence artificielle." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2007. http://tel.archives-ouvertes.fr/tel-00256025.
Moréac, Erwan. "Optimisation de la consommation d’énergie et de la latence dans les réseaux sur puces." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS467/document.
Thanks to the technology’s shrinking, a considerable amount of memory and computing capacity can be embedded into a single chip. This improvement leads to an important increase of the bandwidth requirements, that becomes the bottleneck of chip performances in terms of computational power. Thus, designers proposed the Network-on-Chip (NoC) as an answer to this bandwidth challenge. However, the on-chip traffic growth allowed by the NoC causes a significant rise of the chip energy consumption, which leads to a temperature increase and a reliability reduction of the chip. The development of energy optimization techniques for NoC becomes necessary.The first part of this thesis is devoted to the study of NoCs power models in order to estimate accurately the consumption of each component. Then, we can identify which ones are the most power consuming. Hence, the first contribution of this thesis has been to improve the NoC power model by replacing the lilnk power model in a NoC simulator (Noxim) by a bit-accurate one (Noxim-XT). In this way, the simulator is able to consider Crosstalk effects, a physical phenomenon that increases links energy consumption. The second part of the thesis deals with NoC energy optimization techniques. Thus, our research of optimization techniques is focused on inter-router links since their energy contribution regarding the NoC dynamic energy is significant and the dynamic energy tends to stay prominent with the shrinking technology. We proposed two optimization techniques from the study of NoC links optimizations. These two techniques present different energy / latency compromises and a possible extension of this work could be the development of a transmission strategy in order to select the right technique according to the application requirements
Benyoucef, Abderrezak. "Optimisation du raffinage algérien en présence d'incertitudes sur les exportations à l'horizon 2030." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOE016.
The refining industry transforms crude oil into various finished products (for energy purposes or not). In Algeria, the refining industry has to adapt to the evolution of demand, in a context characterized by a high volatility of the oil markets. Linear programming methods are frequently used in this industry to optimize its production and investment patterns. In this research, we take into account the random character of the domestic demand for petroleum products as well as the volatility of export prices on the international markets. This leads us to elaborate a stochastic linear programming model of the refining industry over the horizon 2030, introducing changes in the demand and in the prices of the exported products. Future consumption in Algeria is forecast with an econometric model, and the long-term equilibrium between crude oil and petroleum product prices on the international markets is estimated through a cointegration approach. The comparison between the results of the deterministic model and those of the stochastic model points out the significant impact of both consumption variability and price volatility on the refining industry.
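In the same spirit, a tiny two-stage stochastic LP can be written with scipy: the crude run is chosen before prices are known, and exports are decided per scenario. All yields, costs, demands and prices below are illustrative, and domestic deliveries earn no revenue in this toy:

```python
import numpy as np
from scipy.optimize import linprog

# Two equiprobable demand/price scenarios, one crude input, two products
# (gasoline, diesel) with fixed yields; every figure is illustrative.
yields = np.array([0.40, 0.35])
crude_cost, capacity = 10.0, 100.0
demand = np.array([[20.0, 15.0], [25.0, 18.0]])   # domestic, per scenario
price = np.array([[60.0, 55.0], [40.0, 45.0]])    # export price, per scenario
prob = np.array([0.5, 0.5])

# Variables: z = [crude run x, exports e_sp for scenario s, product p].
c = np.concatenate([[crude_cost], -(prob[:, None] * price).ravel()])
A, b = [], []
for s in range(2):
    for p in range(2):
        row = np.zeros(5)
        row[0], row[1 + 2 * s + p] = -yields[p], 1.0  # e_sp <= yield*x - d_sp
        A.append(row); b.append(-demand[s, p])
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(0, capacity)] + [(0, None)] * 4, method="highs")
print(f"crude run: {res.x[0]:.1f}, expected margin: {-res.fun:.1f}")
```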
Boutheiller, Nicolas. "Analyse et synthèse par optimisation basée sur l'algorithme génétique de filtres en guide d'ondes rectangulaire : Application à la conception de filtres multi-modes utilisant le résonance des modes à leur fréquence de coupure." Bordeaux 1, 2002. http://www.theses.fr/2002BOR12542.
Dragoni, Laurent. "Tri de potentiels d'action sur des données neurophysiologiques massives : stratégie d’ensemble actif par fenêtre glissante pour l’estimation de modèles convolutionnels en grande dimension." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4016.
In the nervous system, cells called neurons are specialized in the communication of information. Through the generation and propagation of electrical currents named action potentials, neurons are able to transmit information in the body. Given the importance of neurons, a wide range of methods have been proposed for studying these cells in order to better understand the functioning of the nervous system. In this thesis, we focus on the analysis of signals recorded by electrodes, more specifically tetrodes and multi-electrode arrays (MEAs). Since these devices usually record the activity of a set of neurons, the recorded signals are often a mixture of the activity of several neurons. In order to gain more knowledge from this type of data, a crucial pre-processing step called spike sorting is required to separate the activity of each neuron. Nowadays, the general procedure for spike sorting consists of three steps: thresholding, feature extraction and clustering. Unfortunately, this methodology requires a large number of manual operations, and it becomes even more difficult when treating massive volumes of data, especially MEA recordings, which also tend to feature more neuronal synchronizations. In this thesis, we present a spike sorting strategy that allows the analysis of large volumes of data and requires few manual operations. This strategy makes use of a convolutional model that breaks down the recorded signals into temporal convolutions between two factors: neuron activations and action potential shapes. The estimation of these two factors is usually treated through alternating optimization. We focus here on the estimation of the activations, the most difficult task, assuming that the action potential shapes are known. Estimating the activations is traditionally referred to as convolutional sparse coding. The well-known Lasso estimator has attractive mathematical properties for the resolution of such problems, but its computation remains challenging in high dimension. We propose an algorithm based on the working set strategy to compute the Lasso efficiently. This algorithm takes advantage of the particular structure of the problem, derived from biological properties, by using temporal sliding windows, allowing it to scale in high dimension. Furthermore, we adapt theoretical results about the Lasso to show that, under reasonable assumptions, our estimator recovers the support of the true activation vector with high probability. We also propose models of both the spatial distribution and the activation times of the neurons, which allow us to quantify the size of our problem and deduce the theoretical complexity of our algorithm; in particular, we obtain a quasi-linear complexity with respect to the size of the recorded signal. Finally, we present numerical results illustrating both the theoretical results and the performance of our approach.
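As a baseline for the convolutional Lasso objective (the thesis's working-set/sliding-window algorithm is a faster solver for the same kind of problem), a plain ISTA iteration with a known action-potential shape looks like this; the signal and atom are synthetic:

```python
import numpy as np
rng = np.random.default_rng(7)

# Known action-potential shape (atom) and a sparse activation vector.
idx = np.arange(21)
atom = np.exp(-0.5 * ((idx - 10) / 2.0) ** 2) * np.sin(idx / 2.0)
n = 500
x_true = np.zeros(n); x_true[[60, 200, 201, 420]] = [2.0, 1.5, -1.0, 2.5]
y = np.convolve(x_true, atom, mode="same") + 0.05 * rng.normal(size=n)

def ista(y, atom, lam=0.5, n_iter=1000):
    """ISTA for min_x 0.5*||y - atom*x||^2 + lam*||x||_1 (convolutional
    Lasso / sparse coding). The gradient is a correlation with the atom."""
    atom_r = atom[::-1]
    L = np.sum(atom ** 2) * len(atom)     # crude Lipschitz upper bound
    x = np.zeros_like(y)
    for _ in range(n_iter):
        resid = np.convolve(x, atom, mode="same") - y
        grad = np.convolve(resid, atom_r, mode="same")
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold
    return x

x_hat = ista(y, atom)
print("detected activations:", np.flatnonzero(np.abs(x_hat) > 0.2))
```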
Kaloun, Adham. "Conception de chaînes de traction hybrides et électriques par optimisation sur cycles routiers." Thesis, Ecole centrale de Lille, 2020. http://www.theses.fr/2020ECLI0019.
Designing hybrid powertrains is a complex task that calls for experts from various fields. In addition, finding the optimal solution requires a system-level view, which can be highly time-consuming depending on the granularity of the models at the component level. This is even more true when the system's performance is determined by its control, as is the case for the hybrid powertrain: various possibilities can be selected to deliver the required torque to the wheels during the driving cycle. The main obstacle is thus to achieve optimality while keeping the methodology fast and robust. In this work, novel approaches to exploit the full potential of hybridization are proposed and compared. The first strategy is a bi-level approach consisting of two nested optimization blocks: an external design optimization process that, at each iteration, evaluates the best fuel consumption value found through control optimization with an improved version of dynamic programming. Two systemic design strategies based on this iterative scheme are proposed as well. The first is based on model reduction, while the second relies on precise cycle reduction techniques; the latter enables the use of high-precision models without penalizing the computation time. A co-optimization approach is then implemented, which adjusts both the design variables and the parameters of a new, efficient rule-based strategy; this allows faster optimization than an all-at-once approach. Finally, a meta-model-based technique is explored.
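The control layer can be sketched as a backward dynamic program over a discretized battery state of charge and power split; the cycle, engine map and battery figures below are illustrative toys, not the thesis's improved DP:

```python
import numpy as np

P_dem = np.array([10, 25, 40, 30, 15, 5, 20, 35, 25, 10], dtype=float)  # kW
step = 0.005
soc_grid = np.arange(0.4, 0.8 + step / 2, step)   # battery SOC grid
u_grid = np.arange(-20.0, 31.0)   # battery power (kW), positive = discharge
e_bat = 200.0                     # battery capacity in kW*step (toy units)

def fuel(p_eng):
    """Toy engine map (g per step): affine above zero, engine off at zero."""
    return np.where(p_eng > 1e-9, 5.0 + 2.0 * p_eng, 0.0)

T, n = len(P_dem), len(soc_grid)
J = np.full((T + 1, n), np.inf)
J[T, soc_grid >= 0.6 - 1e-9] = 0.0                # final SOC constraint
for t in range(T - 1, -1, -1):                    # backward recursion
    for i, soc in enumerate(soc_grid):
        p_eng = P_dem[t] - u_grid                 # engine covers the rest
        soc_next = soc - u_grid / e_bat
        j = np.rint((soc_next - soc_grid[0]) / step).astype(int)
        ok = (p_eng >= 0) & (j >= 0) & (j < n)
        tot = np.where(ok, fuel(p_eng) + J[t + 1, np.clip(j, 0, n - 1)], np.inf)
        J[t, i] = tot.min()

i0 = int(np.rint((0.7 - soc_grid[0]) / step))
print(f"minimal fuel over the cycle from SOC=0.7: {J[0, i0]:.1f} g")
```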
Belkaï, Emilie. "Etude comparative des effets des traitements de substitution à l'héroïne : méthadone et buprénorphine haut dosage sur les régulations transcriptomiques induites par la morphine chez le rat." Paris 5, 2010. http://www.theses.fr/2010PA05P622.
Two heroin maintenance treatments are currently used in France: methadone and buprenorphine. However, the understanding of the biological mechanisms induced by these treatments is still limited, which restricts the available therapeutic strategies. Using real-time quantitative PCR, we studied in rats the genomic impact of these maintenance treatments in the brain, as compared to the reference opioid, morphine. Analysis of the genomic and behavioral responses to acute injection of equipotent doses showed that buprenorphine induces a distinctive pharmacological and genomic profile. A second study using TaqMan Array technology gave us insights into the molecular impact of buprenorphine in the brain. Furthermore, a comparative study of buprenorphine and morphine in blood mononuclear cells has opened pathways to the identification of peripheral biomarkers. These studies open the way to understanding the molecular impact of buprenorphine in the brain, but also in peripheral tissue samples suitable for non-invasive analyses, thus facilitating transfer to clinical research to better understand the long-term molecular effects of buprenorphine administered to patients.
Fortin, Jean-Sébastien. "Optimisation des investissements sur les ponts par la méthode coûts-avantages : valorisation des revenus et du modèle de détérioration." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28153.
This study extends the existing literature on Bridge Management Systems (BMS) by developing a decision-making program to optimize bridge rehabilitations. This decision-making tool analyses the net present value to consider the optimal moment to repair a bridge. It highlights wealth creation by the maintenance of an efficient road network. Moreover, it allows the study of uncertainty on several parameters, such as financial values of inflation and interest rates as well as the evolution of traffic flow. The ability of the decision-making tool to verify the impact of several variables and the deterioration model currently used by the ministère des Transports, de la Mobilité durable et de l'Électrification des transports is compared to two other models; a Markovian model and a stochastic model developed under this study. This project breaks new ground by considering the revenue generated by the bridge’s efficiency. It also considers uncertainty on several parameters, such as financial values of inflation and interest rate, and the evolution of traffic flow. Considering the recent establishment of the management system used by the ministère des Transports, de la Mobilité durable et de l'Électrification des transports, this study is based on several assumptions. The life span of the bridge is limited to 100 years, degradation and repairs can only be done every 5 years, a single repair can be made over the bridge lifespan and the bridge condition is represented by only a few bridge components (elements). The study highlights the importance of considering variability on the deterioration of an element/bridge, interest rates and, to a lesser extent, inflation based on the ministère des Transports, de la Mobilité durable et de l'Électrification des transports data and using a probabilistic analysis of 20,000 simulations. Thus, when the bridge is only represented by its reinforced concrete deck and using the deterministic deterioration approach, a repair between 25 and 30 years is appropriate. A rather low interest rate can even push this choice to 35 years. This choice is very broad with the Markovian approach considering the high probabilities of keeping the bridge in good condition. Finally, the stochastic approach favors repair between 20 and 35 years depending on the speed of deterioration. This choice may again change slightly with the addition of both a variable interest rate and a variable inflation rate. When a reinforced concrete deck and steel beams are considered to represent the entire bridge, the deterministic approach suggests a 25-year repair for the reinforced concrete deck and a 30-year repair for the steel beams. Stochastic financial parameters can affect this choice, making an optimal repair of 25 to 35 years possible for both elements. The optimal moments of repair are very spread out for the Markovian approach considering the high probabilities of maintaining the elements in good condition. Finally, the stochastic approach proposes a repair between 20 and 35 years for the reinforced concrete deck and between 15 and 40 years for the steel beams. These repairs are slightly affected by the addition of a variable interest rate and inflation rate as well. An in-depth analysis shows the impact that several parameters have on the model considered. These parameters include: the transition matrix, the state penalty, the variability of the matrix for stochastic deterioration, and the addition of a simultaneous repair advantage. 
A change in the transition matrix mainly affects the volatility of the results, whereas a modification of the state penalty shifts the distribution of the optimal repair time for the Markovian and stochastic deterioration models. The variability of the matrix for stochastic deterioration directly affects the volatility of the optimal repair time: the lower the percentage of variation of the matrix, the more concentrated the optimal repair times become. Finally, introducing a simultaneous-repair benefit mainly matters when the optimal repair dates of the two elements are less than 10 years apart. For a deterministic deterioration, a cost reduction of 3.72% is sufficient to align both repairs at 30 years, the deck otherwise being repaired at 25 years without this benefit. This advantage has little impact on Markovian deterioration, owing to the wide distribution of optimal repair times, but a considerable impact on stochastic deterioration, with the majority of repairs then occurring within a range of 15 to 40 years.
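To make the mechanics of such a tool concrete, the following minimal sketch illustrates the kind of computation the abstract describes: choosing a single repair date by maximizing the expected net present value under a Markov-chain deterioration model, evaluated in 5-year steps over a 100-year horizon. The transition matrix, state penalties, repair cost and discount rate below are invented placeholders, not the ministry's data; the state penalties stand in for the revenue lost to a degraded bridge.

```python
import numpy as np

# Hypothetical 4-state condition scale (0 = new ... 3 = poor); all values
# below are illustrative assumptions, not ministry data.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])          # 5-year transition probabilities
state_penalty = np.array([0.0, 1.0, 3.0, 8.0])    # yearly cost of a degraded state
repair_cost, discount = 50.0, 0.03
horizon, step = 100, 5                            # 100-year life, 5-year increments

def expected_npv(repair_year):
    """Expected NPV when the single repair (restoring state 0) occurs at repair_year."""
    dist, npv = np.eye(4)[0], 0.0                 # bridge starts in new condition
    for year in range(0, horizon, step):
        if year == repair_year:
            dist = np.eye(4)[0]                   # repair restores the initial state
            npv -= repair_cost / (1 + discount) ** year
        npv -= step * (dist @ state_penalty) / (1 + discount) ** year
        dist = dist @ P                           # deterioration over the next 5 years
    return npv

best = max(range(step, horizon, step), key=expected_npv)
print(f"optimal single repair at year {best}")
```

A Monte Carlo version of this sketch would redraw the transition matrix, interest rate and inflation in each of the 20,000 simulations and report the distribution of optimal repair dates rather than a single value.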
Sadet, Jérémy. "Surrogate models for the analysis of friction induced vibrations under uncertainty." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2022. http://www.theses.fr/2022UPHF0014.
Automotive squeal is a noise disturbance that has attracted growing interest from researchers and industry over the years. This elusive phenomenon, perceived by vehicle purchasers as an indicator of poor quality, generates increasingly significant costs for car manufacturers through customer claims. It is therefore all the more important to propose and develop methods that can efficiently predict the occurrence of this noise through numerical simulation. This thesis pursues recent work that demonstrated the clear benefits of integrating uncertainties into squeal simulations. The objective is to propose an uncertainty propagation strategy for squeal simulations that keeps the numerical cost acceptable (especially for pre-design phases). Several numerical methods are evaluated and improved to allow precise computations within computation times compatible with industrial constraints. After positioning this work with respect to current research on squeal, a new numerical method is proposed to improve the computation of the eigensolutions of a large quadratic eigenvalue problem. To reduce the numerical cost of such studies, three surrogate models (Gaussian process, deep Gaussian process and deep neural network) are studied and compared in order to suggest the optimal strategy in terms of methodology and model settings. The construction of the training set is a key aspect in ensuring the quality of the surrogate predictions. A new optimisation strategy, based on Bayesian optimisation, is proposed to efficiently target the training samples, which are numerically expensive to compute. These optimisation methods are then used to build a new uncertainty propagation method relying on fuzzy set modelling.
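As an illustration of the sequential-enrichment idea, the sketch below fits a Gaussian process surrogate to a cheap stand-in function and selects new training samples with an expected-improvement acquisition, one common Bayesian optimisation criterion. The objective, kernel and budget are placeholder assumptions, not the squeal solver or the thesis's exact strategy.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
expensive_model = lambda x: np.sin(3 * x) + 0.5 * x**2   # stand-in for a costly solver

# Small initial design, then sequential enrichment of the training set.
X = rng.uniform(-2, 2, (4, 1))
y = expensive_model(X).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    sd = np.maximum(sd, 1e-12)                            # avoid division by zero
    imp = y.min() - mu                                    # expected improvement (EI)
    ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
    x_new = candidates[[np.argmax(ei)]]                   # most promising new sample
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new).ravel())
```

Each loop iteration spends one expensive evaluation where the surrogate is most likely to improve, which is the point of adaptive training-set construction when every sample is costly.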
Fontchastagner, Julien. "Résolution du problème inverse de conception d'actionneurs électromagnétiques par association de méthodes déterministes d'optimisation globale avec des modèles analytiques et numériques." Phd thesis, Toulouse, INPT, 2007. https://hal.science/tel-02945546v1.
The work presented in this thesis introduces a new methodology to solve the inverse problem of electromagnetic actuator design. After treating the general aspects of the problem, we choose to solve it with deterministic methods of global optimization, which do not require any starting point, handle all kinds of variables, and guarantee that the global optimum is obtained. Previously used only with simple models, these methods are applied here to analytical models based on an analytical resolution of the magnetic field under less restrictive hypotheses. These models are therefore more complex and required extending our optimization algorithm. A complete finite element software package was then developed, equipped with a procedure to evaluate the average torque in the particular case of a permanent-magnet machine. The initial problem was reformulated and solved by integrating the numerical torque constraint, the analytical model serving as a guide for the new algorithm.
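The guarantee mentioned above is characteristic of branch-and-bound methods with interval bounds. The following one-dimensional toy sketch shows how such a method encloses the global minimum without any starting point; the objective, search interval and tolerance are invented for illustration, not the actuator model of the thesis.

```python
# Minimal interval branch-and-bound: boxes whose lower bound cannot beat the
# incumbent are discarded, so the global minimum is provably enclosed.

def f(x):
    return x**2 - 2.0 * x                           # toy objective, minimum at x = 1

def f_bounds(lo, hi):
    """Interval extension of f on [lo, hi] (a valid, if loose, enclosure)."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo

def branch_and_bound(lo, hi, tol=1e-6):
    best_ub = f((lo + hi) / 2)                      # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        lb, _ = f_bounds(a, b)
        if lb > best_ub or b - a < tol:             # prune or stop refining
            continue
        m = (a + b) / 2
        best_ub = min(best_ub, f(m))                # improve the incumbent
        boxes += [(a, m), (m, b)]
    return best_ub

print(branch_and_bound(-4.0, 4.0))                  # close to -1.0, at x = 1
```

Real design problems add mixed variable types and far more sophisticated inclusion functions, but the prune-or-split logic is the same.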
Fontchastagner, Julien. "Résolution du problème inverse de conception d'actionneurs électromagnétiques par association de méthodes déterministes d'optimisation globale avec des modèles analytiques et numériques." Phd thesis, Toulouse, INPT, 2007. http://oatao.univ-toulouse.fr/7621/1/fontchastagner.pdf.
Kpoton, Agapit. "De la stéréochimie de la substitution nucléophile sur le silicium à la synthèse de silanes pentacoordonnés modèles : mise en évidence de la pseudorotation au niveau de l'atome de silicium." Montpellier 2, 1991. http://www.theses.fr/1991MON20120.
Labroche, Nicolas. "Modélisation du système de reconnaissance chimique des fourmis pour le problème de la classification non-supervisée : application à la mesure d'audience sur internet." Tours, 2003. http://www.theses.fr/2003TOUR4033.
In this thesis, we model the chemical recognition system of ants to develop a new unsupervised clustering method, applied to the web usage mining problem. The biological principles of this model allowed us, on the one hand, to develop an artificial life simulator able to reproduce real-ant experiments and, on the other hand, to lay the foundations of our clustering algorithms AntClust and Visual AntClust. These algorithms associate each object with the genome of an artificial ant and simulate meetings between ants: the gathering of artificial ants with similar odours in the same nest builds the expected partition of the objects. We combine AntClust with a multi-modal representation of web sessions and an adapted similarity measure to help understand the behaviour of web users.
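A simplified sketch of the meeting mechanism follows. The similarity measure, acceptance threshold and merge rule are schematic stand-ins: the published AntClust uses probabilistic acceptance rules and per-ant adaptive templates, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two well-separated blobs; each artificial ant's "genome" is one point.
data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
n = len(data)
nest = [None] * n                        # each ant starts without a nest
threshold, next_label = 0.5, 0

def similarity(i, j):
    """Odour comparison between two ants (schematic stand-in)."""
    return 1.0 / (1.0 + np.linalg.norm(data[i] - data[j]))

for _ in range(30 * n):                  # random pairwise meetings
    i, j = rng.choice(n, size=2, replace=False)
    if similarity(i, j) >= threshold:    # acceptance: similar odours
        if nest[i] is None and nest[j] is None:
            nest[i] = nest[j] = next_label       # found a new nest together
            next_label += 1
        elif nest[i] is None:
            nest[i] = nest[j]
        elif nest[j] is None:
            nest[j] = nest[i]
        elif nest[i] != nest[j]:         # both nested and similar: merge the nests
            a, b = nest[i], nest[j]
            nest = [a if lab == b else lab for lab in nest]
    elif nest[i] is not None and nest[i] == nest[j]:
        nest[rng.choice([i, j])] = None  # rejection inside a nest: one ant expelled

print(len({lab for lab in nest if lab is not None}), "nests found")  # typically 2 here
```

The partition emerges from local pairwise meetings only, which is what makes the approach attractive when no global view of the data (such as the number of clusters) is available.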
Lefillastre, Paul. "Contribution au développement d'une nouvelle technologie d'optique ophtalmique pixellisée : étude et optimisation du report de films fonctionnalisés sur une surface courbe." Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00807651.
Combes, François. "Optimisation de protocoles dans les modèles non linéaires à effets mixtes : prise en compte de la précision d'estimation des paramètres individuels et impact sur la détection de covariables en pharmacocinétique." Paris 7, 2014. http://www.theses.fr/2014PA077016.
Non-linear mixed-effects models (NLMEM) are widely used in pharmacometrics. Using NLMEM, individual parameters can be estimated with a Bayesian approach such as the maximum a posteriori. The design, that is to say the number of samples per subject and their timing, influences the precision of the individual estimates and their shrinkage, i.e. their regression towards the mean. We implemented an approximation of the Bayesian Fisher information matrix using a first-order linearization of the model and, from this matrix, proposed a method to predict the shrinkage associated with a given design. We performed simulation studies based on two pharmacokinetic models to validate this prediction. We then explored the impact of the design on the power of tests to detect a covariate effect in the model. The likelihood ratio test (LRT) and the correlation test (CT) are the most common tests in NLMEM. The LRT requires estimating the model parameters under all the hypotheses considered; the CT is faster because it relies only on individual estimates, but it was thought to be influenced by shrinkage. Through a simulation study, we explored the relationship between shrinkage and the power of these tests. The results showed that, for different designs and different covariate effect sizes, the two tests have exactly the same power. Because of its faster execution, we therefore advise using the CT for the initial screening of relevant covariates and then building the final covariate model with the LRT.
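As a small illustration of the shrinkage being predicted, the sketch below simulates maximum a posteriori estimation of an individual random effect for a hypothetical mono-exponential pharmacokinetic model under a sparse two-sample design, then computes the variance-form eta-shrinkage. All parameter values and the model itself are illustrative assumptions, not the thesis's case studies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
# Hypothetical model: y(t) = D * exp(-k_i * t), with k_i = k_pop * exp(eta_i).
D, k_pop, omega, sigma = 100.0, 0.1, 0.3, 2.0
times = np.array([1.0, 4.0])             # sparse design: 2 samples per subject

def map_eta(y):
    """Maximum a posteriori estimate of eta for one subject."""
    def neg_post(eta):
        pred = D * np.exp(-k_pop * np.exp(eta) * times)
        return np.sum((y - pred) ** 2) / (2 * sigma**2) + eta**2 / (2 * omega**2)
    return minimize_scalar(neg_post).x

etas = rng.normal(0, omega, 500)         # simulated true individual effects
etas_hat = []
for eta in etas:
    y = D * np.exp(-k_pop * np.exp(eta) * times) + rng.normal(0, sigma, times.size)
    etas_hat.append(map_eta(y))

shrinkage = 1 - np.var(etas_hat) / omega**2   # eta-shrinkage, variance form
print(f"eta-shrinkage under this design: {shrinkage:.2f}")
```

The sparser or less informative the design, the more the MAP estimates collapse towards zero and the larger this shrinkage value becomes, which is exactly the design effect the thesis proposes to predict from the Bayesian Fisher information matrix without simulation.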
Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00561396.