Theses / dissertations on the topic "Optimisation du processus"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses / dissertations for your research on the topic "Optimisation du processus".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever it is available in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.
Damaj, Rabih. "Inférence statistique sur le processus de Mino". Thesis, Lorient, 2015. http://www.theses.fr/2015LORIS369/document.
The subject of this PhD thesis is statistical inference on the Mino process, which we define as a one-memory self-exciting point process whose intensity has a special form. We begin with a general description of self-exciting point processes and present the methods used to estimate the intensity parameters of these processes. We then consider the special case of a one-memory self-exciting point process used in signal processing, which we call the Mino process. This process can be interpreted as a renewal process whose interarrival times follow a special distribution that we study in detail. To estimate the parameters of a Mino process intensity, we use the maximum likelihood method and solve the likelihood equations with a Newton-Raphson algorithm. We show the efficiency of the method on simulated data. The convergence of the Newton-Raphson algorithm and the existence and uniqueness of the maximum likelihood estimators are proved. Lastly, we construct a hypothesis test to assess whether a point process is self-exciting or not.
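As a rough illustration of the estimation procedure this abstract describes, the sketch below fits a one-memory intensity of the assumed form lambda(u) = a + b*exp(-c*u), where u is the time elapsed since the last event, by Newton-Raphson on a finite-difference score. The functional form, starting values and simulation parameters are illustrative assumptions, not those of the thesis (which derives the analytical score and Hessian).

```python
import numpy as np

def simulate_gaps(a, b, c, n, rng):
    """Draw n interarrival times by thinning; lambda(u) = a + b*exp(-c*u) <= a + b."""
    gaps = []
    while len(gaps) < n:
        t = 0.0
        while True:
            t += rng.exponential(1.0 / (a + b))
            if rng.random() < (a + b * np.exp(-c * t)) / (a + b):
                gaps.append(t)
                break
    return np.array(gaps)

def loglik(theta, gaps):
    """Log-likelihood of the renewal-type process: log-intensity at each
    event minus the compensator integrated over each interarrival time."""
    a, b, c = theta
    if min(a, b, c) <= 0:
        return -np.inf
    lam = a + b * np.exp(-c * gaps)
    comp = a * gaps + (b / c) * (1.0 - np.exp(-c * gaps))
    return np.sum(np.log(lam) - comp)

def newton_raphson_mle(gaps, theta0, h=1e-5, tol=1e-8, max_iter=100):
    """Newton-Raphson on a finite-difference score and Hessian."""
    theta = np.asarray(theta0, dtype=float)
    e = np.eye(3) * h
    for _ in range(max_iter):
        g = np.array([(loglik(theta + e[i], gaps) - loglik(theta - e[i], gaps))
                      / (2 * h) for i in range(3)])
        H = np.array([[(loglik(theta + e[i] + e[j], gaps)
                        - loglik(theta + e[i] - e[j], gaps)
                        - loglik(theta - e[i] + e[j], gaps)
                        + loglik(theta - e[i] - e[j], gaps)) / (4 * h * h)
                       for j in range(3)] for i in range(3)])
        step = np.linalg.solve(H, g)
        theta = np.maximum(theta - step, 1e-6)  # keep parameters positive
        if np.linalg.norm(step) < tol:
            break
    return theta

rng = np.random.default_rng(0)
gaps = simulate_gaps(a=0.8, b=2.0, c=3.0, n=3000, rng=rng)
print(newton_raphson_mle(gaps, theta0=[1.0, 1.0, 2.0]))
```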
Djebali, Sonia. "Optimisation globale du processus d'usinage des surfaces gauches". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2496/.
Currently, free-form surfaces are used in various fields of activity, such as the manufacture of moulds or matrices used to make models. Free-form surfaces present a set of curved zones, i.e. concave and convex zones; these surfaces require a high quality level and reduced shape defects. To date, their machining is long, not optimized and costly: free-form surfaces are machined with a spherical tool while respecting the roughness requirement. The object of this study is to minimize the machining time of free-form surfaces using a toric end mill cutter and a parallel-planes milling strategy, while respecting the roughness requirement. The analytical expression of the machining time is unknown and, by hypothesis, it is assumed to be proportional to the length of the paths crossed by the cutting tool. This path length depends on the feed direction, and the maximal scallop height criterion must not be exceeded. To obtain an optimal feed direction at any point, the surface is divided into zones with low variation of the steepest-slope direction. The optimization problem is formulated so as to minimize the global path length crossed by the tool on the surface while respecting the scallop height criterion, with a penalty reflecting the time lost when the tool moves from one zone to another. The optimization parameters are the number and geometry of the zones and the optimal feed direction in each zone. To solve this problem, several optimization methods are used: combinatorial optimization and stochastic optimization. These methods are applied to a test surface; after simulation, the results show a significant saving in path length compared to machining in one zone with a spherical tool.
Ben Amor, Ichrak. "Analyse et optimisation du processus de régénération des polyamides". Electronic Thesis or Diss., CY Cergy Paris Université, 2024. http://www.theses.fr/2024CYUN1268.
The aim of this thesis is to develop an optimized regeneration process for the waste generated by injection molding. The study focuses on the impact of mechanical recycling on the mechanical and physical properties of blends of polyamide 6 (PA6) and polyamide 66 (PA66), changing the proportion of recycled material from 10% to 100% in increments of 10%. Six recycling cycles were conducted, and the properties of the blends were analyzed as a function of the recycled material content. Up to the sixth cycle, slight variations were observed in mechanical properties, except for elongation. Physical properties exhibited a trend of decreasing flexural strength and Young's modulus, with an increase in elongation. Microscopic analysis revealed that, for PA6, porosity increased with the number of recycling cycles. The study also addresses the influence of hygrothermal aging on the studied polyamides, examining the resulting changes in their mechanical properties. Following these notable modifications, a numerical simulation was performed to assess the impact of changes in recycled material characteristics on injection molding process parameters. The results indicate that several parameters, especially mold filling time and cooling time of the injected part, are influenced by material property changes. Defects such as air bubbles and weld lines were observed, emphasizing the importance of considering them in the manufacturing process. This study provides a comprehensive understanding of the effects of recycling on polyamide properties, as well as the practical implications of these changes in the context of injection molding. Keywords: Regeneration, Polyamide 6, Polyamide 66, Injection molding, Simulation.
Glardon, Carmen. "Simulation et optimisation du processus constructif de rénovation d'immeubles /". [S.l.] : [s.n.], 1995. http://library.epfl.ch/theses/?nr=1355.
Lagresle, Charly. "Analyse du processus d’usure abrasive et optimisation d’engrenage aéronautique". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI113.
Power transmissions are commonly used in many areas, including aeronautics. The system studied in this thesis is a helicopter gearbox whose purpose is to transmit the power generated by the turbine engine to the main gearbox and to adapt the rotational speed of the input shaft. Aeronautical gears are lightweight in order to maximize the power-to-weight ratio of the system. With thin rims, the mass of the system is reduced but its flexibility is increased. These gears, subjected to a large number of revolutions and severe operating conditions, are more likely to be exposed to failures such as abrasive wear, scoring or micro-pitting. The first objective of this PhD thesis is the understanding and simulation of the abrasive wear process for spur and helical gears. The material removal calculation is based on the well-known Archard equation, adapted to lubricated contacts. During the different phases of flight (take-off, landing, hover flight), the working conditions change; consequently, the quasi-static gearing behaviour, the lubrication and therefore the quantity of wear need to be adapted. To this end, a new methodology is proposed to accumulate wear depths over several different working conditions. This methodology makes it possible to analyze the kinetics and intensity of the abrasive wear process and to deduce the most severe phases of flight. The second goal echoes the first one. Following the identification of the abrasive wear problem, a multi-objective optimization of the quasi-static behaviour of the thin-rimmed gear is proposed. The goal of this optimization, based on the search for optimal tooth modifications, is to reduce potential sources of gear failure, in particular localized overpressures on tooth flanks or the Almen factor governing scuffing. Due to the complexity of the problem, a meta-heuristic multi-variable optimization algorithm (MO-TRIBES) is introduced. Multiple mono- and multi-objective gear optimization examples are provided in order to improve the quasi-static behaviour of the aeronautical gear: minimization of the fluctuations of the transmission error under load, reduction of the maximal contact pressures, decrease of the scuffing risk factors. The ideal type of tooth modification is discussed. Finally, by using the optimization module, the amount of wear is significantly reduced and a comfortable lifetime extension is obtained for the studied gear.
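The wear-accumulation idea in this abstract lends itself to a compact sketch: Archard's law in its local depth form, dh/dt = k*p*v (with the hardness absorbed into the coefficient k), integrated phase by phase over a flight profile. All phase durations, pressures, speeds and the coefficient below are made-up illustrative values, not data from the thesis.

```python
# Hypothetical flight phases: (name, duration [s], contact pressure [Pa],
# sliding speed [m/s]). Values are illustrative only.
phases = [("take-off", 300.0, 9e8, 2.0),
          ("hover",   1800.0, 6e8, 1.2),
          ("landing",  240.0, 8e8, 1.8)]

K = 1e-18  # dimensional wear coefficient [Pa^-1], assumed

def archard_depth(phases, k=K):
    """Accumulate Archard wear depth h = sum over phases of k * p * v * t,
    i.e. dh/dt = k * p * v, over successive operating conditions."""
    total, per_phase = 0.0, {}
    for name, t, p, v in phases:
        h = k * p * v * t
        per_phase[name] = h
        total += h
    return total, per_phase

total, per_phase = archard_depth(phases)
print(per_phase, "-> total depth [m]:", total)
```

Comparing the per-phase contributions is the sketch analogue of deducing the most severe phases of flight.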
Brandy, Anthony. "Optimisation d'un processus de conception par quantification de l'usage". Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0018.
Medical devices form a specific group apart from classical products because of their action on patients' health. By virtue of this indication, the use of such devices potentially induces a risk to patients' health, especially in case of failure or misuse. These products may be used by a large range of users, by themselves or assisted. This diversity in terms of references and possible uses, combined with the critical nature of these products, requires that the user and usability be taken into account significantly during product design. This is all the more important as medical devices form a very innovative industrial sector driven by technological advances. These developments need to be adapted to users, who do not always have all the knowledge needed to use them. Nevertheless, while the framework (standards and regulations) emphasizes the need to take the user into account, this approach is little implemented by companies, which are for a large part SMEs with limited resources. At the same time, the available tools and methods for use analysis show some weaknesses. Moreover, this approach collides with barriers induced by design habits, the multidisciplinary context and constraints related to the medical field. With the aim of giving a realistic industrial answer for integrating the user into the design process, a quantification tool for usability was developed. This tool draws on measurement methods and on result presentations dedicated to designers and the general public. In the experiments, the proposed index appeared consistent with a gold-standard index (SUS) while adding more detail to the results (tasks, panel priorities regarding usability components). Feedback showed a high level of understanding of this tool and significant interest from designers as well as the general public. In addition, the experiments highlighted the positive contribution of the proposed tool to design-team communication and to design choices, by bringing the usability approach into the upstream phases, especially creativity.
Cheickh, Obeid Nour Eldin. "Régulation de processus multivariables par optimisation d'un critère fuyant". Besançon, 1987. http://www.theses.fr/1988BESA2018.
Cheickh, Obeid Nour Eldin. "Régulation de processus multivariables par optimisation d'un critère fuyant". Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37603879k.
Mezghani, Aïda. "Optimisation du calcul des dispersions angulaires tridimensionnelles". Phd thesis, Ecole Centrale Paris, 2010. http://tel.archives-ouvertes.fr/tel-00576363.
Legault-Michaud, Ariane. "Optimisation du processus d'approvisionnement des clients américains chez Groupe Leclerc". Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/36678.
Contal, Emile. "Méthodes d’apprentissage statistique pour l’optimisation globale". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN038/document.
This dissertation is dedicated to a rigorous analysis of sequential global optimization algorithms. We consider the stochastic bandit model where an agent aims at finding the input of a given system that optimizes the output. The function linking the input to the output is not explicit; the agent sequentially queries an oracle to evaluate the output for any input. This function is not supposed to be convex and may display many local optima. In this work we tackle the challenging case where the evaluations are expensive, which requires a careful selection of the inputs to evaluate. We study two different goals: either maximizing the sum of the rewards received at each iteration, or maximizing the best reward found so far. The present thesis covers the field of global optimization where the function is a realization of a known stochastic process, and the novel field of optimization by ranking where we only perform function-value comparisons. We propose novel algorithms and provide theoretical concepts leading to performance guarantees. We first introduce an optimization strategy for observations received in batches instead of individually. A generic study of local suprema of stochastic processes allows us to analyze Bayesian optimization on nonparametric search spaces. In addition, we show that our approach extends to natural non-Gaussian processes. We build connections between active learning and ranking and deduce an optimization algorithm for potentially discontinuous functions.
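A minimal sketch of the sequential Bayesian optimization loop the abstract refers to, using a Gaussian-process surrogate and an upper-confidence-bound acquisition on a 1-D toy function. The objective, the kernel choice and the exploration weight are assumptions for illustration; the thesis itself treats batch observations, non-Gaussian processes and ranking-based variants.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                       # expensive black box (stand-in)
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(1)
grid = np.linspace(-2, 2, 401).reshape(-1, 1)   # candidate inputs
X = rng.uniform(-2, 2, 3).reshape(-1, 1)        # small initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6)
for t in range(20):
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sd          # upper confidence bound acquisition
    x_next = grid[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next)[0])

print("best input:", X[np.argmax(y)], "best value:", y.max())
```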
Ben Salem, Malek. "Model selection and adaptive sampling in surrogate modeling: Kriging and beyond". Thesis, Lyon, 2018. https://tel.archives-ouvertes.fr/tel-03097719.
Surrogate models are used to replace an expensive-to-evaluate function in order to speed up the estimation of a feature of that function (optimum, contour line, ...). Three aspects of surrogate modeling are studied in this work. (1) We propose two surrogate-model selection algorithms, based on a novel criterion called the penalized predictive score. (2) The main advantage of the probabilistic approach is that it provides a measure of the uncertainty associated with the prediction. This uncertainty is an efficient tool for constructing strategies for various problems such as prediction enhancement, optimization or inversion. We define a universal approach to uncertainty quantification that can be applied to any surrogate model, based on a weighted empirical probability measure supported by the predictions of cross-validation sub-models. (3) We present the so-called Split-and-Doubt algorithm, which performs feature estimation and dimension reduction sequentially.
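The cross-validation-based uncertainty measure of point (2) can be sketched as follows: leave-one-out sub-models produce a cloud of predictions at a new point, and the empirical distribution of that cloud serves as a model-agnostic predictive measure. Uniform weights and a GP base learner are simplifying assumptions here; the thesis derives specific weights.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (30, 1))
y = np.sin(4 * X).ravel()

def cv_predictive_cloud(Xtr, ytr, xnew):
    """Leave-one-out sub-models each predict at xnew; the resulting cloud
    is a (here uniformly weighted) empirical predictive distribution."""
    preds = []
    for i in range(len(Xtr)):
        mask = np.arange(len(Xtr)) != i
        m = GaussianProcessRegressor().fit(Xtr[mask], ytr[mask])
        preds.append(m.predict(np.atleast_2d(xnew))[0])
    return np.array(preds)

cloud = cv_predictive_cloud(X, y, xnew=[0.3])
print("mean:", cloud.mean(), "spread:", cloud.std())
```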
Boulet, Jean-François. "Optimisation simultanée des processus de production et des stratégies de maintenance". Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/560/1/BOULET_Jean%2DFran%C3%A7ois.pdf.
Virin, Teddy. "Modélisation, optimisation et contrôle d'un processus d'épandage pour les applications agricoles". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2007. http://tel.archives-ouvertes.fr/tel-00717799.
Jansen, Yann. "Modélisation et optimisation du processus de formage de pièces en zinc". Thesis, Paris, ENMP, 2013. http://www.theses.fr/2013ENMP0055.
The aim of this study is to predict the rupture of zinc alloy sheets by means of finite element simulations. The mechanical behaviour of the material was tested by tensile tests along several directions and for several zinc grades. The materials show a highly anisotropic mechanical response and a high strain-rate and temperature sensitivity. This set of experimental data was modelled by means of the Norton-Hoff law and the Hill 48 plastic criterion. Moreover, formability was tested by tensile and plane-strain tests, as well as hydraulic bulge tests. A highly anisotropic formability, unseen in the literature, was observed. This formability is modelled with different rupture criteria, either taken from the literature or specifically developed for the zinc alloys studied. A stress-based criterion was chosen to predict formability and implemented in the Forge2009® software. Academic and industrial forming processes were simulated with Forge2009®, leading to an accurate description of the mechanical behaviour and of the rupture localisation.
Boulet, Jean-François. "Optimisation simultanée des processus de production et des stratégies de maintenance /". Thèse, Montréal : École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1456294201&sid=2&Fmt=2&clientId=46962&RQT=309&VName=PQD.
"Thesis presented to the École de technologie supérieure in partial fulfilment of the requirements for the master's degree in automated production engineering." Bibliography: leaves [143]-145. Also available in electronic form.
Balme, Quentin. "Etude paramétrique et optimisation d'un processus de combustion de charges organiques". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAI091/document.
The LPTI/CEA (Innovative Thermal Processes Laboratory) is developing a process for the incineration-vitrification of radioactive waste. The first step consists in eliminating the organic charge (polymer) by incinerating the waste suspended in a furnace. The objective of the work presented in this document is to study the operating parameters likely to modify the degradation rate of the polymers, in order to optimize the incineration step. In this study, the degradation rate is measured by the mass loss of the compounds. A macro-thermobalance allowing work with polymer masses (polyethylene and neoprene) ranging from 5 g to 65 g was developed in order to carry out the parametric studies (sample mass, furnace temperature, percentage of oxygen in the gas, type of container) needed to evaluate the degradation rates of polyethylene and neoprene. These studies will then be extended to evaluate the combustion kinetics of complex organic systems confined in different vectors. In parallel, two models were developed. The first describes the gas phase by CFD (computational fluid dynamics) and the polymer by a "0D" model assuming a homogeneous temperature in the sample; the second describes both phases by CFD. The objective of these models, solved in transient mode, is to calculate the degradation rate of the polyethylene during its combustion in the macro-thermobalance and to describe the behavior observed experimentally. Experimental and modeling results show the importance of the flame position and of heat transfer in the polymer on its degradation rate. For neoprene, whose degradation produces carbonaceous residues (char), it is shown experimentally that the char oxidation stage is, at the study temperatures (>600 °C), limited by the transfer of oxygen into the solid residues.
Mauger, Clémence. "Optimisation de l'utilité des données lors d'un processus de k-anonymisation". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0076.
To provide privacy guarantees for anonymized databases, anonymization models emerged a few decades ago, among them k-anonymity, l-diversity, t-closeness and differential privacy. In this thesis, we mainly focus on the k-anonymity model through an in-depth analysis of ways to produce databases that meet this confidentiality criterion while optimizing data utility. From a table, one can consider the set of its k-anonymous versions, whose cardinality can be exponential with respect to k. These k-anonymous versions can be scored by the amount of data modification, which is correlated with data utility. This work therefore studies how to optimize data utility during the k-anonymization of a database. First, we studied information-loss metrics that estimate the amount of information lost in a table during a k-anonymization process. The metrics were used within a k-anonymization algorithm to guide the equivalence-class mergers leading to a k-anonymous table. From this study, we sought to identify characteristics in the definitions of information-loss metrics that allow the production of good-quality k-anonymous tables with regard to several criteria. Second, we were interested in the distribution of sensitive data within k-anonymous tables, using the l-diversity and t-closeness models. More specifically, we proposed optimization strategies combining information-loss metrics, l-diversity and t-closeness to be used during a k-anonymization process, the aim being to preserve good levels of l-diversity and t-closeness in the k-anonymous tables produced without sacrificing data utility. Third, we tackled the formulation of the k-anonymization problem itself. We relied on the original notion of generalization groups to state the k-anonymization problem of a table in terms of the incidence matrix of its associated hypergraph. Thanks to this new representation, we proposed an original procedure, declined into five algorithms, for building a k-anonymous table by partitioning the equivalence classes of a k'-anonymous table with k' >= k. Experiments carried out on two public tables showed that the proposed algorithms outperform the previously used k-anonymization algorithm in terms of information preservation.
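As a toy illustration of the utility/privacy trade-off discussed here, the sketch below k-anonymizes a single quasi-identifier (ZIP codes) by choosing the smallest generalization level at which every equivalence class has at least k rows, and reports a crude information-loss score (fraction of masked digits). The thesis works with multi-attribute tables, class merging and far richer metrics; everything below is a simplifying assumption.

```python
from collections import Counter

def generalize(zipcode, level):
    """Replace the last `level` digits by '*' (one generalization hierarchy)."""
    return zipcode[:len(zipcode) - level] + "*" * level if level else zipcode

def k_anonymize(zips, k):
    """Smallest uniform generalization level making every equivalence class
    (group of identical values) at least k large; also returns an
    information-loss score = fraction of masked digits."""
    n_digits = len(zips[0])
    for level in range(n_digits + 1):
        table = [generalize(z, level) for z in zips]
        if min(Counter(table).values()) >= k:
            return table, level / n_digits
    raise ValueError("cannot reach k-anonymity")

zips = ["75011", "75012", "75018", "69001", "69002", "69003"]
print(k_anonymize(zips, k=3))   # -> all classes of size >= 3, loss 0.2
```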
Baysse, Camille. "Analyse et optimisation de la fiabilité d'un équipement opto-électrique équipé de HUMS". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00986112.
Mercier, Sophie. "Modèles et optimisation de politiques de maintenance de systèmes". Université de Marne-la-Vallée, 2000. http://www.theses.fr/2000MARN0087.
El, Falou Salah. "Programmation répartie, optimisation par agent mobile". Phd thesis, Université de Caen, 2006. http://tel.archives-ouvertes.fr/tel-00123168.
Texto completo da fonteet d'échanger des informations entre ces différentes entités. Les agents
mobiles apparaissent dans ce contexte comme une solution prometteuse
permettant la construction d'applications flexibles, adaptables aux
contraintes de l'application et de l'environnement d'exécution. Dans
cette thèse, la mobilité est étudiée sous deux angles. D'une part,
l'envoi du code sur le serveur permet d'adapter les services distants
aux exigences du client ce qui permet la réduction du trafic réseau.
D'autre part, une machine surchargée peut déléguer l'exécution de
certaines de ces tâches à une autre machine ce qui permet de gagner au
niveau du temps d'exécution. Une architecture basée sur la technologie
d'agents mobiles est proposée. Elle permet l'équilibrage de charge dans
une application répartie. L'architecture proposée est décentralisée et
l'équilibrage de charge se fait d'une façon dynamique. Un agent mobile
collecteur est utilisé afin de construire une vision globale du système.
Pour la réduction du trafic, nous proposons la communication par un
agent intelligent hybride. L'agent utilise ainsi deux modes,
client/serveur ou migration (échange locale), pour sa communication. Le
processus décisionnel de Markov est utilisé pour trouver la politique
optimale du déplacement de l'agent. Un travail d'expérimentation sur des
problèmes concrets permet de valider les algorithmes proposés.
Lopez Arellano, Raquel. "La qualité de comprimés à libération prolongée : optimisation du processus de fabrication". Lyon 1, 1990. http://www.theses.fr/1990LYO1T119.
Ultré, Pierre. "Analyse critique et optimisation d'un processus de production d'éléments préfabriqués en bois". Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/33041.
Currently, industrialized housing faces increased demand while many small producers have to deal with a lack of capacity and non-automated production processes. This master's thesis focuses on how to improve production processes in this field in order to gain efficiency. First, a literature review leads to a comprehensive state of the art on scheduling and production-management practices in industrialized housing, comparing this sector to traditional construction and to the manufacturing industry, identifying the various problems enterprises may face and suggesting potential solutions. Thereafter, a case study involving a company specializing in prefabricated elements is studied in detail, with a focus on the panel manufacturing processes. Improvements are proposed that decrease operation times and make better use of production capacity while respecting the labor-availability constraint. A simulation model is then developed to test these suggestions and to evaluate the gain each solution can generate in order to meet a rise in demand. Results show that the most interesting improvements are those involving investment in automation technologies.
Calandre, Thibaut. "Optimisation d'observables de premier passage pour des processus de diffusion intermittents confinés". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066141/document.
In this thesis, we study first-passage properties of an intermittent Brownian motion inside a confining domain. We consider a minimal model of heterogeneous catalysis in which a molecule performs surface-mediated diffusion inside a confining domain whose boundary contains catalytic sites. We obtain results for several observables: (i) the mean first-passage time to reach a target, (ii) the splitting probabilities that the molecule reaches a specific target, (iii) the territory covered on the confining surface before the molecule exits the domain, and (iv) the probability of reacting with catalytic sites. These results are exact for point-like targets and are shown to be accurate also for extended targets, located on the surface or inside the bulk. Depending on the relative positions of the entrance and exit points, very different behaviors with respect to the mean adsorption time of the molecule on the surface are found. Although non-intuitive for bulk targets, boundary excursions are found to be able to minimize the search time. We also present a simple model of an ordered porous medium, in two variants: (i) for a simple Brownian motion and (ii) for a surface-mediated diffusion with a persistence parameter b, which leads to a less simple expression for the effective diffusion coefficient. Our main result shows that in the non-persistent limit (b = 0) both results coincide. We also analyze the behavior of the effective diffusion coefficient with respect to the mean adsorption time, showing optimization possibilities. Numerical Monte Carlo simulations and a finite element solver were used to validate our theoretical results.
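The optimization of the mean first-passage time with respect to the desorption rate can be explored numerically with a crude Monte Carlo sketch: bulk diffusion in a disk, adsorption on the circle, one-dimensional angular diffusion until an exponentially distributed desorption, and absorption on a boundary arc. Radii, diffusivities, the relaunch distance and the target size below are all illustrative assumptions, and the scheme is a naive Euler discretization rather than the exact semi-analytical treatment of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
R, a = 1.0, 0.05      # disk radius, desorption relaunch distance (assumed)
D2, D1 = 1.0, 1.0     # bulk and surface diffusivities (assumed)
dt = 5e-4
eps = 0.2             # target half-width (angle), centred at angle 0

def mfpt(lam, n_traj=100):
    """Crude estimate of the mean time to reach the boundary target,
    for surface-mediated diffusion with desorption rate lam."""
    times = []
    for _ in range(n_traj):
        x, t = np.zeros(2), 0.0
        while True:
            while True:                       # bulk phase until adsorption
                x += np.sqrt(2 * D2 * dt) * rng.standard_normal(2)
                t += dt
                if np.hypot(*x) >= R:
                    break
            phi = np.arctan2(x[1], x[0])
            n = int(rng.exponential(1.0 / lam) / dt)   # surface phase length
            dphi = np.sqrt(2 * D1 * dt) / R * rng.standard_normal(n)
            path = phi + np.cumsum(dphi)
            hit = np.nonzero(np.abs((path + np.pi) % (2 * np.pi) - np.pi) < eps)[0]
            if hit.size:                      # absorbed on the target arc
                times.append(t + (hit[0] + 1) * dt)
                break
            t += n * dt
            phi = path[-1] if n else phi
            x = (R - a) * np.array([np.cos(phi), np.sin(phi)])  # desorb inward
    return np.mean(times)

for lam in [0.5, 2.0, 8.0]:   # scan the desorption rate for a minimum
    print(lam, mfpt(lam))
```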
Aliakbari, Alireza. "Étude et optimisation des processus d’usinage utilisant des dispositifs à amortissement passif". Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAE002.
The most significant and critical objective of all industrial units nowadays is mass production at a lower cost in terms of material and resources, and the machining industry is no exception. Vibration is one of the most harmful problems facing machining processes, as it significantly affects the quality of the final product. Machine-tool vibration results from the forces generated in the process. During the last decades, much research has been devoted to this topic from different angles: understanding, modeling, detection and suppression of chatter and machine-tool vibration have led to various methodologies for avoiding them. This project, entitled "Study and optimization of machining processes using passive damping devices", is a CIFRE PhD contract (Industrial Agreement of Training through Research) granted by the ANRT (Association Nationale de la Recherche et de la Technologie) in collaboration with the SECO company and AMVALOR (the Arts et Métiers commercial branch). It brings together SECO R&D and the LISPEN laboratory to boost the efficiency and productivity of the Steadyline product by undertaking a technical and scientific investigation of the dynamic behavior of processes using these devices. Steadyline is a family of anti-vibration toolholders known for minimizing or even eliminating vibration during all sorts of machining processes. The key principle of the Steadyline is a tuned mass damper which mitigates toolholder vibration, resulting in a better surface finish and higher productivity. This kind of solution, adopted by most toolholder makers in the world, has become popular among machinists, but their feedback points out some drawbacks such as insufficient robustness and erratic efficiency. The key objectives of this research are to analyze the behavior of the existing Steadyline, both with and without the machining process, and to propose solutions to improve its efficiency and better mitigate chatter problems. These solutions are intended to be transferred to industry and the market in a later industrialization phase.
Groc, Michel. "Optimisation et conduite de procédés sous haut flux d'énergie : soudage laser". Perpignan, 1998. http://www.theses.fr/1998PERP0341.
Plassart, Stéphan. "Optimisation en-ligne pour les systèmes dynamiques en temps-réel". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM017.
Energy consumption is a crucial issue for real-time systems; optimizing it online, i.e. while the processor is running, has become essential and is the goal of this thesis. This optimization is done by adapting the processor speed during job execution. The thesis addresses several situations with different knowledge of past, active and future job characteristics. Firstly, we consider that all job characteristics are known (the offline case), and we propose a linear-time algorithm to determine the speed schedule for executing n jobs on a single processor. Secondly, using Markov decision processes, we solve the case where past and active job characteristics are entirely known and, for future jobs, only the probability distributions of the job characteristics (arrival times, execution times and deadlines) are known. Thirdly, we study a more general case where the actual execution time is only discovered when the job completes. We also consider the case where no statistical knowledge of the jobs is available, so that learning methods must be used to determine the optimal processor speeds online. Finally, we propose a feasibility analysis (the processor's ability to execute all jobs before their deadlines when it always runs at maximal speed) of several classical online policies, and we show that our dynamic programming algorithm is also the best in terms of feasibility.
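The speed-scaling problem admits a very small dynamic-programming sketch: with discrete speeds and the classical power model P(s) = s^alpha, the minimal energy to finish a given amount of work within a deadline satisfies a one-step Bellman recursion. The speed set, the exponent and the job below are assumptions for illustration; the thesis's offline algorithm is linear-time, and its stochastic case uses full Markov decision processes.

```python
from functools import lru_cache

SPEEDS = range(0, 4)   # available processor speeds (assumed discrete)
ALPHA = 3              # power ~ speed**ALPHA (classical CMOS-style model)

@lru_cache(maxsize=None)
def min_energy(work, slots):
    """Minimal energy to finish `work` units within `slots` time slots,
    by dynamic programming over the speed chosen in the first slot."""
    if work <= 0:
        return 0.0
    if slots == 0:
        return float("inf")   # deadline missed: infeasible
    return min(s ** ALPHA + min_energy(max(work - s, 0), slots - 1)
               for s in SPEEDS)

def best_first_speed(work, slots):
    return min(SPEEDS, key=lambda s: s ** ALPHA
               + min_energy(max(work - s, 0), slots - 1))

# 10 units of work, deadline in 5 slots -> constant speed 2, energy 40
print(min_energy(10, 5), best_first_speed(10, 5))
```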
Ouazène, Yassine. "Maîtrise des systèmes industriels : optimisation de la conception des lignes de production". Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0025/document.
During the design phase of a production system, all functional and technological alternatives should be explored in order to propose the best possible solutions. This often results in a combination of several sub-problems, such as selecting pieces of equipment from a set of candidate solutions for each manufacturing operation, dimensioning and allocating buffers and storage areas, balancing the workload among the different workstations, specifying the type and capacity of the material-handling system, and laying out the equipment, i.e. determining which workstations should be adjacent to each other and how they should be connected. In this context, we were interested in the performance evaluation and optimization of serial production lines, which are very common in high-volume production systems. We propose a new analytical method, known as the "Equivalent Machines Method", to evaluate the production-line throughput. This method has the advantage of being both more accurate and faster than existing approaches in the literature. We also establish the relevance of this method for evaluating the production rate of series-parallel systems and of serial lines whose machines have multiple failure modes. Finally, we develop a new algorithm based on nonlinear programming to solve the buffer allocation problem.
Cocquempot, Vincent. "Surveillance des processus industriels complexes : génération et optimisation des relations de redondance analytiques". Lille 1, 1993. http://www.theses.fr/1993LIL10053.
Massoulié, Laurent. "Stabilite, simulation et optimisation des systemes a evenements discrets". Paris 11, 1995. http://www.theses.fr/1995PA112172.
Yu, Haiyang. "Optimisation de la fiabilité et de la disponibilité des systèmes à composants dépendants". Troyes, 2006. http://www.theses.fr/2006TROY0021.
Nowadays, human beings depend more than ever on the performance of man-made systems. A high level of system performance is often necessary, but it comes at a huge cost. Our study of system reliability/availability optimization (SR/AO) balances performance against cost. In this thesis, we first present the state of the art on SR/AO, with attention to the essential disciplines. Based on this review, we decided to combine the dynamic approach with optimization methods and to choose the parameters of stochastic systems as decision variables. This combination is then tested on two simple systems, showing that the characteristics of the stochastic systems contribute to the optimization procedure. Meanwhile, through a study of two typical systems, dependency is recognized as a significant lever for improving system performance; in particular, dependency is what essentially distinguishes dynamic approaches from static ones. We therefore survey the diverse types of dependencies found in the literature. Given its importance, the redundant dependency is defined and then quantified by the so-called dependence function. After that, we study two redundant systems whose reliability/availability accounts for the redundant dependency. For both optimization models, the quantitative structure induced by the dependence function is indispensable for establishing the optimization procedure. The effectiveness of combining stochastic analysis with optimization methods for studying SR/AO is thereby demonstrated.
Gaudrie, David. "High-Dimensional Bayesian Multi-Objective Optimization". Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02356349.
This thesis focuses on the simultaneous optimization of expensive-to-evaluate functions that depend on a high number of parameters. This situation is frequently encountered in fields such as design engineering through numerical simulation. Bayesian optimization relying on surrogate models (Gaussian processes) is particularly adapted to this context. The first part of this thesis is devoted to the development of new surrogate-assisted multi-objective optimization methods. To improve the attainment of Pareto-optimal solutions, an infill criterion is tailored to direct the search towards a user-desired region of the objective space or, in its absence, towards the Pareto front center introduced in our work. Besides targeting a well-chosen part of the Pareto front, the method also takes the optimization budget into account in order to provide as wide a range of optimal solutions as possible within the limit of the available resources. Next, inspired by shape-optimization problems, an optimization method with dimension reduction is proposed to tackle the curse of dimensionality. The approach hinges on the construction of hierarchized, problem-related auxiliary variables that can describe all candidates globally, through a principal component analysis of potential solutions. A few of these variables suffice to approximate any solution, and the most influential ones are selected and prioritized inside an additive Gaussian process. This variable categorization is then further exploited in the Bayesian optimization algorithm, which operates in reduced dimension.
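The dimension-reduction step described here can be sketched in a few lines: a principal component analysis of candidate designs yields a handful of hierarchized auxiliary variables, on which a Gaussian-process surrogate is then fitted. The synthetic "shapes", the number of retained components and the stand-in objective are assumptions; the thesis additionally uses an additive GP that prioritizes the most influential components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
# 200 candidate "shapes", each described by 500 raw parameters (assumed),
# with a low effective dimension baked in for the sake of the example.
shapes = rng.standard_normal((200, 500))
shapes[:, 1:] *= np.geomspace(1.0, 1e-3, 499)

pca = PCA(n_components=5).fit(shapes)   # hierarchized auxiliary variables
Z = pca.transform(shapes)               # a few components describe all designs
y = np.sin(Z[:, 0]) + 0.1 * Z[:, 1] ** 2   # stand-in objective

gp = GaussianProcessRegressor().fit(Z[:150], y[:150])   # surrogate in reduced space
print("R2 on held-out designs:", gp.score(Z[150:], y[150:]))
```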
Gérard, Karine. "Optimisation automatique des incidences des faisceaux par l'algorithme du simplexe et optimisation des contrôles qualité par la Maîtrise Statistique des Processus (MSP) en Radiothérapie Conformationnelle par Modulation d'Intensité (RCMI)". S. l. : S. n, 2008. http://www.scd.inpl-nancy.fr/theses/2008_GERARD_K.pdf.
Raufaste, Eric. "Psychologie Ergonomique de l'Optimisation des Processus Décisionnels". Habilitation à diriger des recherches, Université Toulouse le Mirail - Toulouse II, 2003. http://tel.archives-ouvertes.fr/tel-00316445.
Texto completo da fonteLe chapitre I présente la méthode que nous avons développée, « l'Analyse Rationnelle des Situations de Travail ». Les valeurs de rationalité y sont définies comme les dimensions de l'activité cognitive sur lesquelles opère l'optimisation. Une exploration empirique en médecine d'urgence est rapportée, au regard notamment des va-leurs de richesse, pertinence et flexibilité. Le chapitre II aborde la question du choix des cadres normatifs utilisables pour la pertinence et la flexibilité. Pour la pertinence, des expériences montrent, dans le diagnostic radiologique, la meilleure plausibilité psychologique de la théorie des possibilités comparée à la théorie des probabilités. Pour la flexibilité, nous présentons le tout premier test d'une axiomatisation du raisonnement non monotone, le système P.
Au niveau descriptif, nos travaux visent à comprendre les processus qui gèrent les valeurs de rationalité. Les chapitres III à VI présentent le développement d'un modèle théorique global des processus qui gèrent les valeurs de richesse (chapitre III), pertinence (chapitres IV et V), et flexibilité (chapitres VI), respectivement. Des expériences menées auprès de radiologues et de médecins urgentistes sont présentées. Les chapitres III et IV sont basés sur les mécanismes associationnistes de notre modèle théorique initial. Le chapitre IV présente des investigations en vue d'intégrer un module affectif dans notre modèle théorique. Trois modèles structuraux y soutiennent la plausibilité de l'heuristique d'affect dans le jugement et la décision. Le chapitre VI constitue une extension de notre modèle théorique général en vue de le rendre capable de rendre compte de la flexibilité. Bien que la flexibilité soit généralement conçue comme l'adaptation de la représentation aux données externes, nous fournissons à la fin de ce chapitre des données sur une autre forme de flexibilité : l'ajustement des données à la représentation.
La partie projet du document commence par un examen critique des travaux présentés dans la partie bilan. La substance du projet est ensuite exposée en trois chapitres. Le premier traite des investigations expérimentales que nous projetons de réaliser. Le deuxième présente nos projets de simulations informatiques. Le troisième et dernier volet est consacré à l'exposé de recherches plus orientées vers la recherche appliquée.
Habibi, Muhammad Khoirul Khakim. "Optimisation d'une chaîne logistique inverse avec la prise en compte des processus de désassemblage". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEM005.
This dissertation supports and proposes better management for the implementation of the circular economy by integrating the activities of reverse supply chains. The hypothesis is that integrating the decisions of at least two reverse-supply-chain activities leads to better decisions, notably for the collection of end-of-life products and their disassembly. A deterministic problem, called the Collection-Disassembly Problem, integrating both collection and disassembly processes, is introduced together with its formulation. Due to the lack of available instances in the literature, some instances are generated. Another, non-integrated formulation is developed and solved using the commercial solver CPLEX; the results show that the integrated model proposes better decisions in terms of total cost. Some approximate methods are developed because CPLEX is unable to provide optimal solutions within acceptable CPU times, notably for large instances. An extended version of the problem is then introduced, because reverse supply chains frequently deal with uncertainty in parameters such as the quality and quantity of end-of-life products and the demand for components, and more than one vehicle is often available to collect the products. This second problem, called the Stochastic Multi-Vehicle Collection-Disassembly Problem, is formalised as a two-stage stochastic program, assuming that the uncertain parameters follow known probability distributions whose realisation comes after the planning stage. To solve it, two methods combined with an algorithmic framework based on Sample Average Approximation are developed. Another problem, called the Integrated Procurement-Disassembly Problem, is also studied; along with the decisions on the collection process, it emphasises the decisions of the disassembly-line balancing problem.
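The Sample Average Approximation framework mentioned above can be illustrated with a deliberately tiny two-stage toy: the first-stage decision is how many end-of-life products to collect; yield (quality) and component demand are revealed afterwards, and the expected recourse cost is replaced by an average over sampled scenarios. All costs and distributions below are invented for illustration and bear no relation to the instances of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
c_collect, price_short, salvage = 2.0, 10.0, 0.5   # illustrative costs

def saa_cost(q, yields, demands):
    """Sample-average cost of collecting q end-of-life products:
    first-stage collection cost plus averaged second-stage recourse
    (shortage penalty minus salvage) once yield and demand are revealed."""
    parts = q * yields                          # usable components per scenario
    short = np.maximum(demands - parts, 0.0)
    extra = np.maximum(parts - demands, 0.0)
    return c_collect * q + np.mean(price_short * short - salvage * extra)

N = 5000
yields = rng.uniform(0.3, 0.9, N)               # uncertain product quality
demands = rng.poisson(40, N).astype(float)      # uncertain component demand

best = min(np.arange(0, 201), key=lambda q: saa_cost(q, yields, demands))
print("SAA-optimal collection quantity:", best)
```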
Groff, Arnaud. "Optimisation de l'innovation par l'élaboration d'un processus de créativité industrielle : cas de l'industrie automobile". Paris, ENSAM, 2004. http://www.theses.fr/2004ENAM0012.
Today, the complexity and competitiveness of markets require companies to favour innovation. This innovation is the fruit of the collaboration of the design actors, however multidisciplinary they may be. In the automotive sector, the oligopolistic nature of the market obliges manufacturers to optimize innovation in order to survive. We therefore worked on this problem in the automotive sector by adopting both an organizational approach (professions) and a procedural one (actions). After establishing a state of the art on innovation in an industrial environment, our action-research position within a major automotive group allowed us to identify the optimization of the creativity activity within the design process as a major source of innovation gains in the automotive environment. This research work thus aims to formalize the organization and management of the search for creative and innovative solutions in the design of manufactured goods, the final goal being to optimize innovation in product design by improving the efficiency and consistency of the work between project teams and creativity-innovation departments. Our hypotheses and experimental work then allowed us to elaborate a Process of Industrial Creativity and its steering tool, which integrate the complexity of the professions and of innovation and thereby answer our problem.
Rupprecht, Jean-Francois. "Optimisation de processus de recherche par des marcheurs aleatoires symetriques, avec biais ou actifs". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066488/document.
Random search processes can model nuclear reactions or animal foraging. In this thesis, we identify optimal search strategies that minimize the mean first-passage time (MFPT) to a target for various processes. First, for symmetric and biased Brownian particles, we compute the distribution of exit times through an opening within the boundary of angular sectors, annuli and rectangles, and we conclude on the optimizability of the MFPT in terms of the geometric parameters. Second, for walks that switch between volume and surface diffusion, we determine the mean exit time through an opening in the bounding surface. Under analytical criteria, an optimal desorption rate minimizes the MFPT; we justify that this optimality is a general property through a study of the roles of the geometry, the adsorption properties and a bias in the bulk random walk. Third, for active walks composed of straight runs interrupted by reorientations in a random direction, we obtain the expression of the optimal reorientation rate which minimizes the MFPT to a centered spherical target within a spherical confinement, in two and three dimensions. In the last chapter, we model the motion of eukaryotic cells by active Brownian walks. We explain an experimental observation: the persistence time is exponentially coupled with the speed of the cell. We also obtain a phase diagram for each type of trajectory. This model is a first step towards quantifying the search efficiency of immune cells in terms of a minimal number of biological parameters.
Ramirez, serrano Oscar. "Optimisation et pilotage du processus d’innovation de medicalex : application aux implants orthopédiques sur mesure". Thesis, Paris, ENSAM, 2016. http://www.theses.fr/2016ENAM0080/document.
For a majority of companies, innovation has become an essential element of corporate sustainability. However, during the product development phase the designer is forced to make design decisions that subsequently have a strong impact on the acceptability of the products by the market, and orthopaedic implant design is no exception. The aim of this thesis is to show that it is possible to integrate, in the early phases of the orthopaedic implant development process, a decision-making tool for choosing concepts that incorporates technical, medical, surgical and economic criteria. The goal is to reduce the risk for the patient but also the risk of commercial failure. The first part of this document analyses the context of the study, particularly for medical devices. The state of the art then reviews the various innovation processes and, more specifically, the design process of orthopaedic medical implants. This work led us to question and analyse the evaluation methods for potential innovations available in the scientific literature, and to adapt them to the implant sector. From this analysis a question emerged, lying at the heart of our research: how can innovative implants be designed with a low level of risk (clinical risk, failure risk, usage risk)? The answer resulted in the proposal of a new decision-making tool (Hypothesis 1) integrated within the upstream phases of a decision process suitable for developing implants (Hypothesis 2). These were then checked and validated in three experiments that resulted in the development of orthopaedic implants successfully implanted in patients.
Gauthier, Alexis. "Processus interactif d'optimisation avec prise en charge des préférences de l'utilisateur". Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37034.
Texto completo da fonteTableau d’honneur de la Faculté des études supérieures et postdoctorales, 2019-2020.
A decision maker using an optimization system may be offered a solution that he considers inadequate. If he uses an interactive reoptimization system, he can add a constraint or a preference and request a new solution. However, the preferences expressed about the various values composing the solution are usually lost over the iterations. This thesis proposes an approach for taking the user's preferences into account, based on mathematical goal-programming techniques, together with a methodology for implementing it. Finally, the proposed approach is compared with a heuristic approach on the problem of interactively planning contributions to and withdrawals from a Registered Education Savings Plan. In both cases, the prototypes make it possible to adjust the solutions in real time, with the mouse, on interactive charts. The prototype driven by specific heuristics does not allow the user to reach all admissible solutions, often because of circular-adjustment problems where the user may return to a previous state after some iterations. The prototype using the proposed mathematical goal-programming approach allows the user to navigate coherently through the solution space. In most contexts, this method should make it easier for the decision maker to reach his preferred solution.
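A minimal sketch of the goal-programming idea: each user-adjusted value becomes a target, deviation variables absorb the gap between the plan and the targets, and a linear program minimizes the total deviation subject to the hard constraints. The three-period targets, the budget and the cost vector are invented for illustration and are not the thesis's RESP model.

```python
import numpy as np
from scipy.optimize import linprog

# Targets the user dragged on the chart (assumed), total budget fixed.
targets = np.array([2500.0, 1000.0, 1500.0])
budget = 4500.0
n = len(targets)

# Variables: [x_1..x_n, dplus_1..dplus_n, dminus_1..dminus_n]
c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])  # minimise total deviation
A_eq = np.zeros((n + 1, 3 * n))
b_eq = np.zeros(n + 1)
for i in range(n):                 # x_i - dplus_i + dminus_i = target_i
    A_eq[i, i] = 1.0
    A_eq[i, n + i] = -1.0
    A_eq[i, 2 * n + i] = 1.0
    b_eq[i] = targets[i]
A_eq[n, :n] = 1.0                  # contributions must sum to the budget
b_eq[n] = budget

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (3 * n))
print("plan closest to the user's targets:", res.x[:n])
```

Because the targets persist as data of the program, preferences expressed at one iteration are not lost at the next, which is the point of the approach.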
Frécon, Jordan. "Méthodes d'optimisation pour l'analyse de processus invariants d'échelle". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN031/document.
Scale invariance relies on the intuition that temporal dynamics are not driven by one (or a few) characteristic scale(s). This property is massively used in the modeling and analysis of univariate data stemming from real-world applications. However, its practical use encounters two difficulties in modern applications: scaling properties are not necessarily homogeneous in time or space, and the multivariate nature of data leads to the minimization of highly non-linear and non-convex functionals when estimating the scaling parameters. The first originality of this work is to treat non-homogeneous scale invariance as a joint detection/segmentation and estimation problem, formulated as the minimization of vectorial functionals built around a total-variation penalization, in order to estimate both the boundaries delimiting the changes and the scaling properties within each region. The second originality lies in the design of a branch-and-bound procedure for minimizing the non-convex functional arising in the full identification of the bivariate extension of fractional Brownian motion, the reference model for univariate scale invariance. This procedure is applied in practice to Internet traffic data in the context of anomaly detection. Thirdly, we propose contributions specific to total-variation denoising: a Poisson data-fidelity model related to a state-detection problem in intermittent fluorescence, and an automatic selection of the regularization parameter.
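A minimal sketch of the total-variation building block used throughout: the 1-D TV-penalized least-squares problem solved by projected gradient on its dual (a Chambolle-type scheme). The signal, the noise level and the regularization weight are illustrative assumptions; the thesis works with vectorial functionals and joint segmentation/estimation of scaling exponents, not this scalar toy.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=2000, tau=0.25):
    """Piecewise-constant estimate minimising
        0.5 * ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    by projected gradient on the dual; tau <= 1/4 guarantees convergence
    since ||D D^T|| <= 4 for the 1-D difference operator D."""
    z = np.zeros(len(y) - 1)                     # dual variable, one per jump
    for _ in range(n_iter):
        x = y - (np.concatenate([[0.0], z]) - np.concatenate([z, [0.0]]))  # x = y - D^T z
        z = np.clip(z + tau * np.diff(x), -lam, lam)                       # project on [-lam, lam]
    return y - (np.concatenate([[0.0], z]) - np.concatenate([z, [0.0]]))

rng = np.random.default_rng(5)
truth = np.repeat([0.5, 0.9, 0.6], 100)          # piecewise-constant exponents (toy)
est = tv_denoise_1d(truth + 0.05 * rng.standard_normal(300), lam=1.0)
print(np.round(est[::100], 2))                   # one value per recovered segment
```

The change points of the denoised signal play the role of the segmentation boundaries in the joint detection/estimation formulation.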
Nivot, Christophe. "Analyse et étude des processus markoviens décisionnels". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0057/document.
Full text of the source
We investigate the potential of Markov decision process theory through two applications. The first part of this work is dedicated to the numerical study of an industrial launcher integration process, in cooperation with Airbus DS. It is a particular case of inventory control problems in which a launch calendar plays a key role. The model we propose implies that standard optimization techniques cannot be used, so we investigate two simulation-based algorithms. They return non-trivial optimal policies that can be applied in actual practice. The second part of this work deals with partially observable optimal stopping problems. We propose an approximation method using optimal quantization for problems with a general state space, and study the convergence of the approximate optimal value towards the real optimal value, as well as the convergence rate. We apply our method to a numerical example.
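To make the inventory-control framing concrete, here is a minimal value-iteration sketch on a toy Markov decision process (stock levels as states, order quantities as actions, random demand); all costs and the demand distribution are assumed, and this textbook dynamic-programming baseline is precisely what simulation-based algorithms generalize when the real model does not fit it.

```python
# Toy inventory-control MDP solved by value iteration (illustrative, not the Airbus DS case).
import numpy as np

max_stock, max_order = 5, 3
holding_cost, shortage_cost, order_cost = 1.0, 10.0, 2.0
demand_p = np.array([0.3, 0.4, 0.2, 0.1])        # P(demand = 0..3), assumed
gamma = 0.95                                      # discount factor

states = np.arange(max_stock + 1)
V = np.zeros(states.size)
for _ in range(500):                              # value-iteration sweeps
    Q = np.full((states.size, max_order + 1), np.inf)
    for s in states:
        for a in range(max_order + 1):
            if s + a > max_stock:                 # storage capacity constraint
                continue
            q = order_cost * a
            for d, p in enumerate(demand_p):      # expectation over demand
                s_next = max(s + a - d, 0)
                cost = holding_cost * s_next + shortage_cost * max(d - s - a, 0)
                q += p * (cost + gamma * V[s_next])
            Q[s, a] = q
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)                          # optimal order quantity per stock level
print(dict(zip(states.tolist(), policy.tolist())))
```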
Achab, Massil. "Apprentissage statistique pour séquences d’évènements à l’aide de processus ponctuels". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX068/document.
Full text of the source
The guiding principle of this thesis is to show how the arsenal of recent optimization methods can help solve challenging new estimation problems on event models. While the classical framework of supervised learning treats the observations as a collection of independent couples of features and labels, event models focus on arrival timestamps to extract information from the data source. These timestamped events are chronologically ordered and cannot be regarded as independent. This simple statement motivates the use of a particular mathematical object, the point process, to learn patterns from events. Two examples of point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function defines the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model relates the duration before an event, called a failure, to some covariates; this model can be reformulated in the framework of point processes. The second is the Hawkes process, which models how past events increase the probability of future events; its multivariate version encodes a notion of causality between the different nodes. The thesis is divided into three parts. The first focuses on a new optimization algorithm we developed to estimate the parameter vector of the Cox regression in the large-scale setting. Our algorithm is based on stochastic variance reduced gradient descent (SVRG) and uses Markov chain Monte Carlo to estimate one costly term in the descent direction. We prove its convergence rates and show its numerical performance on both simulated and real-world datasets. The second part shows how Hawkes causality can be retrieved in a nonparametric fashion from the integrated cumulants of the multivariate point process. We design two methods to estimate the integrals of the Hawkes kernels without any assumption on the shape of the kernel functions; our methods are faster and more robust to the shape of the kernels than state-of-the-art methods. We prove the statistical consistency of the first method and turn the second into a convex optimization problem. The last part provides new insights from order-book data using the first nonparametric method developed in the second part. We use data from the EUREX exchange, design new order-book models (based on previous work of Bacry et al.), and run the estimation method on these point processes. The results are very insightful and consistent with an econometric analysis; this work is a proof of concept that our estimation method can be used on complex data like high-frequency financial data.
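As a sketch of the variance-reduction idea underlying the first part, here is plain SVRG applied to a simple least-squares objective on synthetic data (not to the Cox partial likelihood, and without the MCMC estimator for the costly term): each stochastic gradient is corrected by a periodically refreshed full gradient.

```python
# Plain SVRG on synthetic least squares (illustrative stand-in for the Cox setting).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th squared-error term 0.5*(x_i.w - y_i)^2."""
    return (X[i] @ w - y[i]) * X[i]

w, step = np.zeros(d), 0.01
for epoch in range(30):
    w_ref = w.copy()
    full_grad = (X.T @ (X @ w_ref - y)) / n      # full gradient at the anchor point
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient step
        w -= step * (grad_i(w, i) - grad_i(w_ref, i) + full_grad)
print("estimation error:", np.linalg.norm(w - w_true))
```

The correction term has zero mean, so the update stays unbiased while its variance vanishes as w approaches the optimum, which is what yields the fast convergence rates.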
Waycaster, Garrett. "Stratégies d'optimisation par modélisation explicite des différents acteurs influençant le processus de conception". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30122/document.
Full text of the source
The commercial success or failure of engineered systems has always been significantly affected by their interactions with competing designs, end users, and regulatory bodies. Designs that deliver too little performance, cost too much, or are deemed unsafe or harmful will inevitably be overtaken by competing designs that better meet the needs of customers and society as a whole. Recent efforts to address these issues have led to techniques such as design for customers or design for market systems. In this dissertation, we utilize a game theory framework to incorporate the effect of these interactions directly into a design optimization problem that seeks to maximize designer profitability. This approach allows designers to consider the effects of uncertainty both from traditional design variability and from uncertain future market conditions, as well as the effect of customers and competitors acting as dynamic decision makers. Additionally, we develop techniques for modeling and understanding the nature of these complex interactions from observed data by utilizing causal models. Finally, we examine the complex effects of safety on design through the history of federal regulation in the transportation industry. These efforts lead to several key findings. First, by considering the effect of interactions, designers may choose vastly different design concepts than would otherwise be considered; this is demonstrated through several case studies on the design of commercial transport aircraft. Second, we develop a novel method for selecting causal models which allows designers to gauge the level of confidence in their understanding of stakeholder interactions, including uncertainty in the impact of potential design changes. Finally, our review of regulations and other safety improvements demonstrates that the demand for safety improvement is not simply related to the ratio of dollars spent to lives saved; instead, the level of personal responsibility and the nature and scale of potential safety concerns have a causal influence on the demand for increased safety in the form of new regulations.
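A minimal sketch of the kind of designer-versus-competitor interaction described above, on a hypothetical Cournot-style profit model rather than the dissertation's aircraft case: each player repeatedly re-optimizes its design variable given the competitor's last choice, and the iteration settles at a Nash equilibrium.

```python
# Best-response iteration on a hypothetical two-designer market game (illustrative only).
a, b, c = 100.0, 1.0, 20.0             # demand intercept/slope and unit cost (assumed)

def best_response(q_other):
    # argmax over q of profit (a - b*(q + q_other) - c) * q, in closed form
    return max((a - c - b * q_other) / (2 * b), 0.0)

q1, q2 = 10.0, 10.0                     # initial design choices (assumed)
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)   # simultaneous re-optimization
print(q1, q2)                           # both approach (a - c) / (3 * b) = 26.67
```

The point of embedding such a game inside design optimization is that the profit-maximizing design is evaluated against rational competitor reactions rather than against a frozen market.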
Rageul, Nicolas. "Vers une optimisation du processus d'analyse en ligne de données 3D : cas des fouilles archéologiques". Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24791/24791.pdf.
Full text of the source
Karpouzas, Yannis. "Modélisation des processus biologiques : identification, optimisation et contrôle optimal des systèmes linéaires ou non linéaires". Paris 6, 1987. http://www.theses.fr/1987PA066449.
Full text of the source
Herro, Ziad. "Optimisation du processus de croissance PVT pour l'obtention des cristaux de SiC de haute qualité". Montpellier 2, 2004. http://www.theses.fr/2004MON20052.
Full text of the source
Karpouzas, Yannis. "Modélisation des processus biologiques : identification, optimisation et contrôle optimal des systèmes linéaires ou non linéaires". Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37606342k.
Full text of the source
Prawatya, Yopa Eka. "Multivariate optimisation and statistical process control of polymer triboelectric charging". Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2262/document.
Full text of the source
The main objective of this thesis was to study the triboelectric behavior of dry sliding contacts between polymeric materials (ABS, PE, PP, PS and two types of PVC), including the possibility of controlling and optimizing the results in terms of either the generated surface charge or wear. A linear tribometer with tribocharging capabilities was designed and built to enable the study of sliding contact between solids and to allow the adjustment of the main tribocharging control variables: normal force, sliding speed, time and stroke. This device also provided measurement data to characterize the friction condition: the variations of the normal and tangential forces, as well as the relative displacement between the specimens. Furthermore, the electric charge generated and the temperature rise due to rubbing on the surface of the polymer were measured, so as to investigate the relationships between the tribological properties. The experiments showed that the level and distribution of the charge generated by dry friction depend on the applied normal force, friction time (cycles), sliding speed, the mechanical properties of the materials, and the surface roughness or texture. Corona discharge may be used to deposit an initial charge on the surfaces before sliding. The tribocharging process was modeled using the design-of-experiments methodology; the resulting models can be used to predict and optimize tribocharging. Control charts were used to monitor the process and detect special causes of variation in the charge generated by the triboelectric effect.
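As an illustration of the monitoring step, here is a minimal Shewhart X-bar chart sketch on synthetic charge measurements (units, values, and the pooled spread estimate are all assumed, and simplified relative to standard R-bar practice): subgroup means falling outside mean ± 3σ/√n flag special causes of variation.

```python
# Minimal X-bar control chart on synthetic tribocharging measurements.
import numpy as np

rng = np.random.default_rng(1)
charge = rng.normal(50.0, 2.0, size=(25, 5))     # 25 subgroups of 5 runs (nC, assumed)
charge[20] += 6.0                                # inject a shifted subgroup (special cause)

means = charge.mean(axis=1)
center = means.mean()
sigma = charge.std(ddof=1)                       # simple pooled estimate of run-to-run spread
half_width = 3 * sigma / np.sqrt(charge.shape[1])
lcl, ucl = center - half_width, center + half_width

for i, m in enumerate(means):
    if not lcl <= m <= ucl:
        print(f"subgroup {i}: mean {m:.1f} outside [{lcl:.1f}, {ucl:.1f}]")
```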
Balesdent, Mathieu. "Optimisation multidisciplinaire de lanceurs". PhD thesis, Ecole centrale de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00659362.
Full text of the source
Mohammadi, Hossein. "Kriging-based black-box global optimization : analysis and new algorithms". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEM005/document.
Full text of the source
Efficient Global Optimization (EGO) is regarded as the state-of-the-art algorithm for the global optimization of costly black-box functions. Nevertheless, the method has some difficulties, such as the ill-conditioning of the Gaussian process (GP) covariance matrix and slow convergence to the global optimum. The choice of the GP parameters is critical, as it controls the functional family of surrogates used by EGO, and the effect of different parameters on the performance of EGO needs further investigation. Finally, it is not clear that the way the GP is learned from data points in EGO is the most appropriate in the context of optimization. This work deals with the analysis and treatment of these different issues. Firstly, this dissertation contributes to a better theoretical and practical understanding of the impact of regularization strategies on GPs, presents a new regularization approach based on distribution-wise GPs, and gives practical guidelines for choosing a regularization strategy in GP regression. Secondly, a new optimization algorithm is introduced that combines EGO with CMA-ES, a global yet convergent search. The new algorithm, called EGO-CMA, uses EGO for early exploration and then CMA-ES for final convergence, and improves the performance of both EGO and CMA-ES. Thirdly, the effect of the GP parameters on EGO performance is carefully analyzed, allowing a deeper understanding of their influence on the EGO iterates. Finally, a new self-adaptive EGO is presented, introducing a novel approach for learning the parameters directly from their contribution to the optimization.
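For reference, a minimal sketch of the expected improvement (EI) acquisition function at the heart of EGO, written for minimization given GP posterior predictions; the numbers below are arbitrary.

```python
# Expected improvement for minimization, given GP posterior mean and std. dev.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero posterior variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# EGO evaluates the objective next where EI is largest, trading off the
# predicted value (exploitation) against posterior uncertainty (exploration).
mu = np.array([0.2, 0.5, -0.1])        # illustrative GP predictions
sigma = np.array([0.05, 0.4, 0.01])
print(expected_improvement(mu, sigma, f_best=0.0))
```

The formula also makes the ill-conditioning issue tangible: a near-singular covariance matrix corrupts mu and sigma, and hence every EI evaluation, which is why the regularization strategies studied in this thesis matter.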