
Dissertations / Theses on the topic 'Optimization criterion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Optimization criterion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Song, Qiang. "Non-Euler-Lagrangian Pareto-optimality conditions for dynamic multiple-criterion decision problems." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/24920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rew, Dong-Won. "New feedback design methodologies for large space structures: a multi-criterion optimization approach." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/49875.

Full text
Abstract:
A few problems of designing structural control systems are addressed, considering optimization of three design objectives: state error energy, control energy and stability robustness. Tradeoff relationships among these selected design objectives are investigated by solving multiple objective optimization problems. Various measures of robustness (tolerance of model errors and disturbances) are also reviewed carefully in the present study, and throughout the dissertation robust control design methodologies are emphasized. Presented in the first part of the dissertation are three new feedback design algorithms: i) a generalized linear-quadratic regulator (LQR) formulation, ii) a generalized LQR formulation based on the Lyapunov stability theorem, and iii) an eigenstructure assignment method using Sylvester's equation. The performance of these algorithms for multi-criterion optimization is compared by generating three-dimensional surfaces which display the tradeoff among the three design objectives. In the second part, a noniterative robust eigenstructure assignment algorithm via a projection method is introduced. This algorithm produces a fairly well-conditioned eigenvector matrix and provides an excellent starting solution for optimizations of various design criteria. We also present a specialized version of the projection method for second order differential equations, which offers useful insights into design strategies with regard to conditioning (robustness) of the eigenvectors. Finally, to illustrate the ideas presented in this study, we adopt numerical examples in two sets: i) 6th order mass-spring systems and ii) various reduced order models of a flexible system. The numerical results confirm that multi-criterion optimization using a minimum correction homotopy technique is a useful tool with significant potential for enhanced computer-aided design of control systems. The proposed robust eigenstructure assignment algorithm is successfully implemented and tested for a 24th order reduced model, which establishes the approach to be applicable to systems of at least moderate dimensionality. We show analytically and computationally that constraining closed-loop eigenvectors to equal open-loop eigenvectors generally does not lead to either optimal conditioning (robustness) of the closed-loop eigenvectors or minimum gain norm.
Ph. D.
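For context, the state-error/control-energy tradeoff at the heart of such multi-criterion LQR formulations can be traced by sweeping the relative weighting of the two objectives. The sketch below does this for a single mass-spring-damper; the system matrices and weights are illustrative assumptions, not the models from the dissertation.

import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative mass-spring-damper, state x = [position, velocity]
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # weight on state error energy

for rho in (0.1, 1.0, 10.0):               # weight on control energy, swept to trace the tradeoff
    R = np.array([[rho]])
    P = solve_continuous_are(A, B, Q, R)   # Riccati solution of the LQR problem
    K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain
    print(rho, np.linalg.eigvals(A - B @ K), np.linalg.norm(K))

Larger rho trades control effort against slower closed-loop dynamics; plotting the resulting objective pairs gives the kind of tradeoff surface described in the abstract.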
APA, Harvard, Vancouver, ISO, and other styles
3

Abreu, Jean Faber Ferreira de. "Quantum games from biophysical Hamiltonians and a sub-neuronal optimization criterion of the information." Laboratório Nacional de Computação Científica, 2006. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=108.

Full text
Abstract:
The Theory of Games is a mathematical formalism used to analyze conflicts between two or more parties. In those conflicts, each party has a set of actions (strategies) that aids it in the optimization of its objectives. The objectives of the players are the rewards (payoffs) given according to their chosen strategy. By quantizing a game, advantages in operational efficiency and in the stability of the game solutions are demonstrated. In a quantum game, the strategies are operators that act on an isolated system. A natural issue is to consider a game in an open system. In this case the strategies are replaced by Kraus operators, which represent a natural measurement of the environment. We want to find the necessary physical conditions to model a quantum open system as a game. To analyze this issue we applied the formalism of Quantum Operations to the Fröhlich system and we described it as a model of a Quantum Game. The interpretation is a conflict among different configurations of the environment which, by inserting noise in the main system, exhibits regimes of minimum loss of information. On the other hand, the model of Fröhlich has been used to describe the biophysical dynamics of the neuronal microtubules. By describing the model of Fröhlich in the Quantum Game formalism, we have shown that regimes of stability may exist even under physiological conditions. From the evolutionary point of view, the Theory of Games can be the key to describing natural optimization at sub-neuronal levels.
APA, Harvard, Vancouver, ISO, and other styles
4

Atutey, Olivia Abena. "Linear Mixed Model Selection via Minimum Approximated Information Criterion." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1594910831256966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Pei. "Simultaneously solving process selection, machining parameter optimization and tolerance design problems: A bi-criterion approach." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26544.

Full text
Abstract:
The selection of the right process, the use of optimal machining parameters and the specification of the best tolerance parameters have been recognized by industry as key issues to ensure product quality and reduce production cost. The three issues have thus attracted a great deal of attention over the last several decades. However, they are often addressed separately in existing publications. In reality, the three issues are closely interrelated. Analyzing the three issues in isolation will inevitably lead to inconsistent, infeasible, or conflicting decisions. To avoid these drawbacks, an integrated approach is proposed to jointly solve process selection, machining parameter optimization, and tolerance design problems. The integrated problem is formulated as a bi-criterion model to handle both tangible and intangible costs. The model is solved using a modified Chebyshev goal programming method to achieve a preferred compromise between the two conflicting criteria. The application of the proposed bi-criterion approach is first demonstrated on the single-component, single-part-feature case. The integrated approach is then extended to the multiple-component, multiple-part-feature case (the assembly case). Examples are provided to illustrate the application of the two models and the solution procedure. The results have shown that decisions on process selection, machining parameter selection and tolerance design can be made simultaneously using the models.
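For reference, Chebyshev goal programming minimizes the worst weighted deviation from a set of goals, which is what makes it suitable for reconciling a tangible and an intangible cost criterion. A minimal sketch follows; the two linear criteria, goals, weights and the production constraint are illustrative assumptions, not the thesis's model.

from scipy.optimize import linprog

w1, w2, g1, g2 = 0.5, 0.5, 6.0, 8.0        # criterion weights and goal levels (assumed)
# Decision vector [x1, x2, d]: d bounds both weighted goal deviations (Chebyshev norm)
c = [0.0, 0.0, 1.0]                        # minimize the worst deviation d
A_ub = [[w1 * 3.0, w1 * 2.0, -1.0],        # w1 * (3*x1 + 2*x2 - g1) <= d   (tangible cost)
        [w2 * 1.0, w2 * 4.0, -1.0],        # w2 * (1*x1 + 4*x2 - g2) <= d   (intangible cost)
        [-1.0, -1.0, 0.0]]                 # x1 + x2 >= 4  (assumed production requirement)
b_ub = [w1 * g1, w2 * g2, -4.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                      # compromise solution and its worst weighted deviation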
APA, Harvard, Vancouver, ISO, and other styles
6

Gorsky, Daniel A. "Niyama Based Taper Optimizations in Steel Alloy Castings." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316191746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fiete, Robert Dean. "The Hotelling Trace Criterion Used for System Optimization and Feature Enhancement in Nuclear Medicine (Pattern Recognition)." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184160.

Full text
Abstract:
The Hotelling trace criterion (HTC) is a measure of class separability used in pattern recognition to find a set of linear features that optimally separate two classes of objects. In this dissertation we use the HTC not as a figure of merit for features, but as a figure of merit for characterizing imaging systems and designing filters for feature enhancement in nuclear medicine. If the HTC is to be used to optimize systems, then it must correlate with human observer performance. In our first study, a set of images, created by overlapping ellipses, was used to simulate images of livers. Two classes were created, livers with and without tumors, with noise and blur added to each image to simulate nine different imaging systems. Using the ROC parameter dₐ as our measure, we found that the HTC has a correlation of 0.988 with the ability of humans to separate these two classes of objects. A second study was performed to demonstrate the use of the HTC for system optimization in a realistic task. For this study we used a mathematical model of normal and diseased livers and of the imaging system to generate a realistic set of liver images from nuclear medicine. A method of adaptive, nonlinear filtering which enhances the features that separate two sets of images has also been developed. The method uses the HTC to find the optimal linear feature operator for the Fourier moduli of the images, and uses this operator as a filter so that the features that separate the two classes of objects are enhanced. We demonstrate the use of this filtering method to enhance texture features in simulated liver images from nuclear medicine, after using a training set of images to obtain the filter. We also demonstrate how this method of filtering can be used to reconstruct an object from a single photon-starved image of it, when the object contains a repetitive feature. When power spectrums for real liver scans from nuclear medicine are calculated, we find that the three classifications that a physician uses, normal, patchy, and focal, can be described by the fractal dimension of the texture in the liver. This fractal dimension can be calculated even for images that suffer from much noise and blur. Given a simulated image of a liver that has been blurred and imaged with only 5000 photons, a texture with the same fractal dimension as the liver can be reconstructed.
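As background, one standard form of the Hotelling trace criterion named in this abstract is the trace of the inverse within-class scatter matrix times the between-class scatter matrix, computed from feature vectors of the two classes. The sketch below evaluates it on synthetic two-class data; the feature vectors are illustrative assumptions, not the liver images of the dissertation.

import numpy as np

def hotelling_trace(class_a, class_b):
    # HTC = tr(S2^{-1} S1): S1 is the between-class scatter, S2 the average within-class scatter
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    mu = 0.5 * (mu_a + mu_b)
    S1 = 0.5 * (np.outer(mu_a - mu, mu_a - mu) + np.outer(mu_b - mu, mu_b - mu))
    S2 = 0.5 * (np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False))
    return np.trace(np.linalg.solve(S2, S1))

rng = np.random.default_rng(0)
no_tumor = rng.normal(0.0, 1.0, size=(200, 5))   # class 1 feature vectors (assumed)
tumor = rng.normal(0.3, 1.0, size=(200, 5))      # class 2 feature vectors (assumed)
print(hotelling_trace(no_tumor, tumor))          # larger values mean better class separability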
APA, Harvard, Vancouver, ISO, and other styles
8

Strömberg, Eric. "Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-308452.

Full text
Abstract:
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has been proven to serve as a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM does, however, lack an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions of the FIM are available: a full FIM and a block-diagonal FIM, which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if the criterion is based on single point values of the parameters, or global (robust), where the criterion is formed for a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model based adaptive optimal design (MBAOD) has, however, been shown to be less sensitive to misspecification in the design stage. The aim of this thesis is to further the understanding and practicality of performing standard OD and MBAOD. This is to be achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimization by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
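To make the criterion concrete, the way a FIM-based optimality criterion discriminates between candidate designs can already be seen in a fixed-effects setting: linearize the model around the prior parameter values, build the FIM from the sensitivities, and compare ln det FIM (the local D-criterion). The sketch below does this for an assumed one-compartment model and two assumed sampling schedules; the NLMEM case and the full versus block-diagonal FIM of the thesis are not reproduced.

import numpy as np

def model(t, theta):
    cl, v = theta                                   # clearance and volume (assumed model)
    return (100.0 / v) * np.exp(-(cl / v) * t)      # concentration after a 100-unit bolus dose

def fim(times, theta, sigma=0.1, h=1e-5):
    # Fisher information for an additive Gaussian error model, via finite-difference sensitivities
    J = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = np.array(theta, float), np.array(theta, float)
        up[j] += h
        dn[j] -= h
        J[:, j] = (model(times, up) - model(times, dn)) / (2 * h)
    return J.T @ J / sigma**2

theta0 = [1.0, 10.0]
for design in (np.array([0.5, 1.0, 2.0, 4.0]), np.array([6.0, 8.0, 10.0, 12.0])):
    print(design, np.log(np.linalg.det(fim(design, theta0))))   # D-criterion: ln det FIM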
APA, Harvard, Vancouver, ISO, and other styles
9

Wong, Steven. "Alternative Electricity Market Systems for Energy and Reserves using Stochastic Optimization." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/932.

Full text
Abstract:
This thesis presents a model that simulates and solves power system dispatch problems utilizing stochastic linear programming. The model features the ability to handle single period, multiple bus, linear DC approximated systems. It determines capacity, energy, and reserve quantities while accounting for N-1 contingency scenarios (single loss of either generator or line) on the network. Market systems applying to this model are also proposed, covering multiple real-time, day-ahead, and hybrid versions of consumer costing, transmission operator payment, and generator remuneration schemes. The model and its market schemes are applied to two test systems to verify its viability: a small 6-bus system and a larger 66-bus system representing the Ontario electricity network.
APA, Harvard, Vancouver, ISO, and other styles
10

Xu, Rongxin. "Optimal design of a composite wing structure for a flying-wing aircraft subject to multi-constraint." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7290.

Full text
Abstract:
This thesis presents a research project and results of design and optimization of a composite wing structure for a large aircraft in flying wing configuration. The design process started from conceptual design and preliminary design, which includes initial sizing and stressing followed by numerical modelling and analysis of the wing structure. The research was then focused on the minimum weight optimization of the composite wing structure subject to multiple design constraints. The modelling, analysis and optimization process has been performed by using the NASTRAN code. The methodology and technique not only make the modelling highly accurate, but also keep the whole process within one commercial package for practical application. The example aircraft, called FW-11, is a 250-seat commercial airliner of flying wing configuration designed through our MSc students' Group Design Project (GDP) at Cranfield University. Starting from the conceptual design in the GDP, a high-aspect-ratio, large-sweepback-angle flying wing configuration was adopted. During the GDP, the author was responsible for the structural layout design and material selection. Composite material has been chosen as the preferred material for both the inner and outer wing components. Based on the derivation of structural design data in the conceptual phase, the author continued with the preliminary design of the outer wing airframe and then focused on the optimization of the composite wing structure. Cont/d.
APA, Harvard, Vancouver, ISO, and other styles
11

Єнотова, Марія Максимівна, and Mariia Maksymivna Yenotova. "Method of optimization of the process of managing risk factors of aviation events based on the criterion of minimum total costs." Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/45652.

Full text
Abstract:
The work is published in accordance with the rector's order of 21.01.2020 No. 008/od "On checking qualification works for academic plagiarism in the 2019-2020 academic year". Project supervisor: Cand. Sc. (Eng.), Assoc. Prof. Алексєєв Олег Миколайович.
The master's thesis "Method of optimization of the process of managing risk factors of aviation events based on the criterion of minimum total costs" contains 26 illustrative figures and graphs, 49 formulas and 7 tables. The object of research in the work is the airline flight safety management system. The subject of the research is a method of increasing the efficiency of the airline's flight safety management system. The purpose of the investigation is to develop a method for optimizing the process of managing the risk factors of aviation events based on the criterion of minimum total costs, which makes it possible to increase the efficiency of the flight safety management system in terms of making decisions on the level of improving flight safety. Methods of investigation: in the course of the research, the methods of mathematical analysis, the theory of probability and mathematical statistics, the theory of mathematical modeling, as well as programming algorithms for computer programs were used. The diploma work assesses the risks of aviation events and the costs of measures that reduce those risks, taking into account the likelihood of preventing aviation events; the value of the probability of preventing aviation events is obtained, and the total costs in the flight safety management system, aimed at eliminating possible damage from aviation events and ensuring flight safety, are assessed with this probability taken into account.
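For reference, a minimum-total-cost criterion of this kind typically balances the cost of risk-reducing measures against the expected residual damage from events that are not prevented. The sketch below minimizes such a total cost over the spending level; the damage value and the prevention-probability curve are illustrative assumptions, not figures from the thesis.

import numpy as np
from scipy.optimize import minimize_scalar

DAMAGE = 5.0e6                                # assumed potential damage of an aviation event

def p_prevent(x):
    return 1.0 - np.exp(-x / 1.0e6)           # assumed diminishing-returns prevention probability

def total_cost(x):
    return x + DAMAGE * (1.0 - p_prevent(x))  # cost of measures + expected residual damage

res = minimize_scalar(total_cost, bounds=(0.0, 1.0e7), method="bounded")
print(res.x, res.fun)                         # spending level that minimizes the total cost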
APA, Harvard, Vancouver, ISO, and other styles
12

Shin, Sung-Hwan. "Objective-driven discriminative training and adaptation based on an MCE criterion for speech recognition and detection." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50255.

Full text
Abstract:
Acoustic modeling in state-of-the-art speech recognition systems is commonly based on discriminative criteria. Different from the paradigm of the conventional distribution estimation such as maximum a posteriori (MAP) and maximum likelihood (ML), the most popular discriminative criteria such as MCE and MPE aim at direct minimization of the empirical error rate. As recent ASR applications become diverse, it has been increasingly recognized that realistic applications often require a model that can be optimized for a task-specific goal or a particular scenario beyond the general purposes of the current discriminative criteria. These specific requirements cannot be directly handled by the current discriminative criteria since the objective of the criteria is to minimize the overall empirical error rate. In this thesis, we propose novel objective-driven discriminative training and adaptation frameworks, which are generalized from the minimum classification error (MCE) criterion, for various tasks and scenarios of speech recognition and detection. The proposed frameworks are constructed to formulate new discriminative criteria which satisfy various requirements of the recent ASR applications. In this thesis, each objective required by an application or a developer is directly embedded into the learning criterion. Then, the objective-driven discriminative criterion is used to optimize an acoustic model in order to achieve the required objective. Three task-specific requirements that the recent ASR applications often require in practice are mainly taken into account in developing the objective-driven discriminative criteria. First, an issue of individual error minimization of speech recognition is addressed and we propose a direct minimization algorithm for each error type of speech recognition. Second, a rapid adaptation scenario is embedded into formulating discriminative linear transforms under the MCE criterion. A regularized MCE criterion is proposed to efficiently improve the generalization capability of the MCE estimate in a rapid adaptation scenario. Finally, the particular operating scenario that requires a system model optimized at a given specific operating point is discussed over the conventional receiver operating characteristic (ROC) optimization. A constrained discriminative training algorithm which can directly optimize a system model for any particular operating need is proposed. For each of the developed algorithms, we provide an analytical solution and an appropriate optimization procedure.
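For reference, the MCE criterion referred to here replaces the non-differentiable 0/1 classification error with a smoothed misclassification measure that can be optimized directly. The sketch below shows the common simplified form using the single best competitor; the generalizations developed in the thesis (individual error types, regularized adaptation, operating-point constraints) are not reproduced.

import numpy as np

def mce_loss(scores, label, gamma=2.0):
    # Misclassification measure: best competing discriminant score minus the true-class score,
    # passed through a sigmoid so the loss is a smooth, differentiable surrogate for 0/1 error.
    true_score = scores[label]
    best_competitor = np.delete(scores, label).max()
    d = best_competitor - true_score
    return 1.0 / (1.0 + np.exp(-gamma * d))

# Three-class discriminant scores for one observation whose true class is 0
print(mce_loss(np.array([1.2, 0.9, 0.4]), label=0))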
APA, Harvard, Vancouver, ISO, and other styles
13

Mohan, Rathish. "Algorithmic Optimization of Sensor Placement on Civil Structures for Fault Detection and Isolation." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353156107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Heczko, Lukáš. "Regulace nestabilních soustav DP." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217843.

Full text
Abstract:
This master's thesis deals with the control of unstable processes with two unstable complex poles using algebraic control, with emphasis on the value of the controller output. The thesis consists of a theoretical introduction, including initial conditions and an analysis of basic approaches to control design with one- and two-degree-of-freedom regulators by solving polynomial equations. Constrained control is studied in more detail. The "lowest fuel consumption" optimization problem is used for the controller design, for both regulator structures, with the additional condition of zero variance control. The thesis includes several examples of control design by the described method. A small library of MATLAB functions was also developed to make the design easier. Conclusions are drawn at the end, and the achieved goals are reviewed.
APA, Harvard, Vancouver, ISO, and other styles
15

Antunes, Miguel Ângelo Correia. "Determinação de parâmetros ótimos de materiais de proteção em capacetes para minimizar critérios de lesão." Master's thesis, Instituto Politécnico de Setúbal. Escola Superior de Tecnologia de Setúbal, 2019. http://hdl.handle.net/10400.26/27823.

Full text
Abstract:
Master's dissertation in Production Engineering
A limiting performance analysis of a protective helmet for motorcyclists was performed, with the aim of establishing the optimum control force exerted by the material of the helmet's inner liner on the user's head in the event of an impact against a rigid surface, with the purpose of reducing injury severity and the probability of its occurrence. In this analysis, two optimization problems are addressed: the first, in which the total energy imparted to the brain upon impact must be minimized, and the second, in which the value of the Head Injury Criterion must be minimized, both problems subject to constraints associated with other injury and performance criteria. The model used to simulate the behaviour of the head is the Translational Head Injury Model, which is a lumped parameter model. The impact is performed in the anterior-posterior direction. The optimum control force exerted on the head was established for specific impact conditions. The solutions to the first optimization problem did not meet the defined constraints. The second optimization problem was solved successfully, with the best results for an inner liner thickness of 30 mm.
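For reference, the Head Injury Criterion minimized in the second problem is computed from the resultant head acceleration a(t) (in g) as the maximum over time windows [t1, t2] of (t2 - t1) times the mean acceleration on that window raised to the power 2.5. A minimal sketch follows; the acceleration pulse and the 15 ms window length are illustrative assumptions, not results from the dissertation.

import numpy as np

def hic(t, a, max_window=0.015):
    # Head Injury Criterion from an acceleration trace (a in g, t in seconds)
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))  # integral of a
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            mean_acc = (v[j] - v[i]) / dt                 # average acceleration on [t1, t2]
            best = max(best, dt * mean_acc ** 2.5)
    return best

t = np.linspace(0.0, 0.03, 301)                           # 30 ms event, assumed
a = 80.0 * np.exp(-((t - 0.010) / 0.004) ** 2)            # assumed Gaussian-shaped pulse in g
print(hic(t, a))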
APA, Harvard, Vancouver, ISO, and other styles
16

El Khoury, Hiba. "Introduction of New Products in the Supply Chain : Optimization and Management of Risks." Phd thesis, HEC, 2012. http://pastel.archives-ouvertes.fr/pastel-00708801.

Full text
Abstract:
Shorter product life cycles and rapid product obsolescence provide increasing incentives to introduce new products to markets more quickly. As a consequence of rapidly changing market conditions, firms focus on improving their new product development processes to reap the benefits of early market entry. Researchers have analyzed market entry, but have seldom provided quantitative approaches for the product rollover problem. This research builds upon the literature by using established optimization methods to examine how firms can minimize their net loss during the rollover process. Specifically, our work explicitly optimizes the timing of removal of old products and introduction of new products, the optimal strategy, and the magnitude of net losses when the market entry approval date of a new product is unknown. In the first paper, we use the conditional value at risk to optimize the net loss and investigate the effect of risk perception of the manager on the rollover process. We compare it to the minimization of the classical expected net loss. We derive conditions for optimality and unique closed-form solutions for single and dual rollover cases. In the second paper, we investigate the rollover problem, but for a time-dependent demand rate for the second product trying to approximate the Bass Model. Finally, in the third paper, we apply the data-driven optimization approach to the product rollover problem where the probability distribution of the approval date is unknown. We rather have historical observations of approval dates. We develop the optimal times of rollover and show the superiority of the data-driven method over the conditional value at risk in cases where it is difficult to guess the real probability distribution.
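For reference, the conditional value at risk used in the first paper is the expected loss in the worst (1 - alpha) fraction of outcomes, so it reacts to tail risk where the expected net loss does not. The sketch below contrasts the two criteria on simulated loss samples for two hypothetical rollover policies; the distributions are illustrative assumptions, not the paper's model.

import numpy as np

def cvar(losses, alpha=0.95):
    # Empirical CVaR: mean loss beyond the alpha-quantile (the value at risk)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(1)
loss_early = rng.normal(10.0, 8.0, size=10_000)   # assumed net-loss samples, early rollover
loss_late = rng.normal(12.0, 3.0, size=10_000)    # assumed net-loss samples, late rollover
for name, loss in (("early", loss_early), ("late", loss_late)):
    print(name, loss.mean(), cvar(loss))          # one policy wins on mean loss, the other on tail risk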
APA, Harvard, Vancouver, ISO, and other styles
17

Kysilko, Vít. "Optimalizace HIC kritéria při nárazu impaktorem hlavy na kapotu auta." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230463.

Full text
Abstract:
Because of the still considerable number of pedestrians killed in traffic accidents, car manufacturers try to mitigate the consequences of an accident through suitable vehicle design. When a head impactor strikes the car bonnet, the impactor decelerates, and the HIC criterion is applied to this deceleration. The HIC criterion evaluates the likelihood of head injury in a collision. The aim of this master's thesis is to select, from the point of view of the HIC criterion, the most suitable deceleration time history of a child head impactor striking the bonnet of the Škoda Superb II, and to propose design modifications of the bonnet and of the surrounding parts that come into contact, so that the actual time history approaches the theoretical, physiologically admissible variant. The explicit variant of the finite element method (FEM) was used for the computational modelling of the problem. The first part of the thesis analyses data from simulations of a child head impactor striking the car bonnet, in particular the portions of energy absorbed by the bonnet during the collision with the impactor; these data were provided by Škoda Auto a.s. The next part deals with the design of deceleration curves of sinusoidal, square and triangular shape, and a two-peak triangular deceleration curve that can be modified through its parameters is also proposed. By optimizing an approximate model of the bonnet geometry under head impactor impact, close agreement with the previously optimized two-peak triangular deceleration curve is achieved. In the final part, the original bonnet geometry model of the Škoda Superb II is used and the bonnet geometry is further optimized against the optimal deceleration time history.
APA, Harvard, Vancouver, ISO, and other styles
18

El-Khoury, Hiba. "Introduction of New Products in the Supply Chain : Optimization and Management of Risks." Thesis, Jouy-en Josas, HEC, 2012. http://www.theses.fr/2012EHEC0001/document.

Full text
Abstract:
Shorter product life cycles and rapid product obsolescence provide increasing incentives to introduce new products to markets more quickly. As a consequence of rapidly changing market conditions, firms focus on improving their new product development processes to reap the benefits of early market entry. Researchers have analyzed market entry, but have seldom provided quantitative approaches for the product rollover problem. This research builds upon the literature by using established optimization methods to examine how firms can minimize their net loss during the rollover process. Specifically, our work explicitly optimizes the timing of removal of old products and introduction of new products, the optimal strategy, and the magnitude of net losses when the market entry approval date of a new product is unknown. In the first paper, we use the conditional value at risk to optimize the net loss and investigate the effect of risk perception of the manager on the rollover process. We compare it to the minimization of the classical expected net loss. We derive conditions for optimality and unique closed-form solutions for single and dual rollover cases. In the second paper, we investigate the rollover problem, but for a time-dependent demand rate for the second product trying to approximate the Bass Model. Finally, in the third paper, we apply the data-driven optimization approach to the product rollover problem where the probability distribution of the approval date is unknown. We rather have historical observations of approval dates. We develop the optimal times of rollover and show the superiority of the data-driven method over the conditional value at risk in cases where it is difficult to guess the real probability distribution.
APA, Harvard, Vancouver, ISO, and other styles
19

Куперман, В. В., В. В. Куперман, and V. Cooperman. "Оптимізація виробничої програми промислового підприємства." Diss., Одеський національний економічний університет, 2012. http://dspace.oneu.edu.ua/jspui/handle/123456789/3857.

Full text
Abstract:
The dissertation is devoted to the improvement of existing and the development of new approaches to the optimization of production plans of industrial enterprises. It analyses the advantages and disadvantages of single-criterion optimization of an industrial enterprise's production program and substantiates the necessity and expediency of introducing multi-criteria optimization into practice, as the approach that best corresponds to the mission and objectives of a modern enterprise. A scientific hypothesis is put forward about the latent, hidden nature of the global criterion of an industrial enterprise's activity, which cannot be measured directly. It appears on the surface of economic phenomena in the form of local criteria of economic effect and efficiency, whose role is played by the usual indicators of economic activity: accounting profit, sales volume, the enterprise's market share, and the profitability of production or sales. This theoretical position made it possible to develop a methodological basis for finding Pareto-optimal solutions of the multi-criteria optimization problem for an industrial enterprise's production program with the help of a multivariate statistical method based on cluster and regression analysis. A block diagram of the production program optimization procedure based on the multivariate statistical method is offered. The developed methodical fundamentals of intra-enterprise plan optimization were tested at food industry and mechanical engineering enterprises of Ukraine.
APA, Harvard, Vancouver, ISO, and other styles
20

Benassi, Romain. "Nouvel algorithme d'optimisation bayésien utilisant une approche Monte-Carlo séquentielle." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00864700.

Full text
Abstract:
This thesis addresses the problem of the global optimization of an expensive function in a Bayesian framework. We say that a function is expensive when its evaluation requires the use of significant resources (very long numerical simulations, in particular). In this context, it is important to use optimization algorithms that need only a small number of evaluations of the function. We consider here a Bayesian approach that consists in assigning to the function to be optimized a prior in the form of a Gaussian random process, which then makes it possible to choose the evaluation points of the function by maximizing a probabilistic criterion indicating, conditionally on the previous evaluations, the most interesting zones of the search domain for the optimum. Two difficulties in this approach can be identified: the choice of the values of the parameters of the Gaussian process, and the efficient maximization of the criterion. The first difficulty is generally resolved by substituting the maximum likelihood estimator for the parameters, a method that is not very robust and to which we prefer a so-called fully Bayesian approach. The contribution of this thesis is to present a new Bayesian optimization algorithm, maximizing at each step the expected improvement criterion, that answers both difficulties jointly by means of a Sequential Monte Carlo approach. Numerical results, obtained on test cases and industrial applications, show that the performance of our algorithm compares well with that of competing algorithms.
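For reference, the expected improvement criterion maximized at each step has a closed form under the Gaussian process posterior: EI = (f_min - mu) * Phi(z) + sigma * phi(z) with z = (f_min - mu) / sigma, for minimization. A minimal sketch, with posterior values chosen purely for illustration (the fully Bayesian Sequential Monte Carlo treatment of the thesis is not reproduced):

from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    # EI for minimization, given the GP posterior mean and standard deviation at a candidate point
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# A candidate with a slightly worse predicted mean but larger uncertainty can win on EI,
# which is how the criterion balances exploitation and exploration.
print(expected_improvement(mu=0.9, sigma=0.05, f_min=1.0))
print(expected_improvement(mu=1.1, sigma=0.60, f_min=1.0))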
APA, Harvard, Vancouver, ISO, and other styles
21

Horalík, Jan. "VÍCEKRITERIÁLNÍ OPTIMALIZACE VE VÝNOSOVÉM OCEŇOVÁNÍ NEMOVITOSTÍ." Doctoral thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-234321.

Full text
Abstract:
Income valuation is one of the basic methods of establishing the price of a real estate property. It works with a discount rate, but there is no obligatory method for establishing that rate. The principle of the yield valuation method is the determination of future net profits transferred to their present value. The discount rate is affected by a large number of criteria that take into account the risks associated with the property. The risk represents the financial loss the owner of the real estate would incur if the property ceased to produce the income assumed in the valuation. At present, however, experts do not quantify the risks associated with the real estate, and the discount rate is determined mostly by professional estimate. The main aim of the Ph.D. thesis is to propose a methodology to determine the discount rate more accurately. This methodology is based on the risk-free rate and risk premiums. The risk-free rate is determined on the basis of the income on government bonds, which are considered the least risky asset. Risk premiums reflect the technical quality of the property, the economics of the real estate and its legal status through eleven criteria. With this methodology, the discount rate can be calculated simply with the support of Microsoft Excel.
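For reference, the methodology described amounts to building the discount rate up from a risk-free rate plus criterion-based risk premiums and then capitalizing the net income. A minimal sketch; the premium values, the three premium groups standing in for the eleven criteria, and the income figure are illustrative assumptions, not the thesis's calibration.

risk_free = 0.02                          # e.g. yield on government bonds, the least risky asset
premiums = {"technical quality": 0.010,   # assumed premium groups standing in for the 11 criteria
            "economy of the property": 0.015,
            "legal status": 0.005}
discount_rate = risk_free + sum(premiums.values())

net_annual_income = 120_000.0             # assumed stabilized net profit from the property
income_value = net_annual_income / discount_rate   # present value as a perpetuity
print(discount_rate, income_value)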
APA, Harvard, Vancouver, ISO, and other styles
22

Castro, Carlos. "Multiple criteria optimization in injection molding." Connect to this title online, 2004. http://hdl.handle.net/1811/322.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages: contains vi, 49 p.; also includes graphics. Includes bibliographical references (p. 46). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
23

Singh, Vijay K. "Equitable efficiency in multiple criteria optimization." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1181669435/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Dody, Thibault Alexandre. "Damping optimization using transfer function criteria." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82709.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 84-85).
Seismic performance has become a key point in the design of every type of structure. Even the simplest buildings need protection in areas of high seismic activity. However, there is no method defined by codes or general knowledge to help engineers make choices about the design of seismic protection devices. Even though several theories to optimize the use of devices have been developed, there is little practical application in structural engineering. The purpose of this paper is first to settle on the elements that can be used to protect structures. By looking at their effects on structures, it was found that dampers are the easiest to use in an optimization process. After describing the need for progress in the field of earthquake protection, this paper focuses on the impact of additional damping in a 2-D frame. Finally, the method developed by Izuru Takewaki was studied and implemented. By looking at the limitation of the interstory drift, the algorithm produced the optimal distribution of the damping. In order to estimate the performance of the method, the results were compared to empirical damping distributions. A complete program was developed in order to apply the optimization method to a wide range of custom 2-D frames.
by Thibault Alexandre Dody.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
25

Mrabet, Elyes. "Optimisation de la fiabilité des structures contrôlées." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC011/document.

Full text
Abstract:
The present work deals with the parameter optimization of tuned mass dampers (TMDs) used in the control of vibrating linear structures under stochastic loadings. The performance of the TMD device is deeply affected by its parameters, which should be carefully chosen. In this context, several optimization strategies can be found in the literature; among them, stochastic structural optimization (SSO) and reliability-based optimization (RBO) are particularly addressed in this dissertation. The first part of this work is dedicated to the calculation of the optimal bounds of the TMD parameters in the presence of uncertain but bounded (UBB) structural parameters. The bounds of the optimal TMD parameters are obtained using an approximation technique based on Taylor expansion followed by interval extension. The numerical investigations applied to one-degree-of-freedom (1DOF) and multi-degree-of-freedom (multi-DOF) systems showed that the studied technique is suitable for the SSO strategy and that it is less appropriate for the RBO strategy. As an immediate consequence of the results obtained in the first part, the second part presents and validates a method, called the continuous-optimization nested loop method (CONLM), that provides the exact range of the optimal TMD parameters. The numerical studies demonstrated that the CONLM is time consuming, and to overcome this disadvantage a second method is also presented: the monotonicity based extension method (MBEM) with box splitting. Both methods have been applied in the context of the RBO strategy with 1DOF and multi-DOF systems. The issue of effectiveness and robustness of the presented optimum bounds of the TMD parameters is also addressed, and it has been demonstrated that the optimum solution corresponding to the deterministic context (deterministic structural parameters) provides good effectiveness and robustness. Another aspect of the RBO approach is dealt with in the third part of the present work. Indeed, a new RBO strategy for TMD parameters based on an energetic criterion is presented and validated. The new RBO approach is linked to a new failure mode characterized by the exceedance of the power dissipated into the controlled structure over a certain threshold during some time interval. Based on the outcrossing approach and Rice's formula, the new strategy is first applied to a 1DOF system, and an exact expression of the failure probability is calculated. After that, a multi-DOF system is considered; the minimum cross-entropy method is used to provide an approximation of the failure probability, and the optimization is then carried out. The numerical investigations showed the superiority of the presented strategy when compared with others from the literature.
APA, Harvard, Vancouver, ISO, and other styles
26

Kudikala, Rajesh. "System architecture design using multi-criteria optimization." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/9703/.

Full text
Abstract:
System architecture is defined as the description of a complex system in terms of its functional requirements, physical elements and their interrelationships. Designing a complex system architecture can be a difficult task involving multi-faceted trade-off decisions. The system architecture designs often have many project-specific goals involving mix of quantitative and qualitative criteria and a large design trade space. Several tools and methods have been developed to support the system architecture design process in the last few decades. However, many conventional problem solving techniques face difficulties in dealing with complex system design problems having many goals. In this research work, an interactive multi-criteria design optimization framework is proposed for solving many-objective system architecture design problems and generating a well distributed set of Pareto optimal solutions for these problems. System architecture design using multi-criteria optimization is demonstrated using a real-world application of an aero engine health management (EHM) system. A design process is presented for the optimal deployment of the EHM system functional operations over physical architecture subsystems. The EHM system architecture design problem is formulated as a multi-criteria optimization problem. The proposed methodology successfully generates a well distributed family of Pareto optimal architecture solutions for the EHM system, which provides valuable insights into the design trade-offs. Uncertainty analysis is implemented using an efficient polynomial chaos approach and robust architecture solutions are obtained for the EHM system architecture design. Performance assessment through evaluation of benchmark test metrics demonstrates the superior performance of the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
27

Pissarides, Savvas. "Interactive multiple criteria optimization for capital budgeting." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7723.

Full text
Abstract:
This thesis presents a capital budgeting problem faced by a major telecommunications company. The purpose of this thesis is to address the capital budgeting problem in order to establish a framework for the measurement and evaluation of alternative capital allocation decisions which are compatible with the mission of the company. The solution method follows three major avenues of optimization: multiple criteria, multiple constraints and interactivity. The problem is solved using the Analytic Hierarchy Process to obtain an initial solution which is then improved by an interactive method allowing users to direct the search for an acceptable allocation. The method is implemented by a decision support system hinging on a graphic user interface. The support system has been used by practitioners to evaluate alternatives of a real problem. Results and enhancements are discussed.
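For reference, the Analytic Hierarchy Process step used to obtain the initial solution derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency index as a sanity check. A minimal sketch on an assumed 3x3 comparison matrix (the company's actual criteria and judgments are not reproduced):

import numpy as np

A = np.array([[1.0, 3.0, 5.0],            # assumed pairwise comparisons of three criteria
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                  # principal eigenvalue / eigenvector
weights = np.abs(vecs[:, k].real)
weights /= weights.sum()                  # normalized priority weights of the criteria
ci = (vals.real[k] - len(A)) / (len(A) - 1)   # consistency index (CR = CI / random index)
print(weights, ci)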
APA, Harvard, Vancouver, ISO, and other styles
28

Soylu, Banu. "An Evolutionary Algorithm For Multiple Criteria Problems." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608134/index.pdf.

Full text
Abstract:
In this thesis, we develop an evolutionary algorithm for approximating the Pareto frontier of multi-objective continuous and combinatorial optimization problems. The algorithm tries to evolve the population of solutions towards the Pareto frontier and distribute it over the frontier in order to maintain a well-spread representation. The fitness score of each solution is computed with a Tchebycheff distance function and a non-dominated sorting approach. Each solution chooses its own favorable weights according to the Tchebycheff distance function. Seed solutions in the initial population and a crowding measure also help to achieve satisfactory results. In order to test the performance of our evolutionary algorithm, we use some continuous and combinatorial problems. The continuous test problems taken from the literature have special difficulties that an evolutionary algorithm has to deal with. Experimental results of our algorithm on these problems are provided. One of the combinatorial problems we address is the multi-objective knapsack problem. We carry out experiments on test data for this problem given in the literature. We work on two bi-criteria p-hub location problems and propose an evolutionary algorithm to approximate the Pareto frontiers of these problems. We test the performance of our algorithm on Turkish Postal System (PTT) data set (TPDS), AP (Australian Post) and CAB (US Civil Aeronautics Board) data sets. The main contribution of this thesis is in the field of developing a multi-objective evolutionary algorithm and applying it to a number of multi-objective continuous and combinatorial optimization problems.
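The fitness computation described above can be illustrated with a small sketch of the weighted Tchebycheff distance and of a solution picking its own most favourable weight vector; the weight set, ideal point and objective values are illustrative assumptions, and the non-dominated sorting and crowding parts of the algorithm are not shown.

```python
def tchebycheff_distance(objectives, weights, ideal):
    """Weighted Tchebycheff distance of a solution's objective vector
    from the ideal (utopian) point; smaller is better."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

def best_weights_for(objectives, weight_set, ideal):
    """Let a solution pick its own most favourable weight vector, as the
    abstract describes, by minimising its Tchebycheff score."""
    return min(weight_set, key=lambda w: tchebycheff_distance(objectives, w, ideal))

# Illustrative bi-objective example (minimisation), not the thesis's data.
ideal = (0.0, 0.0)
weight_set = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
solution = (2.0, 5.0)
w = best_weights_for(solution, weight_set, ideal)
print(w, tchebycheff_distance(solution, w, ideal))
```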
APA, Harvard, Vancouver, ISO, and other styles
29

Anil, Kivanc Ali. "Multi-criteria analysis in Naval Ship Design /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FAnil.pdf.

Full text
Abstract:
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, March 2005.
Thesis Advisor(s): Fotis Papoulias, Roman B. Statnikov. Includes bibliographical references (p. 241). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
30

Anil, Kivanc A. "Multi-criteria analysis in naval ship design." Thesis, Monterey California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2325.

Full text
Abstract:
Approved for public release, distribution is unlimited
Numerous optimization problems involve systems with multiple and often contradictory criteria. Such contradictory criteria have been an issue for marine/naval engineering design studies for many years. This problem becomes more important when one considers novel ship types with very limited or no operational record. A number of approaches have been proposed to address these multiple criteria design optimization problems. This thesis follows the Parameter Space Investigation (PSI) technique to address these problems. The PSI method is implemented with a software package called MOVI (Multi-criteria Optimization and Vector Identification). Two marine/naval engineering design optimization models were investigated using the PSI technique along with the MOVI software. The first example was a bulk carrier design model which was previously studied with other optimization methods. This model, which was selected due to its relatively small dimensionality and the availability of existing studies, was utilized in order to demonstrate and validate the features of the proposed approach. A more realistic example was based on the "MIT Functional Ship Design Synthesis Model" with a greater number of parameters, criteria, and functional constraints. A series of optimization studies conducted for this model demonstrated that the proposed approach can be implemented in a naval ship design environment and can lead to a large design parameter space exploration with minimum computational effort.
Lieutenant Junior Grade, Turkish Navy
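A rough sketch of the Parameter Space Investigation idea referenced above: probe the design box with a space-filling set of trial points, discard infeasible designs, and keep the non-dominated criteria vectors. Plain random sampling is used here as a stand-in for the LP-tau sequences used by PSI and the MOVI software, and the toy criteria and constraint are invented for the example.

```python
import random

def pareto_filter(points):
    """Keep criteria vectors that are not dominated (all objectives minimised)."""
    kept = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            kept.append(p)
    return kept

def probe_design_space(bounds, criteria, constraints, n_trials=1000, seed=0):
    """Very small stand-in for the PSI idea: sample the parameter box, discard
    infeasible designs, return the non-dominated criteria vectors."""
    rng = random.Random(seed)
    feasible = []
    for _ in range(n_trials):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(g(x) <= 0 for g in constraints):
            feasible.append(tuple(f(x) for f in criteria))
    return pareto_filter(feasible)

# Toy two-parameter, two-criteria example (illustrative only).
bounds = [(0.0, 1.0), (0.0, 1.0)]
criteria = [lambda x: x[0], lambda x: (1 - x[0]) ** 2 + x[1]]
constraints = [lambda x: 0.2 - x[1]]          # require x[1] >= 0.2
print(probe_design_space(bounds, criteria, constraints)[:5])
```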
APA, Harvard, Vancouver, ISO, and other styles
31

Cortes, Quiroz C. A. "Design, analysis and multi-criteria optimization of micromixers." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1357309/.

Full text
Abstract:
Mixing is a key process in microfluidic systems, since samples and reagents generally need to be mixed thoroughly before chemical or biological analysis or reactions. Micromixers are designed to fulfil this critical process. In general, the development of microdevices is a competitive field that requires researchers to achieve shorter times and lower costs in prototyping. Computational Fluid Dynamics (CFD) helps in reducing the time from concept to device design. The designer's intuition and experience usually drive its application to design improvement, by analyzing some physical variables to determine the effect of design parameters and to adjust them according to the pursued objectives. In this thesis, a design and optimization strategy is presented and used for the analysis and design of micromixers. The method systematically integrates CFD with an optimization strategy based on the use of Design of Experiments, Surrogate Modelling and Multi-Objective Genetic Algorithm techniques. The aim is to define optimum designs that give the trade-off of the performance parameters, which in this study are the mixing index, defined on the basis of the mass concentration distribution, and the pressure drop in the microchannel. Three types of micromixers have been studied and their geometric parameters have been optimized: the Staggered Herringbone Mixer and two novel designs, a planar micromixer with baffles in the microchannel and a 3-D T-type micromixer. A complete fabrication method was implemented as part of this thesis work and was used to fabricate some of the micromixers. Experimental measurements and published data have been used to validate the numerical results. The outcomes of this thesis demonstrate that using advanced optimisation techniques on the basis of CFD solutions and analyses allows the design of optimum micromixers for different operating conditions, which can be set by the designer, without it being necessary to start the method from a reference design.
APA, Harvard, Vancouver, ISO, and other styles
32

Safer, Hershel M. "Fast approximation schemes for multi-criteria combinatorial optimization." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Parr, James. "Improvement criteria for constraint handling and multiobjective optimization." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/349978/.

Full text
Abstract:
In engineering design, it is common to predict performance based on complex computer codes with long run times. These expensive evaluations can make automated and wide ranging design optimization a difficult task. This becomes even more challenging in the presence of constraints or conflicting objectives. When the design process involves expensive analysis, surrogate (response surface or meta) models can be adapted in different ways to efficiently converge towards global solutions. A popular approach involves constructing a surrogate based on some initial sample evaluated using the expensive analysis. Next, some statistical improvement criterion is searched inexpensively to find model update points that offer some design improvement or model refinement. These update points are evaluated, added to the set of initial designs and the process is repeated with the aim of converging towards the global optimum. In constrained problems, the improvement criterion is required to update the surrogate models in regions that offer both objective and constraint improvement whilst converging toward the best feasible optimum. In multiobjective problems, the aim is to update the surrogates in such a way that the evaluated points converge towards a spaced out set of Pareto solutions. This thesis investigates efficient improvement criteria to address both of these situations. This leads to the development of an improvement criterion that better balances improvement of the objective and all the constraint approximations. A goal-based approach is also developed suitable for expensive multiobjective problems. In all cases, improvement criteria are encouraged to select multiple updates, enabling designs to be evaluated in parallel, further accelerating the optimization process.
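For context, a common baseline for the constrained improvement criteria discussed above is expected improvement multiplied by the probability of feasibility of each surrogate-modelled constraint. The sketch below implements that standard baseline, not the improved criterion developed in the thesis, and the numbers in the example are illustrative.

```python
import math

def _phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a surrogate prediction (mu, sigma) over the
    best feasible objective value found so far (minimisation)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * _Phi(z) + sigma * _phi(z)

def constrained_improvement(mu, sigma, f_best, constraint_preds):
    """EI weighted by the probability that each surrogate-modelled constraint
    g(x) <= 0 is satisfied (constraints assumed independent)."""
    p_feasible = 1.0
    for g_mu, g_sigma in constraint_preds:
        p_feasible *= _Phi(-g_mu / g_sigma) if g_sigma > 0 else float(g_mu <= 0.0)
    return expected_improvement(mu, sigma, f_best) * p_feasible

# Illustrative numbers only: objective predicted at 1.2 +/- 0.4, best-so-far 1.5,
# one constraint predicted at -0.1 +/- 0.2.
print(constrained_improvement(1.2, 0.4, 1.5, [(-0.1, 0.2)]))
```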
APA, Harvard, Vancouver, ISO, and other styles
34

Martin, Megan Wydick. "Computational Studies in Multi-Criteria Scheduling and Optimization." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78699.

Full text
Abstract:
Multi-criteria scheduling provides the opportunity to create mathematical optimization models that are applicable to a diverse set of problem domains in the business world. This research addresses two different employee scheduling applications using multi-criteria objectives that present decision makers with trade-offs between global optimality and the level of disruption to current operating resources. Additionally, it investigates a scheduling problem from the product testing domain and proposes a heuristic solution technique for the problem that is shown to produce very high-quality solutions in short amounts of time. Chapter 2 addresses a grant administration workload-to-staff assignment problem that occurs in the Office of Research and Sponsored Programs at land-grant universities. We identify the optimal workload assignment plan, which differs considerably from the current state and would require multiple reassignments. To achieve the optimal workload reassignment plan, we demonstrate a technique to identify the n best reassignments from the current state that provide the greatest progress toward the utopian solution. Solving this problem over several values of n and plotting the results allows the decision maker to visualize the reassignments and the progress achieved toward the utopian balanced workload solution. Chapter 3 identifies a weekly schedule that seeks the most cost-effective set of coach-to-program assignments in a gymnastics facility. We identify the optimal assignment plan using an integer linear programming model. The optimal assignment plan differs greatly from the status quo; therefore, we utilize an approach similar to that of Chapter 2 and use a multiple objective optimization technique to identify the n best staff reassignments. Again, the decision maker can visualize the trade-off between the number of reassignments and the resulting progress toward the utopian staffing cost solution and make an informed decision about the best number of reassignments. Chapter 4 focuses on product test scheduling in the presence of in-process and at-completion inspection constraints. Such testing arises in the context of the manufacture of products that must perform reliably in extreme environmental conditions. Each product receives a certification at the successful completion of a predetermined series of tests. Operational efficiency is enhanced by determining the optimal order and start times of tests so as to minimize the makespan while ensuring that technicians are available when needed to complete in-process and at-completion inspections. We first formulate a mixed-integer linear programming (MILP) model to identify the optimal solution to this problem using IBM ILOG CPLEX Interactive Optimizer 12.7. We also present a genetic algorithm (GA) solution that is implemented and solved in Microsoft Excel. Computational results are presented demonstrating the relative merits of the MILP and GA solution approaches across a number of scenarios.
Ph. D.
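The genetic algorithm mentioned for the test-scheduling problem in Chapter 4 is implemented in Microsoft Excel in the thesis; a tiny permutation GA of the same general flavour (tournament selection, order crossover, swap mutation) is sketched below in Python. The toy makespan function and all constants are assumptions for the sketch and do not model the inspection constraints of the thesis.

```python
import random

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice from p1, fill the remaining jobs in p2's order."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def ga_minimise(makespan, n_jobs, pop_size=40, generations=200, seed=1):
    """Tiny permutation GA: tournament selection, OX crossover, swap mutation."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        nxt = pop[:2]                                   # elitism
        while len(nxt) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3), key=makespan) for _ in range(2))
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:                      # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=makespan)

# Toy objective: total duration (constant) plus a sequence-dependent "setup"
# penalty between consecutive tests; only the penalty varies with the order.
durations = [4, 2, 7, 3, 5, 6]
def toy_makespan(order):
    return sum(durations[j] for j in order) + sum(
        abs(order[k + 1] - order[k]) for k in range(len(order) - 1))

print(toy_makespan(ga_minimise(toy_makespan, len(durations))))
```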
APA, Harvard, Vancouver, ISO, and other styles
35

Ягуп, Катерина Валеріївна. "Покращання енергетичних показників електротехнічних систем із застосуванням пошукової оптимізації на комп'ютерних моделях." Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/35541.

Full text
Abstract:
Дисертація на здобуття наукового ступеня доктора технічних наук за спеціальністю 05.09.03 – електротехнічні комплекси та системи. – Національний технічний університет "Харківський політехнічний інститут", Харків, 2018. Розглядаються методи оптимізації режимів систем електропостачання з несиметричними і нелінійними навантаженнями з метою підвищення енергетичних показників і розрахунку симетро-компенсуючих пристроїв. Обґрунтовано необхідність і можливість застосування комп'ютерних засобів, розроблено узагальнені алгоритми реалізації пошукової оптимізації. Розроблені методи успішно застосовані для оптимізації режимів в трипровідних і чотирипровідних системах, в тому числі в системах з взаємно-зв'язаними індуктивностями, в системах залізничного електропостачання, в системі з нейтралером, в системах живлення асинхронних двигунів, освітлювальних приладів високого тиску, в системах з силовими активними фільтрами, а також для випадків кількох навантажень з урахуванням вкладу кожного з них в зниження енергетичних показників системи.
Thesis for a Doctor’s degree in Engineering Science by specialty 05.09.03 – Electrical Engineering Complexes and Systems. – National Technical University "Kharkov Polytechnic Institute", Kharkiv, 2018. The dissertation is devoted to the development and research of optimization methods for the modes of power supply systems in electrotechnical systems with asymmetric and nonlinear loads, in order to increase the energy indices and to calculate the parameters of symmetry-compensating devices using mathematical and computer models and search optimization implemented with modern computer mathematics software. The necessity and the possibility of using computer tools for solving the set tasks are substantiated. Generalized algorithms for implementing search optimization using modern software packages are developed. The possibilities of applying different optimization criteria for solving the problems of increasing the energy indices of power supply systems with asymmetric and nonlinear loads are shown. It was found that the search optimization tends to squeeze out an inappropriate element of the synthesized device, and that relaxing the optimization variables by increasing their number makes it possible to locate a local minimum faster and then to recalculate the parameters corresponding to the global minimum. The developed method of search optimization using models of power supply systems has been successfully applied to the optimization of regimes and the synthesis of symmetry-compensating devices in three-phase three-wire and four-wire power supply systems. The possibilities of using the optimization tools of the Mathcad and Matlab software packages are considered, in particular zero-order methods that do not require the calculation of derivatives, such as the deformed polyhedron (Nelder-Mead) method, as well as the conjugate gradient method. An algorithm of load equivalenting is proposed, with the help of which the symmetrical and asymmetrical parts of the load are separated. After this, the parameters of the balancing device are determined with sufficient accuracy by means of the Steinmetz and Kennelly formulas. A method of rotating the direct symmetric component of the currents, preserving symmetry and the mode of full reactive power compensation, is proposed. For four-wire systems, the use of a generalized reactive element in the symmetry-compensating device is proposed, which accelerates the process of reaching the optimal solution. A method of determining the optimal mode based on the decomposition of the power supply system, which improves the convergence of the solution process, is developed. Power supply systems containing inductively coupled elements are considered, as is the calculation of the symmetry-compensating device of the traction system of an alternating current railway power supply. A four-wire system with a neutralizer was studied; with the help of search optimization, the parameters of the symmetry-compensating device were determined, which makes it possible to balance and counterbalance such a system. The possibilities of optimizing the regime in the power supply system of asynchronous motors, including under asymmetry of the supply network, are shown. Compensation of reactive power makes it possible here to reduce the consumed currents and increase the efficiency of the system.
To find the optimal modes of systems with an arc discharge, visual models have been developed that are adapted for use with the SimPowerSystems library elements. With the help of these models, the possibilities of increasing the power indices of arc discharge power supply systems, including high-pressure lighting devices, are investigated. It was shown that optimization of the power factor alone, calculated with the help of the proposed methods, leads to a decrease in the current consumed at the fundamental harmonic, which substantially reduces the losses in the transmission lines. For a thyristor compensator with single-stage switching, the advantage of symmetric control is proven, which greatly improves the harmonic spectrum of the supply currents. The use of the search optimization method to increase the power factor is shown without the use of the traditional, rather complicated control systems of power active filters. Comparison signals, synchronized with the phase voltages of the supply system, are used as control signals. The amplitudes of these signals are taken as the optimization variables, and the optimization criterion is determined by the balance of active power in the system, which is characterized by the stabilization of the periodic voltage on the storage capacitor of the power active filter. The problems of synthesis of symmetry-compensating devices for several asymmetrical loads in parallel and cascade connection are considered. The task is to determine the parameters of the symmetry-compensating devices for each of the loads separately, and the contribution of each connected load to the creation of asymmetry and the generation of reactive power must be taken into account. This problem is solved by the method of search optimization, and it is shown that the objective function should be formed from the currents in the feeders that supply energy from the point where each load is connected to the network up to the common point of connection of the loads and the symmetry-compensating device. It is effective to use the developed decomposition method, which makes it possible to simplify and accelerate the determination of the optimal regime of the system under study, taking into account the contribution of each load to the reduction of the energy parameters of the system as a whole. The case when two loads, consisting of both unbalanced linear and nonlinear loads, are simultaneously connected to the network is also analyzed. Optimization of the regime with an increased power factor is achieved by using a parallel power active filter controlled according to the proposed optimization algorithm. The methods and algorithms of search optimization developed and presented in this thesis for increasing the energy indicators of power supply systems with asymmetric and nonlinear consumers are characterized by high accuracy, the maximum possible use of computer technology, low computing time and the possibility of complete automation of design and research procedures in solving theoretical and practical tasks related to increasing the energy performance and the quality of electrical energy in power supply systems.
APA, Harvard, Vancouver, ISO, and other styles
36

Ягуп, Катерина Валеріївна. "Покращання енергетичних показників електротехнічних систем із застосуванням пошукової оптимізації на комп'ютерних моделях." Thesis, Харківський національний університет міського господарства ім. О. М. Бекетова, 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/35543.

Full text
Abstract:
Дисертація на здобуття наукового ступеня доктора технічних наук за спеціальністю 05.09.03 – електротехнічні комплекси та системи. – Національний технічний університет "Харківський політехнічний інститут", Харків, 2018. Розглядаються методи оптимізації режимів систем електропостачання з несиметричними і нелінійними навантаженнями з метою підвищення енергетичних показників і розрахунку симетро-компенсуючих пристроїв. Обґрунтовано необхідність і можливість застосування комп'ютерних засобів, розроблено узагальнені алгоритми реалізації пошукової оптимізації. Розроблені методи успішно застосовані для оптимізації режимів в трипровідних і чотирипровідних системах, в тому числі в системах з взаємно-зв'язаними індуктивностями, в системах залізничного електропостачання, в системі з нейтралером, в системах живлення асинхронних двигунів, освітлювальних приладів високого тиску, в системах з силовими активними фільтрами, а також для випадків кількох навантажень з урахуванням вкладу кожного з них в зниження енергетичних показників системи.
Thesis for a Doctor’s degree in Engineering Science by specialty 05.09.03 – Electrical Engineering Complexes and Systems. – National Technical University "Kharkov Polytechnic Institute", Kharkiv, 2018. The dissertation is devoted to the development and research of optimization methods for the modes of power supply systems in electrotechnical systems with asymmetric and nonlinear loads, in order to increase the energy indices and to calculate the parameters of symmetry-compensating devices using mathematical and computer models and search optimization implemented with modern computer mathematics software. The necessity and the possibility of using computer tools for solving the set tasks are substantiated. Generalized algorithms for implementing search optimization using modern software packages are developed. The possibilities of applying different optimization criteria for solving the problems of increasing the energy indices of power supply systems with asymmetric and nonlinear loads are shown. It was found that the search optimization tends to squeeze out an inappropriate element of the synthesized device, and that relaxing the optimization variables by increasing their number makes it possible to locate a local minimum faster and then to recalculate the parameters corresponding to the global minimum. The developed method of search optimization using models of power supply systems has been successfully applied to the optimization of regimes and the synthesis of symmetry-compensating devices in three-phase three-wire and four-wire power supply systems. The possibilities of using the optimization tools of the Mathcad and Matlab software packages are considered, in particular zero-order methods that do not require the calculation of derivatives, such as the deformed polyhedron (Nelder-Mead) method, as well as the conjugate gradient method. An algorithm of load equivalenting is proposed, with the help of which the symmetrical and asymmetrical parts of the load are separated. After this, the parameters of the balancing device are determined with sufficient accuracy by means of the Steinmetz and Kennelly formulas. A method of rotating the direct symmetric component of the currents, preserving symmetry and the mode of full reactive power compensation, is proposed. For four-wire systems, the use of a generalized reactive element in the symmetry-compensating device is proposed, which accelerates the process of reaching the optimal solution. A method of determining the optimal mode based on the decomposition of the power supply system, which improves the convergence of the solution process, is developed. Power supply systems containing inductively coupled elements are considered, as is the calculation of the symmetry-compensating device of the traction system of an alternating current railway power supply. A four-wire system with a neutralizer was studied; with the help of search optimization, the parameters of the symmetry-compensating device were determined, which makes it possible to balance and counterbalance such a system. The possibilities of optimizing the regime in the power supply system of asynchronous motors, including under asymmetry of the supply network, are shown. Compensation of reactive power makes it possible here to reduce the consumed currents and increase the efficiency of the system.
To find the optimal modes of systems with an arc discharge, visual models have been developed that are adapted for use with the SimPowerSystems library elements. With the help of these models, the possibilities of increasing the power indices of arc discharge power supply systems, including high-pressure lighting devices, are investigated. It was shown that optimization of the power factor alone, calculated with the help of the proposed methods, leads to a decrease in the current consumed at the fundamental harmonic, which substantially reduces the losses in the transmission lines. For a thyristor compensator with single-stage switching, the advantage of symmetric control is proven, which greatly improves the harmonic spectrum of the supply currents. The use of the search optimization method to increase the power factor is shown without the use of the traditional, rather complicated control systems of power active filters. Comparison signals, synchronized with the phase voltages of the supply system, are used as control signals. The amplitudes of these signals are taken as the optimization variables, and the optimization criterion is determined by the balance of active power in the system, which is characterized by the stabilization of the periodic voltage on the storage capacitor of the power active filter. The problems of synthesis of symmetry-compensating devices for several asymmetrical loads in parallel and cascade connection are considered. The task is to determine the parameters of the symmetry-compensating devices for each of the loads separately, and the contribution of each connected load to the creation of asymmetry and the generation of reactive power must be taken into account. This problem is solved by the method of search optimization, and it is shown that the objective function should be formed from the currents in the feeders that supply energy from the point where each load is connected to the network up to the common point of connection of the loads and the symmetry-compensating device. It is effective to use the developed decomposition method, which makes it possible to simplify and accelerate the determination of the optimal regime of the system under study, taking into account the contribution of each load to the reduction of the energy parameters of the system as a whole. The case when two loads, consisting of both unbalanced linear and nonlinear loads, are simultaneously connected to the network is also analyzed. Optimization of the regime with an increased power factor is achieved by using a parallel power active filter controlled according to the proposed optimization algorithm. The methods and algorithms of search optimization developed and presented in this thesis for increasing the energy indicators of power supply systems with asymmetric and nonlinear consumers are characterized by high accuracy, the maximum possible use of computer technology, low computing time and the possibility of complete automation of design and research procedures in solving theoretical and practical tasks related to increasing the energy performance and the quality of electrical energy in power supply systems.
APA, Harvard, Vancouver, ISO, and other styles
37

Filatovas, Ernestas. "Solving Multiple Criteria Optimization Problems in an Interactive Way." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120402_093953-80981.

Full text
Abstract:
In practice, optimization problems often involve multiple criteria. The criteria are usually contradictory, so the final decision depends on a decision maker. When the problem is solved interactively, the decision maker can change his/her preferences during the decision process. Moreover, it is important to obtain solutions from the whole Pareto front. A decision support system adapted to the specifics of the problem is essential for solving multiple criteria optimization problems interactively. The objects of research are multiple criteria optimization problems, interactive methods for solving these problems, interactive decision support systems, and the application of parallel computing in decision support systems. Multiple criteria optimization methods are analyzed in the dissertation, with the focus on methods for a uniform distribution of solutions on the Pareto front as well as on interactive methods. An interactive way of solving multicriteria optimization problems, which finds alternative solutions uniformly distributed on the Pareto front, is proposed and investigated in this dissertation. An interactive decision support system is developed which integrates the created interactive solving way, the visualization of the decision process and its parallelization for multiple criteria optimization. Solving strategies for the case when a multiple criteria optimization problem is solved interactively using a computer cluster are developed and compared experimentally. The time required for a... [to full text]
Praktikoje dažnai tenka spręsti sudėtingus daugiakriterinius optimizavimo uždavinius, kai kriterijai būna prieštaringi, o galutinis apsisprendimas priklauso nuo sprendimų priėmėjo. Kai sprendimų priėmėjas dalyvauja sprendimo procese interaktyviai, tai jis gali koreguoti prioritetus ir siekiamus tikslus uždavinio sprendimo eigoje, kas įgalina spęsti uždavinius, turinčius daug kriterijų ir apribojimų. Be to, sprendimo priėmėjui svarbu gauti sprendinius iš visos Pareto aibės. Interaktyviam uždavinių sprendimui būtina sprendimų paramos sistema, kurios grafinė sąsaja yra pritaikyta sprendžiamam uždaviniui. Šio darbo tyrimų sritis yra interaktyvus daugiakriterinių optimizavimo uždavinių sprendimas bei sprendimų paramos sistemos. Disertacijoje nagrinėjant daugiakriterinio optimizavimo metodus, didesnis dėmesys skirtas metodams, užtikrinantiems gaunamų sprendinių tolygų pasiskirstymą Pareto aibėje bei interaktyviems metodams. Pasiūlytas ir ištirtas daugiakriterinių optimizavimo uždavinių sprendimo būdas, leidžiantis spręsti daugiakriterinius optimizavimo uždavinius interaktyviai ir užtikrinantis gaunamų sprendinių tolygų pasiskirstymą Pareto aibėje. Sukurta ir ištirta interaktyvi daugiakriterinių optimizavimo uždavinių sprendimų paramos sistemą, apjungianti pasiūlytą optimizavimo uždavinių sprendimo būdą, sprendimo proceso vizualizavimą ir jo lygiagretinimą. Taip pat pasiūlyta sprendimo strategija, pagal kurią sprendžiant daugiakriterinį optimizavimo uždavinį pasitelkiamas... [toliau žr. visą tekstą]
APA, Harvard, Vancouver, ISO, and other styles
38

Tahvili, Sahar [Verfasser]. "Multi-Criteria Optimization of System Integration Testing / Sahar Tahvili." München : GRIN Verlag, 2019. http://d-nb.info/1190430045/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Cui, Songye. "Multi-criteria optimization algorithms for high dose rate brachytherapy." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37180.

Full text
Abstract:
L’objectif général de cette thèse est d’utiliser les connaissances en physique de la radiation, en programmation informatique et en équipement informatique à la haute pointe de la technologie pour améliorer les traitements du cancer. En particulier, l’élaboration d’un plan de traitement en radiothérapie peut être complexe et dépendant de l’utilisateur. Cette thèse a pour objectif de simplifier la planification de traitement actuelle en curiethérapie de la prostate à haut débit de dose (HDR). Ce projet a débuté à partir d’un algorithme de planification inverse largement utilisé, la planification de traitement inverse par recuit simulé (IPSA). Pour aboutir à un algorithme de planification inverse ultra-rapide et automatisé, trois algorithmes d’optimisation multicritères (MCO) ont été mis en oeuvre. Suite à la génération d’une banque de plans de traitement ayant divers compromis avec les algorithmes MCO, un plan de qualité a été automatiquement sélectionné. Dans la première étude, un algorithme MCO a été introduit pour explorer les frontières de Pareto en curiethérapie HDR. L’algorithme s’inspire de la fonctionnalité MCO intégrée au système Raystation (RaySearch Laboratories, Stockholm, Suède). Pour chaque cas, 300 plans de traitement ont été générés en série pour obtenir une approximation uniforme de la frontière de Pareto. Chaque plan optimal de Pareto a été calculé avec IPSA et chaque nouveau plan a été ajouté à la portion de la frontière de Pareto où la distance entre sa limite supérieure et sa limite inférieure était la plus grande. Dans une étude complémentaire, ou dans la seconde étude, un algorithme MCO basé sur la connaissance (kMCO) a été mis en oeuvre pour réduire le temps de calcul de l’algorithme MCO. Pour ce faire, deux stratégies ont été mises en oeuvre : une prédiction de l’espace des solutions cliniquement acceptables à partir de modèles de régression et d’un calcul parallèle des plans de traitement avec deux processeurs à six coeurs. En conséquence, une banque de plans de traitement de petite taille (14) a été générée et un plan a été sélectionné en tant que plan kMCO. L’efficacité de la planification et de la performance dosimétrique ont été comparées entre les plans approuvés par le médecin et les plans kMCO pour 236 cas. La troisième et dernière étude de cette thèse a été réalisée en coopération avec Cédric Bélanger. Un algorithme MCO (gMCO) basé sur l’utilisation d’un environnement de développement compatible avec les cartes graphiques a été mis en oeuvre pour accélérer davantage le calcul. De plus, un algorithme d’optimisation quasi-Newton a été implémenté pour remplacer le recuit simulé dans la première et la deuxième étude. De cette manière, un millier de plans de traitement avec divers compromis et équivalents à ceux générés par IPSA ont été calculés en parallèle. Parmi la banque de plans de traitement généré par l’agorithme gMCO, un plan a été sélectionné (plan gMCO). Le temps de planification et les résultats dosimétriques ont été comparés entre les plans approuvés par le médecin et les plans gMCO pour 457 cas. Une comparaison à grande échelle avec les plans approuvés par les radio-oncologues montre que notre dernier algorithme MCO (gMCO) peut améliorer l’efficacité de la planification du traitement (de quelques minutes à 9:4 s) ainsi que la qualité dosimétrique des plans de traitements (des plans passant de 92:6% à 99:8% selon les critères dosimétriques du groupe de traitement oncologique par radiation (RTOG)). 
Avec trois algorithmes MCO mis en oeuvre, cette thèse représente un effort soutenu pour développer un algorithme de planification inverse ultra-rapide, automatique et robuste en curiethérapie HDR.
The overall purpose of this thesis is to use the knowledge of radiation physics, computer programming and computing hardware to improve cancer treatments. In particular, designing a treatment plan in radiation therapy can be complex and user-dependent, and this thesis aims to simplify current treatment planning in high dose rate (HDR) prostate brachytherapy. This project was started from a widely used inverse planning algorithm, Inverse Planning Simulated Annealing (IPSA). In order to eventually lead to an ultra-fast and automatic inverse planning algorithm, three multi-criteria optimization (MCO) algorithms were implemented. With MCO algorithms, a desirable plan was selected after computing a set of treatment plans with various trade-offs. In the first study, an MCO algorithm was introduced to explore the Pareto surfaces in HDR brachytherapy. The algorithm was inspired by the MCO feature integrated in the Raystation system (RaySearch Laboratories, Stockholm, Sweden). For each case, 300 treatment plans were serially generated to obtain a uniform approximation of the Pareto surface. Each Pareto optimal plan was computed with IPSA, and each new plan was added to the Pareto surface portion where the distance between its upper boundary and its lower boundary was the largest. In a companion study, or the second study, a knowledge-based MCO (kMCO) algorithm was implemented to shorten the computation time of the MCO algorithm. To achieve this, two strategies were implemented: a prediction of the clinically relevant solution space with previous knowledge, and a parallel computation of treatment plans with two six-core CPUs. As a result, a small size (14) plan dataset was created, and one plan was selected as the kMCO plan. The planning efficiency and the dosimetric performance were compared between the physician-approved plans and the kMCO plans for 236 cases. The third and final study of this thesis was conducted in cooperation with Cédric Bélanger. A graphics processing unit (GPU) based MCO (gMCO) algorithm was implemented to further speed up the computation. Furthermore, a quasi-Newton optimization engine was implemented to replace simulated annealing in the first and the second study. In this way, one thousand IPSA equivalent treatment plans with various trade-offs were computed in parallel. One plan was selected as the gMCO plan from the calculated plan dataset. The planning time and the dosimetric results were compared between the physician-approved plans and the gMCO plans for 457 cases. A large-scale comparison against the physician-approved plans shows that our latest MCO algorithm (gMCO) can result in an improved treatment planning efficiency (from minutes to 9.4 s) as well as an improved treatment plan dosimetric quality (Radiation Therapy Oncology Group (RTOG) acceptance rate from 92.6% to 99.8%). With three implemented MCO algorithms, this thesis represents a sustained effort to develop an ultra-fast, automatic and robust inverse planning algorithm in HDR brachytherapy.
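The serial front-filling strategy described in the first study (always re-optimising where the current approximation has its largest gap) can be sketched for two objectives as follows; the solver callable, the toy front and the plan count are illustrative assumptions, and the sketch uses a simple weighted-sum subproblem rather than IPSA.

```python
import math

def fill_pareto_front(solve_weighted, n_plans=20):
    """Greedy bi-objective front filling: repeatedly re-optimise where the
    current approximation has its largest gap (a simplified sandwich scheme)."""
    anchors = [solve_weighted(1.0, 0.0), solve_weighted(0.0, 1.0)]
    front = sorted(set(anchors))
    if len(front) < 2:
        return front
    while len(front) < n_plans:
        # Locate the widest gap between neighbouring plans.
        gaps = [(math.dist(front[i], front[i + 1]), i) for i in range(len(front) - 1)]
        _, i = max(gaps)
        a, b = front[i], front[i + 1]
        # Weights proportional to the segment's normal direction.
        w1, w2 = abs(b[1] - a[1]), abs(b[0] - a[0])
        new = solve_weighted(w1, w2)
        if new in front:          # no progress in this gap; stop early
            break
        front.append(new)
        front.sort()
    return front

# Toy convex front f2 = (1 - sqrt(f1))^2, minimised exactly over a grid
# (an illustrative stand-in for an IPSA-style treatment plan optimiser).
def toy_solver(w1, w2, grid=200):
    candidates = [(t, (1 - math.sqrt(t)) ** 2) for t in
                  (i / grid for i in range(grid + 1))]
    return min(candidates, key=lambda f: w1 * f[0] + w2 * f[1])

print(fill_pareto_front(toy_solver, n_plans=8))
```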
APA, Harvard, Vancouver, ISO, and other styles
40

Cabrera, Rios Mauricio. "MULTIPLE CRITERIA OPTIMIZATION STUDIES IN REACTIVE IN-MOLD COATING." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1022105843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Mouffe, Mélodie. "Multilevel optimization in infinity norm and associated stopping criteria." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT011G/document.

Full text
Abstract:
Cette thèse se concentre sur l'étude d'un algorithme multi niveaux de régions de confiance en norme infinie, conçu pour la résolution de problèmes d'optimisation non linéaires de grande taille pouvant être soumis a des contraintes de bornes. L'étude est réalisée tant sur le plan théorique que numérique. L'algorithme RMTR8 que nous étudions ici a été élaboré a partir de l'algorithme présente par Gratton, Sartenaer et Toint (2008b), et modifie d'abord en remplaçant l'usage de la norme Euclidienne par une norme infinie, et ensuite en l'adaptant a la résolution de problèmes de minimisation soumis a des contraintes de bornes. Dans un premier temps, les spécificités du nouvel algorithme sont exposées et discutées. De plus, l'algorithme est démontré globalement convergent au sens de Conn, Gould et Toint (2000), c'est-a-dire convergent vers un minimum local au départ de tout point admissible. D'autre part, il est démontre que la propriété d'identification des contraintes actives des méthodes de régions de confiance basées sur l'utilisation d'un point de Cauchy peut être étendue a tout solveur interne respectant une décroissance suffisante. En conséquence, cette propriété d'identification est aussi respectée par une variante particulière du nouvel algorithme. Par la suite, nous étudions différents critères d'arrêt pour les algorithmes d'optimisation avec contraintes de bornes afin de déterminer le sens et les avantages de chacun, et ce pour pouvoir choisir aisément celui qui convient le mieux a certaines situations. En particulier, les critères d'arrêts sont analyses en termes d'erreur inverse (backward erreur), tant au sens classique du terme (avec l'usage d'une norme produit) que du point de vue de l'optimisation multicritères. Enfin, un algorithme pratique est mis en place, utilisant en particulier une technique similaire au lissage de Gauss-Seidel comme solveur interne. Des expérimentations numériques sont réalisées sur une version FORTRAN 95 de l'algorithme. Elles permettent d'une part de définir un panel de paramètres efficaces par défaut et, d'autre part, de comparer le nouvel algorithme a d'autres algorithmes classiques d'optimisation, comme la technique de raffinement de maillage ou la méthode du gradient conjugue, sur des problèmes avec et sans contraintes de bornes. Ces comparaisons numériques semblent donner l'avantage à l'algorithme multi niveaux, en particulier sur les cas peu non-linéaires, comportement attendu de la part d'un algorithme inspire des techniques multi grilles. En conclusion, l'algorithme de région de confiance multi niveaux présente dans cette thèse est une amélioration du précédent algorithme de cette classe d'une part par l'usage de la norme infinie et d'autre part grâce a son traitement de possibles contraintes de bornes. Il est analyse tant sur le plan de la convergence que de son comportement vis-à-vis des bornes, ou encore de la définition de son critère d'arrêt. Il montre en outre un comportement numérique prometteur
This thesis concerns the study of a multilevel trust-region algorithm in infinity norm, designed for the solution of nonlinear optimization problems of large size, possibly subject to bound constraints. The study looks at both theoretical and numerical sides. The multilevel algorithm RMTR∞ that we study has been developed on the basis of the algorithm created by Gratton, Sartenaer and Toint (2008b), which was modified first by replacing the use of the Euclidean norm by the infinity norm and also by adapting it to solve bound-constrained problems. In the first part, the main features of the new algorithm are exposed and discussed. The algorithm is then proved globally convergent in the sense of Conn, Gould and Toint (2000), which means that it converges to a local minimum when starting from any feasible point. Moreover, it is shown that the active constraints identification property of the trust-region methods based on the use of a Cauchy step can be extended to any internal solver that satisfies a sufficient decrease property. As a consequence, this identification property also holds for a specific variant of our new algorithm. Later, we study several stopping criteria for nonlinear bound-constrained algorithms, in order to determine their meaning and their advantages from specific points of view, so that the one best suited to a specific situation can be chosen easily. In particular, the stopping criteria are examined in terms of backward error analysis, which has to be understood both in the usual meaning (using a product norm) and in a multicriteria optimization framework. In the end, a practical algorithm is set up that uses a Gauss-Seidel-like smoothing technique as an internal solver. Numerical tests are run on a FORTRAN 95 version of the algorithm in order to define a set of efficient default parameters for our method, as well as to compare the algorithm with other classical algorithms like the mesh refinement technique and the conjugate gradient method, on both unconstrained and bound-constrained problems. These comparisons seem to give the advantage to the designed multilevel algorithm, particularly on nearly quadratic problems, which is the behavior expected from an algorithm inspired by multigrid techniques. In conclusion, the multilevel trust-region algorithm presented in this thesis is an improvement of the previous algorithm of this kind, because of the use of the infinity norm as well as its handling of bound constraints. Its convergence, its behavior concerning the bounds and the definition of its stopping criteria are studied. Moreover, it shows a promising numerical behavior.
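One standard ingredient of stopping tests for bound-constrained problems, closely related to the criticality measures discussed above, is the infinity norm of the projected gradient step. The sketch below shows that measure and a simple absolute-plus-relative stopping test; the tolerance values are illustrative assumptions, and the backward-error interpretations developed in the thesis are not reproduced here.

```python
def projected_gradient_criticality(x, grad, lower, upper):
    """Infinity-norm criticality measure for bound-constrained minimisation:
    || P_[l,u](x - grad f(x)) - x ||_inf, zero at a first-order critical point."""
    chi = 0.0
    for xi, gi, li, ui in zip(x, grad, lower, upper):
        step = min(max(xi - gi, li), ui) - xi    # projected gradient step
        chi = max(chi, abs(step))
    return chi

def converged(x, grad, lower, upper, abs_tol=1e-6, rel_tol=1e-9):
    """Simple stopping test mixing absolute and gradient-scaled tolerances
    (illustrative thresholds, not the thesis's criteria)."""
    chi = projected_gradient_criticality(x, grad, lower, upper)
    scale = max(abs(g) for g in grad) if grad else 1.0
    return chi <= abs_tol + rel_tol * scale

# Example: one variable pressed against its lower bound with an outward-pointing
# gradient, one interior variable with a tiny gradient.
x, grad = [0.0, 0.5], [2.0, 1e-8]
print(converged(x, grad, lower=[0.0, 0.0], upper=[1.0, 1.0]))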
APA, Harvard, Vancouver, ISO, and other styles
42

Reynolds, Joel Howard. "Multi-criteria assessment of ecological process models using pareto optimization /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Guessab, Benaceur. "Contribution au calcul en plasticité des structures à barres." Grenoble 1, 1992. http://www.theses.fr/1992GRE10201.

Full text
Abstract:
The plastic analysis of bar structures is based on reducing the three-dimensional continuum to a one-dimensional medium, known as the generalized medium. In this medium, the stress state is defined by the six internal forces (or generalized stresses) acting on a cross-section of the bar. The plastic behaviour of the generalized medium is defined by a plasticity criterion expressed as a function of the generalized stresses. This criterion is classically obtained from plastically admissible stress distributions over a cross-section of the bar. The limit loads computed in the one-dimensional medium using the classical criteria cannot be positioned relative to those obtained in the formalism of the three-dimensional medium. A new method for determining plasticity criteria in generalized variables is proposed in this work. The method consists in studying the loads that can be supported by a bar segment subjected to particular kinematic boundary conditions. These conditions are chosen so as to allow a licit transfer of velocity fields from the one-dimensional medium back to the three-dimensional one, so that the meaning of the limit loads obtained in the generalized medium is precisely established. The proposed method is applied to bars with rectangular and I-shaped (double-T) cross-sections. The resulting criteria are compared with the classical criteria, as are the limit loads of structures determined within the proposed and the classical frameworks.
APA, Harvard, Vancouver, ISO, and other styles
44

Конрад, Тетяна Ігорівна, Татьяна Игоревна Конрад, and Tetiana Igorivna Konrad. "Математична модель багатокритеріального розподілу транспортних потоків для автоматизованих систем мультимодальних транспортних мереж." Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/48934.

Full text
Abstract:
Уперше розроблено інфологічну модель факторів, показників та критеріїв оптимальності маршрутуперевезеннявантажів в мультимодальнихтранспортних мережах, яка базується на методах евристичного аналізу предметної галузі і відрізняється формалізацією задачі визначення оптимального маршруту у багатокритеріальній формі ієрархії вкладених груп критеріїв, що забезпечує підвищення адекватності математичних моделей розв’язку оптимізаційних задач транспортного типу. Удосконалено математичну модель багатокритеріального розподілу транспортних потоків в мультимодальних транспортних мережах, яка базується і відрізняється багатокритеріальним вибором оптимального маршруту на графах з використанням вкладених згорток за нелінійною схемою компромісів, що забезпечує підвищення ефективності управління транспортними потоками за компромісним відношенням вектору показників ефективності до вартості. Удосконалено архітектуру програмної системи підтримки прийняття рішень багатокритеріального розподілу транспортних потоків в мультимодальних транспортних мережах, яка базується на використанні в розрахунковому блоці структурних елементів, що забезпечують отримання обумовлених рішень про оптимальний маршрут перевезень, як результат синергетичногооб’єднаннярозробленоїінфологічноїмоделіфакторів та показників і удосконаленої математичної моделі оптимального розподілу транспортних потоків.
Впервые разработана инфологическая модель факторов, показателей и критериев оптимальности маршрута перевозки грузов в мультимодальных транспортных сетях, которая базируется на методах эвристического анализа предметной области и отличается формализацией задачи определения оптимального маршрута в многокритериальной форме иерархии вложенных групп критериев, что обеспечивает повышение адекватности математических моделей решения оптимизационных задач транспортного типа.Усовершенствована математическая модель многокритериального распределения транспортных потоков в мультимодальных транспортных сетях, которая базируется и отличается многокритериальным выбором оптимального маршрута на графах с использованиемвложенных сверток по нелинейной схеме компромиссов, что обеспечивает повышение эффективности управления транспортными потоками по компромиссному отношением вектора показателей эффективности к стоимости. Усовершенствована архитектура программной системы поддержки принятия решений многокритериального распределения транспортных потоков вмультимодальных транспортных сетях, основанная на использовании в расчетном блоке структурных элементов, обеспечивающих получение обусловленных решений про оптимальный маршрут перевозок, как результат синергетического объединения разработанной инфологической модели факторов и показателей и усовершенствованной математической модели оптимального распределения транспортных потоков.
For the first time, an infological model of the factors, indicators and criteria of the optimal route for cargo transportation in multimodal transport networks was developed. It is based on methods of heuristic analysis of the subject area and is distinguished by formalizing the problem of determining the optimal route in a multicriteria form, as a hierarchy of nested groups of criteria, which increases the adequacy of mathematical models for solving transport-type optimization problems. The mathematical model of the multicriteria distribution of traffic flows in multimodal transport networks is improved; it is distinguished by the multicriteria choice of the optimal route on graphs using nested convolutions according to the nonlinear scheme of compromises, which increases the efficiency of traffic flow management in terms of the compromise ratio of the vector of efficiency indicators to cost. The architecture of the software decision support system for the multicriteria distribution of traffic flows in multimodal transport networks is improved. It is based on the use, in the calculation block, of structural elements that provide substantiated decisions on the optimal transportation route as a result of the synergistic combination of the developed infological model of factors and indicators and the improved mathematical model of the optimal distribution of traffic flows. The architecture of the software system is distinguished by an improved structure of the calculation unit, due to the formalization of the transport problem in a multicriteria form and the choice of the optimal route by the integrated efficiency on graph structures. The use of the improved architecture makes it possible to increase the efficiency of traffic flow management in terms of the efficiency and reliability of the resulting solutions. The mathematical support of the computational algorithm is based on the method of multi-criteria selection of the optimal route for the transportation of goods and the method of optimal distribution of limited resources.
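The nested convolutions according to the nonlinear scheme of compromises can be illustrated with a small sketch in which each normalised criterion is penalised as it approaches its worst value and the group scores are folded again at the outer level; the normalisation, weights and example figures are assumptions for the sketch, not the thesis's actual model.

```python
def nonlinear_tradeoff(values, weights):
    """One common form of the nonlinear scheme of compromises: each criterion
    is normalised to [0, 1) (0 = ideal) and penalised as it approaches 1."""
    return sum(w / (1.0 - y) for y, w in zip(values, weights))

def nested_convolution(groups):
    """Hierarchical (nested) convolution: fold each group of normalised
    criteria first, renormalise crudely, then fold the group scores."""
    group_scores = []
    for values, weights, cap in groups:
        raw = nonlinear_tradeoff(values, weights)
        group_scores.append(min(raw / cap, 0.999))   # crude renormalisation
    outer_weights = [1.0 / len(groups)] * len(groups)
    return nonlinear_tradeoff(group_scores, outer_weights)

# Two illustrative criterion groups for a candidate route, with values already
# scaled to [0, 1); all numbers are invented for the example.
route_groups = [
    ([0.30, 0.10], [0.6, 0.4], 10.0),   # economic indicators
    ([0.55, 0.20], [0.5, 0.5], 10.0),   # timeliness / reliability indicators
]
print(nested_convolution(route_groups))   # lower is better
```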
APA, Harvard, Vancouver, ISO, and other styles
45

Villanueva, Jaquez Delia. "Multiple objective optimization of performance based logistics." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Villarreal-Marroquin, Maria G. "A Metamodel based Multiple Criteria Optimization via Simulation Method for Polymer Processing." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1356518813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Medeiros, Anderson Vinicius de. "Modelagem de sistemas dinamicos não lineares utilizando sistemas fuzzy, algoritmos geneticos e funções de base ortonormal." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261859.

Full text
Abstract:
Advisors: Wagner Caradori do Amaral, Ricardo Jose Gabrielli Barreto Campello
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: This work introduces a methodology for generating and optimizing Takagi-Sugeno (TS) fuzzy models with Orthonormal Basis Functions (OBF) for nonlinear dynamic systems based on a genetic algorithm. Orthonormal basis functions are used because they give models properties such as the absence of output feedback and the possibility of reaching reasonable approximation capability with only a few parameters. TS fuzzy models add to these properties interpretability and ease of representing knowledge linguistically. Genetic algorithms are a well-established method for tuning the parameters of TS fuzzy models. In this context, a genetic algorithm was developed for optimizing two architectures: the OBF TS fuzzy model and its extension, the Generalized OBF TS fuzzy model. Local linear and nonlinear models in the consequents of the fuzzy rules were analyzed, as well as the difference between local and global estimation (using least-squares estimation) of the parameters of these local models. Each architecture had a specific chromosome representation in the genetic algorithm. A fitness function based on the Akaike information criterion was developed. With respect to the genetic operators, the arithmetic crossover was modified to maintain population diversity, and the Gaussian mutation had its distribution varied along the generations and differentiated for each gene. In addition, for the first architecture, a method for simplifying solutions through similarity measures was introduced. The methodology was evaluated by modeling two nonlinear dynamic systems: a polymerization process and a magnetic levitator.
Master's
Automation
Master in Electrical Engineering
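As a hedged sketch of the Akaike-based fitness mentioned in the abstract above, the snippet below scores a candidate model with the common least-squares form AIC = N·ln(MSE) + 2p; the exact variant, the fuzzy models, and the data used in the dissertation are not reproduced here, so every value in the example is invented.

```python
import math

def akaike_fitness(y_true, y_pred, n_params):
    """Akaike-style fitness for least-squares models (lower is better).

    Uses AIC = N * ln(MSE) + 2 * p; the variant actually used in the
    original work may differ (e.g., corrected AICc).
    """
    n = len(y_true)
    mse = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n
    return n * math.log(mse + 1e-12) + 2 * n_params   # epsilon guards log(0)

# Toy usage with invented data: a 3-parameter candidate vs. a 6-parameter candidate.
y_true = [0.0, 0.8, 1.5, 1.9, 2.0]
pred_small = [0.10, 0.70, 1.40, 2.00, 2.10]   # coarser fit, fewer parameters
pred_large = [0.02, 0.79, 1.48, 1.91, 2.01]   # tighter fit, more parameters
print("AIC (3 params):", akaike_fitness(y_true, pred_small, n_params=3))
print("AIC (6 params):", akaike_fitness(y_true, pred_large, n_params=6))
```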
APA, Harvard, Vancouver, ISO, and other styles
48

Cirino, Rafael Bernardo Zanetti. "Abordagens de solução para o problema de alocação de aulas a salas." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16112016-142336/.

Full text
Abstract:
This dissertation addresses the Classroom Assignment Problem (CAP). At the beginning of each school year, higher education institutions face a CAP when deciding where classes will be taught; many still solve this problem manually, demanding considerable effort from the responsible staff. In this study, the Instituto de Ciências Matemáticas e de Computação (ICMC) of the Universidade de São Paulo (USP) was taken as a case study for the CAP. An integer programming model is proposed and tackled with exact methods, mono-objective metaheuristics, and a multi-objective approach. A novel neighborhood operator is proposed for the local search and obtains good results, comparable to those of the exact method for a fixed running time. The multi-objective approach is shown to overcome some classical difficulties of the mono-objective approach, such as the uncertainty in choosing weights for the quality metrics. The proposed solution methods give the responsible staff good decision-making support for the CAP.
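To make the neighborhood-based local search concrete, here is a small hypothetical Python sketch (not the dissertation's code or its actual neighborhood operator): classes with seat demands are assigned to rooms, a penalty counts seat shortages and room/time-slot clashes, and the neighborhood either swaps the rooms of two classes or relocates one class; the instance data and penalty weights are invented.

```python
import random

# Invented toy instance: class -> (time slot, enrolled students); room -> capacity.
CLASSES = {"C1": (1, 60), "C2": (1, 35), "C3": (2, 80), "C4": (2, 30)}
ROOMS = {"R1": 40, "R2": 70, "R3": 90}

def penalty(assign):
    """Seat shortages plus a heavy cost for two classes sharing a room and slot."""
    cost, used = 0, {}
    for cls, room in assign.items():
        slot, demand = CLASSES[cls]
        cost += max(0, demand - ROOMS[room])   # students without a seat
        if (room, slot) in used:
            cost += 1000                        # clash: same room, same time slot
        used[(room, slot)] = cls
    return cost

def neighbor(assign):
    """Neighborhood move: swap two classes' rooms, or relocate one class."""
    new = dict(assign)
    if random.random() < 0.5:
        a, b = random.sample(list(new), 2)
        new[a], new[b] = new[b], new[a]
    else:
        cls = random.choice(list(new))
        new[cls] = random.choice(list(ROOMS))
    return new

def local_search(iterations=2000, seed=0):
    random.seed(seed)
    current = {cls: random.choice(list(ROOMS)) for cls in CLASSES}
    best, best_cost = current, penalty(current)
    for _ in range(iterations):
        cand = neighbor(current)
        if penalty(cand) <= penalty(current):   # accept non-worsening moves
            current = cand
            if penalty(current) < best_cost:
                best, best_cost = current, penalty(current)
    return best, best_cost

print(local_search())
```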
APA, Harvard, Vancouver, ISO, and other styles
49

Gupta, Rikin. "Incorporating Flight Dynamics and Control Criteria in Aircraft Design Optimization." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104967.

Full text
Abstract:
The NASA Performance Adaptive Aeroelastic Wing (PAAW) project goals include significant reductions in fuel burn, emissions, and noise via efficient aeroelastic design and improvements in propulsion systems. As modern transport airplane designs become increasingly lightweight and incorporate high aspect-ratio wings, aeroservoelastic effects gain prominence in modeling and design considerations. As a result, the influence of the flight dynamics and controls on the optimal structural and aerodynamic design needs to be captured in the design process. There is an increasing interest in more integrated aircraft multidisciplinary design optimization (MDAO) processes that can bring flight control design into the early stage of an aircraft design cycle. So, in this thesis different flight dynamics modeling methodologies are presented that can be integrated within the MDAO framework. MDAO studies are conducted to maximize the controllability and observability of a UAV type aircraft using curvilinear SpaRibs and straight spars and ribs as the internal structural layout. The impulse residues and controllability Gramians are used as surrogates for the control objectives in the MDAO to maximize the controllability and observability of the aircraft. The optimal control designs are compared with those obtained using weight minimization as the design objective. It is found that using the aforementioned control objectives, the resulting aircraft design is more controllable and can be used to expand the flight envelope by up to 50% as compared to the weight minimized design.
Doctor of Philosophy
Over the last two decades, several attempts have been made towards multidisciplinary design analysis and optimization (MDAO) of flexible wings by integrating flight control laws in the wing design so that the aircraft will have sufficient control authority across different flying conditions. However, most of the studies have been restricted to the wing design only using a predefined control architecture approach, which would be very difficult to implement at the conceptual design stage. There is a need for an approach that would be faster and more practical. Including control surface and control law designs at the conceptual design stage is becoming increasingly important, due to the complexity of both the aircraft control laws and that of the actuation and sensing, and the enhanced wing flexibility of future transport aircraft. A key question that arises is, can one design an aircraft that is more controllable and observable? So, in this thesis, a more fundamental approach, in which the internal structural layout of the aircraft is optimized to design an aircraft that is more controllable, is presented and implemented. The approach uses the fundamentals of linear systems theory for maximizing the controllability and observability of the aircraft using an MDAO framework.
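The controllability Gramians used as optimization surrogates in this work can be illustrated on a generic discrete-time linear model; the NumPy/SciPy sketch below is not the aeroservoelastic model or MDAO code from the dissertation, and the system matrices are invented for demonstration.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def controllability_gramian(A, B):
    """Discrete-time controllability Gramian W solving A W A^T - W + B B^T = 0
    (valid when A is Schur stable)."""
    return solve_discrete_lyapunov(A, B @ B.T)

def controllability_metrics(A, B):
    """Scalar surrogates of control authority: trace and minimum eigenvalue of W.
    Other choices (e.g., log-determinant) are also common in the literature."""
    W = controllability_gramian(A, B)
    return float(np.trace(W)), float(np.linalg.eigvalsh(W).min())

# Invented 3-state, 1-input example standing in for a reduced-order structural model.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])
B = np.array([[0.0], [0.5], [1.0]])

trace_W, min_eig_W = controllability_metrics(A, B)
print("trace(W) =", trace_W, "  min eig(W) =", min_eig_W)
```

In an MDAO loop, such a metric would be recomputed for each candidate structural layout and either maximized directly or used as a constraint alongside weight and aeroelastic objectives.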
APA, Harvard, Vancouver, ISO, and other styles
50

Sánchez, Corrales Helem Sabina. "Multi-objective optimization and multicriteria design of PI/PID controllers." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/393990.

Full text
Abstract:
Nowadays, proportional-integral and proportional-integral-derivative controllers are the most widely used control algorithms in industry. Fractional-order controllers have also received recent attention from both the research community and industry. For this reason, several of the scenarios in this thesis involve tuning these controllers through a multi-objective optimization design procedure. This procedure focuses on providing a reasonable trade-off among conflicting objectives and gives the designer the possibility of comparing the design objectives. The thesis is divided into three parts. The first part presents the fundamentals of control systems, showing and discussing the trade-offs between performance/robustness and servo/regulation operation modes, and provides background on multi-objective optimization. The second part introduces the Nash solution as a multi-criteria decision-making technique for selecting, from the Pareto front, the point that represents the best compromise among the design objectives; this semi-automatic selection from the Pareto front approximation offers a good trade-off between the goal objectives. A multi-stage approach for the multi-objective optimization process is then presented, involving a deterministic algorithm and an evolutionary algorithm that complement each other despite their individual drawbacks and improve the overall optimization in terms of convergence and accuracy. Furthermore, a reliability-based objective is introduced into the multi-objective problem to measure performance degradation; given the uncertainties in real-world design and manufacturing, this objective gives the designer an additional perspective. To validate the approach, two case studies are considered: the boiler control benchmark for controller tuning and a nonlinear Peltier cell. Finally, the third part of the thesis presents the contributions to controller tuning. First, a set of tuning rules based on the Nash solution for a proportional-integral (PI) controller is devised, considering the robustness/performance trade-off. Second, tuning rules for a proportional-integral-derivative (PID) controller are presented, considering both the performance/robustness trade-off and the servo/regulation operation modes. Finally, the fractional-order proportional-integral-derivative controller is tuned using the multi-stage approach for the multi-objective optimization process.
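To illustrate the multi-criteria selection step described above, the following hypothetical sketch picks a compromise point from a discrete Pareto-front approximation of two minimization objectives by maximizing the product of improvements over the anti-ideal point, which is one Nash-bargaining-style reading of such a rule; the front values are invented and the dissertation's exact formulation may differ.

```python
def nash_like_selection(front):
    """Pick a compromise from a list of (f1, f2) points, both objectives minimized.

    The 'disagreement' point is taken as the anti-ideal (worst value of each
    objective over the front); the selected point maximizes the product of
    improvements relative to it. Illustrative reading of a Nash-type selection.
    """
    d1 = max(f1 for f1, _ in front)
    d2 = max(f2 for _, f2 in front)

    def gain(point):
        f1, f2 = point
        return (d1 - f1) * (d2 - f2)

    return max(front, key=gain)

# Invented Pareto-front approximation: (performance index, robustness index), both minimized.
pareto_front = [(0.20, 2.10), (0.28, 1.70), (0.40, 1.45), (0.60, 1.30), (0.95, 1.22)]
print("compromise point:", nash_like_selection(pareto_front))
```

With these values the rule selects an interior point of the front, which matches the intuition that extreme points sacrifice one objective entirely.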
APA, Harvard, Vancouver, ISO, and other styles