
Dissertations on the topic "Optimization variable"

Browse the top 50 dissertations for research on the topic "Optimization variable".


1

Pelamatti, Julien. "Mixed-variable Bayesian optimization : application to aerospace system design." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I003.

Abstract:
Within the framework of complex system design, such as aircraft and launch vehicles, the presence of computationally intensive objective and/or constraint functions (e.g., finite element models and multidisciplinary analyses) coupled with the dependence on discrete and unordered technological design choices results in challenging optimization problems. Furthermore, part of these technological choices is associated with a number of specific continuous and discrete design variables which must be taken into consideration only if specific technological and/or architectural choices are made. As a result, the optimization problem which must be solved in order to determine the optimal system design presents a dynamically varying search space and feasibility domain. The few existing algorithms which allow solving this particular type of problem tend to require a large number of function evaluations in order to converge to the feasible optimum, and are therefore inadequate when dealing with the computationally intensive problems often encountered within the design of complex systems. For this reason, this thesis explores the possibility of performing constrained mixed-variable and variable-size design space optimization by relying on surrogate model-based design optimization performed with the help of Gaussian processes, also known as Bayesian optimization. More specifically, three main axes are discussed. First, the Gaussian process surrogate modeling of mixed continuous/discrete functions and the associated challenges are extensively discussed. A unifying formalism is proposed in order to facilitate the description and comparison of the existing kernels that adapt Gaussian processes to the presence of discrete unordered variables. Furthermore, the actual modeling performance of these various kernels is tested and compared on a set of analytical and design-related benchmarks with different characteristics and parameterizations. In the second part of the thesis, the possibility of extending the mixed continuous/discrete surrogate modeling to a context of Bayesian optimization is discussed. The theoretical feasibility of said extension, in terms of objective/constraint function modeling as well as acquisition function definition and optimization, is shown. Different possible alternatives are considered and described. Finally, the performance of the proposed optimization algorithm, with various kernel parameterizations and different initializations, is tested on a number of analytical and design-related test cases and compared to reference algorithms. In the last part of this manuscript, two alternative ways of adapting the previously discussed mixed continuous/discrete Bayesian optimization algorithms to solve variable-size design space problems (i.e., problems characterized by a dynamically varying design space) are proposed. The first adaptation is based on the parallel optimization of several sub-problems coupled with a computational budget allocation based on the information provided by the surrogate models. The second adaptation, instead, is based on the definition of a kernel allowing the computation of the covariance between samples belonging to partially different search spaces, based on the hierarchical grouping of design variables. Finally, the two alternatives are tested and compared on a set of analytical and design-related benchmarks. Overall, it is shown that the proposed optimization methods converge to the neighborhoods of the optima of the various constrained problems considerably faster than the reference methods, thus representing a promising tool for the design of complex systems.
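As a concrete illustration of the kind of kernel the first axis discusses, below is a minimal sketch of one common construction for mixed inputs: a continuous RBF kernel multiplied by a "compound symmetry" kernel over an unordered categorical variable (1 on matching categories, a constant theta otherwise). The data, category labels and hyperparameter values are illustrative assumptions, not the thesis's own formalism.

```python
import numpy as np

def mixed_kernel(x1, z1, x2, z2, length=1.0, theta=0.5):
    # k((x1,z1),(x2,z2)) = RBF(x1,x2) * k_cat(z1,z2); the categorical factor is
    # 1 if the levels match and theta otherwise (0 <= theta < 1 keeps the
    # category matrix positive definite: eigenvalues 1-theta and 1+(L-1)theta).
    k_cont = np.exp(-0.5 * np.sum((np.asarray(x1) - np.asarray(x2)) ** 2) / length**2)
    return k_cont * (1.0 if z1 == z2 else theta)

def gram(X, Z, noise=1e-8):
    n = len(X)
    K = np.array([[mixed_kernel(X[i], Z[i], X[j], Z[j]) for j in range(n)]
                  for i in range(n)])
    return K + noise * np.eye(n)  # jitter for numerical stability

# GP posterior mean at a new mixed point, given three observations
X = [[0.1, 0.2], [0.4, 0.9], [0.8, 0.3]]   # continuous design variables
Z = ["techno_A", "techno_B", "techno_A"]   # hypothetical discrete technology choice
y = np.array([1.0, 0.3, 0.7])
alpha = np.linalg.solve(gram(X, Z), y)
k_star = np.array([mixed_kernel([0.5, 0.5], "techno_B", X[i], Z[i]) for i in range(3)])
print("posterior mean:", k_star @ alpha)
```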
2

Zebian, Hussam. "Multi-variable optimization of pressurized oxy-coal combustion." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67808.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 81-82).
Simultaneous multi-variable gradient-based optimization with multi-start is performed on a 300 MWe wet-recycling pressurized oxy-coal combustion process with carbon capture and sequestration. The model accounts for realistic component behavior such as heat losses, steam leaks, pressure drops, cycle irreversibilities, and other technological and economical considerations. The optimization study involves 16 variables, three of which are integer valued, and 10 constraints, with the objective of maximizing thermal efficiency. The solution procedure follows active inequality constraints, which are identified by thermodynamics-based analysis to facilitate convergence. Results of the multi-variable optimization are compared to a pressure sensitivity analysis similar to those performed in the literature; the base case of both assessments is a favorable solution found in the literature. Significant cycle performance improvements are obtained compared to this literature design, at a much lower operating pressure and with moderate changes in the other operating variables. The effects of the variables on the cycle performance and on the constraints are analyzed and explained to obtain an increased understanding of the actual behavior of the system. This study reflects the importance of simultaneous multi-variable optimization in revealing the system characteristics and uncovering favorable solutions with higher efficiency than atmospheric operation or than those obtained by single-variable sensitivity analysis.
by Hussam Zebian.
S.M.
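The "simultaneous multi-variable gradient-based optimization with multi-start" pattern can be sketched in a few lines. The toy objective, constraint and bounds below are hypothetical placeholders for the plant model (the real study has 16 variables, 3 of them integer, and 10 constraints; the integer variables would typically be handled by enumeration on top of this loop).

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a plant model: maximize "efficiency" subject to constraints.
def neg_efficiency(v):
    p, r = v  # hypothetical operating pressure and recycle-ratio variables
    return -(0.40 - 0.01 * (p - 8.0) ** 2 - 0.02 * (r - 0.6) ** 2)

cons = [{"type": "ineq", "fun": lambda v: 15.0 - v[0]},   # p <= 15 bar
        {"type": "ineq", "fun": lambda v: v[0] - 1.0}]    # p >= 1 bar

rng = np.random.default_rng(0)
best = None
for _ in range(20):  # multi-start to escape poor local optima
    x0 = rng.uniform([1.0, 0.0], [15.0, 1.0])
    res = minimize(neg_efficiency, x0, method="SLSQP",
                   constraints=cons, bounds=[(1.0, 15.0), (0.0, 1.0)])
    if res.success and (best is None or res.fun < best.fun):
        best = res
print("best efficiency:", -best.fun, "at", best.x)
```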
3

Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.

Abstract:
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a high-dimensional context. They can often be expressed as a solution of regularized empirical risk minimization and generally lead to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "screening rules", proposes to ignore some variables during the optimization process by benefiting from the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure guarantees not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They allow significant gains in computational time thanks to the dimensionality reduction they induce. In addition, they can be easily inserted into iterative algorithms and apply to a large number of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameters. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows that their exact complexity is exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent the aforementioned drawbacks by approximating the curve of the solutions. In this thesis, we revisit the approximation techniques for regularization paths given a predefined tolerance, and we propose an in-depth analysis of their complexity w.r.t. the regularity of the loss functions involved. Hence, we propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameter as well as the level of noise. These concomitant estimates, which appeared in the literature under the names of Scaled Lasso or Square-Root Lasso, provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although presenting important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. We illustrate these difficulties and we propose modifications based on smoothing techniques to increase the stability of these estimators, as well as a faster algorithm to compute them.
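Since the Gap Safe rules are the thesis's central object, a compact sketch of the rule for the Lasso may help. It follows the standard formulation (feasible dual point, duality gap, safe sphere of radius sqrt(2*gap)/lambda, elimination test); the random data are illustrative and this is a schematic rendering, not the thesis code.

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """One Gap Safe screening pass for the Lasso
        min_b 0.5*||y - X b||^2 + lam*||b||_1.
    Returns a boolean mask of features that can be safely discarded."""
    rho = y - X @ beta                                   # residual
    theta = rho / max(lam, np.max(np.abs(X.T @ rho)))    # feasible dual point
    primal = 0.5 * rho @ rho + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    r = np.sqrt(2.0 * gap) / lam                         # safe sphere radius
    scores = np.abs(X.T @ theta) + r * np.linalg.norm(X, axis=0)
    return scores < 1.0                                  # True => beta_j = 0 at optimum

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam = 0.8 * np.max(np.abs(X.T @ y))                      # fairly strong regularization
screened = gap_safe_screen(X, y, np.zeros(200), lam)
print(f"{screened.sum()} of 200 features eliminated before solving")
```

In an iterative solver the same test is simply re-run as the duality gap shrinks, so more and more features are discarded as the solution is approached.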
4

Venezia, Joseph. "VARIABLE RESOLUTION & DIMENSIONAL MAPPING FOR 3D MODEL OPTIMIZATION." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2273.

Abstract:
Three-dimensional computer models, especially geospatial architectural data sets, can be visualized in the same way humans experience the world, providing a realistic, interactive experience. Scene familiarization, architectural analysis, scientific visualization, and many other applications would benefit from finely detailed, high resolution, 3D models. Automated methods to construct these 3D models have traditionally produced data sets that are often low fidelity or inaccurate; otherwise, they are initially highly detailed but very labor and time intensive to construct. Such data sets are often not practical for common real-time usage and are not easily updated. This thesis proposes Variable Resolution & Dimensional Mapping (VRDM), a methodology developed to address some of the limitations of existing approaches to model construction from images. Key components of VRDM are texture palettes, which enable variable and ultra-high resolution images to be easily composited, and texture features, which allow image features to be integrated as imagery or geometry and which can modify the geometric model structure to add detail. These components support a primary VRDM objective of facilitating model refinement with additional data; this can be done until the desired fidelity is achieved, as the practical limits of infinite detail are approached. Texture Levels, the third component, enable real-time interaction with a very detailed model, along with the flexibility of having alternate pixel data for a given area of the model; this is achieved through extra dimensions. Together these techniques have been used to construct models that can contain GBs of imagery data.
M.S.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering MSCpE
5

Robinson, Theresa Dawn 1978. "Surrogate-based optimization using multifidelity models with variable parameterization." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39666.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 131-138).
Engineers are increasingly using high-fidelity models for numerical optimization. However, the computational cost of these models, combined with the large number of objective function and constraint evaluations required by optimization methods, can render such optimization computationally intractable. Surrogate-based optimization (SBO) - optimization using a lower-fidelity model most of the time, with occasional recourse to the high-fidelity model - is a proven method for reducing the cost of optimization. One branch of SBO uses lower-fidelity physics models of the same system as the surrogate. Until now, however, surrogates using a different set of design variables from that of the high-fidelity model have not been available for use in a provably convergent numerical optimization. New methods are herein developed and demonstrated to reduce the computational cost of numerical optimization of variable-parameterization problems, that is, problems for which the low-fidelity model uses a different set of design variables from the high-fidelity model.
Four methods are presented to perform mapping between variable-parameterization spaces, the last three of which are new: space mapping, corrected space mapping, a mapping based on proper orthogonal decomposition (POD), and a hybrid between POD mapping and space mapping. These mapping methods provide links between different models of the same system and have further applications beyond formal optimization strategies. On an unconstrained airfoil design problem, these methods achieved up to 40% savings in high-fidelity function evaluations; on a constrained wing design problem, 76% time savings; and on a bat-flight design problem, 45% time savings. On a large-scale practical aerospace application, such time savings could represent weeks.
by Theresa D. Robinson.
Ph.D.
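A minimal sketch of the trust-region surrogate-based optimization pattern this work builds on: a cheap model is additively corrected so that it matches the expensive model to first order at the current iterate, the standard consistency condition for provable convergence. For brevity the sketch assumes both models share the same design variables, whereas the thesis's contribution is precisely the mapping (space mapping, corrected space mapping, POD) between different parameterizations; both toy models are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Corrected surrogate: s(x) = f_lo(x) + (f_hi(x0) - f_lo(x0))
#                             + (grad_hi(x0) - grad_lo(x0)) . (x - x0),
# which matches f_hi in value and gradient at the trust-region center x0.
f_hi = lambda x: (x[0] - 1.0) ** 2 + 4 * (x[1] + 0.5) ** 2           # "expensive" toy
f_lo = lambda x: 0.8 * (x[0] - 1.3) ** 2 + 3.5 * (x[1] + 0.3) ** 2   # "cheap" toy

def grad(f, x, h=1e-6):  # central finite differences
    x = np.asarray(x, float)
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))])

x0 = np.zeros(2)
for _ in range(5):
    c0 = f_hi(x0) - f_lo(x0)
    g0 = grad(f_hi, x0) - grad(f_lo, x0)
    s = lambda x: f_lo(x) + c0 + g0 @ (np.asarray(x) - x0)
    # trust region of radius 0.5 around x0, imposed here via box bounds
    x0 = minimize(s, x0, bounds=[(xi - 0.5, xi + 0.5) for xi in x0]).x
print("approximate high-fidelity optimum:", x0, "f_hi =", f_hi(x0))
```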
6

Golovidov, Oleg. "Variable-Complexity Approximations for Aerodynamic Parameters in Hsct Optimization." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36789.

Abstract:
A procedure for generating and using polynomial approximations to the range or to the cruise drag components in terms of 29 design variables for the High Speed Civil Transport (HSCT) configuration and performance design is presented. Response surface model methodology is used to fit quadratic polynomials to data gathered from a series of numerical analyses of different HSCT designs. Several techniques are employed to minimize the number of required analyses and to maintain accuracy. Approximate analysis techniques are used to find regions of the design space where reasonable HSCT designs could occur, and response surface models are built using higher fidelity analysis results of the designs in this "reasonable" region. Regression analysis and analysis of variance are then used to reduce the number of polynomial terms in the response surface model functions. Optimizations of the HSCT are then carried out both with and without the response surface models, and the effect of the use of the response surface models is discussed. Results of the work showed that considerable reduction of the amount of numerical noise in optimization is achieved with response surface models and the convergence rate was slightly improved. Careful attention was required to keep the accuracy of the models at an acceptable level.
Master of Science
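The response-surface machinery described here reduces to fitting a quadratic polynomial to sampled analyses by least squares and then pruning negligible terms. A toy two-variable sketch under assumed data (the thesis works with 29 variables and uses regression analysis and ANOVA for the term reduction):

```python
import numpy as np

# Fit a quadratic response surface  d(x) ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x1*x2 + b5*x2^2
# to noisy sampled analyses; the smooth polynomial then replaces the noisy
# analysis inside the optimizer, which is the noise-filtering effect reported.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(40, 2))                   # scaled design variables
drag = (5 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
        + 0.8 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(40))  # "noisy analysis"

def quad_basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x1, x1 * x2, x2 * x2])

coef, *_ = np.linalg.lstsq(quad_basis(X), drag, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```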
7

Thomas, George L. "Biogeography-Based Optimization of a Variable Camshaft Timing System." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1419775790.

8

Lott, Eric M. "A Design and Optimization Methodology for Multi-Variable Systems." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440274138.

9

Fouquet, Yoann. "Optimization methods for network design under variable link capacities." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2233/document.

Abstract:
This thesis summarizes the work we have done on the optimization of resilient communication networks. More specifically, the main goal is to propose appropriate recovery mechanisms for managing the demand traffic in a network under partial failures, i.e., when some part of the network (one or more links and/or nodes) is operational with reduced capacity. The main criterion in deciding the efficiency of the proposed recovery scheme is the dimensioning cost of the network, while keeping the management cost at reasonable levels. Our main contribution is the design of two restoration strategies named Flow Thinning and Elastic Flow Rerouting. This document is organized in three main parts. In the first part, we present the problematic of the thesis. It includes an introduction to the state-of-the-art protection and rerouting strategies, together with their mathematical models and resolution methods. The second part presents in depth the first protection strategy, named Flow Thinning. This strategy manages partial failures by appropriately decreasing the bandwidth on some flows routed through one of the perturbed links. This implies overdimensioning the network in the nominal state to ensure demand traffic in all failure states. The third and last part deals with the second rerouting strategy, called Elastic Flow Rerouting. This strategy is a bit more complex than the first one because, in a failure state, we need to distinguish demands which are disturbed from those which are not. If a demand is disturbed, it can increase the traffic on some of its paths. If it is not disturbed, it can release bandwidth on paths, on the condition that it remains non-disturbed. All this allows for further reducing the dimensioning cost, but at a higher cost in terms of recovery process management. Note that the dimensioning problems for each strategy are shown to be NP-hard in their general form. The work of the thesis has been published in three journal articles (Fouquet et al. (2015b), Pióro et al. (2015), Shinko et al. (2015)), two invited articles (Fouquet and Nace (2015), Fouquet et al. (2014c)) and eight articles in international conferences (Fouquet et al. (2015a; 2014d;a;b;e), Pióro et al. (2013b;a), Shinko et al. (2013)). Note that Pióro et al. (2013b) received a "Best Paper Award" at the RNDM conference. To conclude, note that this thesis was carried out in the Heudiasyc laboratory of the Université de Technologie de Compiègne (UTC). It was financed by the French Ministry of Higher Education and Research with the support of the Labex MS2T of the UTC.
10

Socha, Krzysztof. "Ant colony optimization for continuous and mixed-variable domains." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210533.

Abstract:
In this work, we present a way to extend Ant Colony Optimization (ACO), so that it can be applied to both continuous and mixed-variable optimization problems. We demonstrate, first, how ACO may be extended to continuous domains. We describe the algorithm proposed, discuss the different design decisions made, and we position it among other metaheuristics.

Following this, we present the results of numerous simulations and tests. We compare the results obtained by the proposed algorithm on typical benchmark problems with those obtained by other methods used for tackling continuous optimization problems in the literature. Finally, we investigate how our algorithm performs on a real-world problem coming from the medical field: we use it to train a neural network for pattern classification in disease recognition.

Following an extensive analysis of the performance of ACO extended to continuous domains, we present how it may be further adapted to handle both continuous and discrete variables simultaneously. We thus introduce the first native mixed-variable version of an ACO algorithm. Then, we analyze and compare the performance of both continuous and mixed-variable ACO algorithms on different benchmark problems from the literature. Through the research performed, we gain some insight into the relationship between the formulation of mixed-variable problems and the best methods to tackle them. Furthermore, we demonstrate that the performance of ACO on various real-world mixed-variable optimization problems coming from the mechanical engineering field is comparable to the state of the art.
Doctorat en Sciences de l'ingénieur
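The continuous extension described here is known in the literature as ACO_R: the discrete pheromone table is replaced by an archive of good solutions, and new candidates are sampled from Gaussians centred on archived solutions. The sketch below follows that published scheme with illustrative parameter values; it is a schematic rendering, not the dissertation's implementation.

```python
import numpy as np

def acor(f, dim, bounds, k=10, m=2, q=0.1, xi=0.85, iters=200, seed=0):
    """Minimal ACO_R sketch: a solution archive plays the role of the pheromone
    model; each ant picks an archived solution and samples around it."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    archive = rng.uniform(lo, hi, size=(k, dim))
    fitness = np.array([f(s) for s in archive])
    ranks = np.arange(1, k + 1)
    w = np.exp(-(ranks - 1) ** 2 / (2 * (q * k) ** 2))   # rank-based weights
    w /= w.sum()
    for _ in range(iters):
        order = np.argsort(fitness)                      # best first
        archive, fitness = archive[order], fitness[order]
        for _ in range(m):                               # m ants per iteration
            l = rng.choice(k, p=w)                       # pick a guiding solution
            sigma = xi * np.abs(archive - archive[l]).sum(axis=0) / (k - 1)
            x = np.clip(rng.normal(archive[l], sigma), lo, hi)
            fx = f(x)
            j = np.argmax(fitness)                       # replace current worst
            if fx < fitness[j]:
                archive[j], fitness[j] = x, fx
    return archive[np.argmin(fitness)]

sphere = lambda x: np.sum(x ** 2)
print(acor(sphere, dim=5, bounds=(-5.0, 5.0)))
```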

11

Huang, Deng. "Experimental planning and sequential kriging optimization using variable fidelity data." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1110297243.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 120 p.; also includes graphics (some col.). Includes bibliographical references (p. 114-120). Available online via OhioLINK's ETD Center
12

Moore, Roxanne Adele. "Variable fidelity modeling as applied to trajectory optimization for a hydraulic backhoe." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28146.

Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Paredis, Chris; Committee Member: Bras, Bert; Committee Member: Burkhart, Roger; Committee Member: Choi, Seung-Kyum.
13

Cho, Hyunkyoo. "Efficient variable screening method and confidence-based method for reliability-based design optimization." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/4594.

Abstract:
The objectives of this study are (1) to develop an efficient variable screening method for reliability-based design optimization (RBDO) and (2) to develop a new RBDO method incorporating the confidence level for limited input data problems. The current research effort involves: (1) development of a partial output variance concept for variable screening; (2) development of an effective variable screening sequence; (3) development of an estimation method for the confidence level of a reliability output; and (4) development of a design sensitivity method for the confidence level. In the RBDO process, surrogate models are frequently used to reduce the number of simulations because analysis of a simulation model takes a great deal of computational time. On the other hand, to obtain accurate surrogate models, we have to limit the dimension of the RBDO problem and thus mitigate the curse of dimensionality. Therefore, it is desirable to develop an efficient and effective variable screening method for reducing the dimension of the RBDO problem. In this study, it is found that output variance is critical for identifying important variables in the RBDO process. A partial output variance, which is an efficient approximation method based on the univariate dimension reduction method (DRM), is proposed to calculate output variance efficiently. For variable screening, the variables that have larger partial output variances are selected as important variables. To determine important variables, hypothesis testing is used so that possible errors are contained at a user-specified error level. Also, an appropriate number of samples is proposed for calculating the partial output variance. Moreover, a quadratic interpolation method is studied in detail to calculate output variance efficiently. Using numerical examples, the performance of the proposed variable screening method is verified; it is shown that the proposed method finds important variables efficiently and effectively. Reliability analysis and RBDO require an exact input probabilistic model to obtain an accurate reliability output and RBDO optimum design. However, often only limited input data are available to generate the input probabilistic model in practical engineering problems. The insufficient input data induce uncertainty in the input probabilistic model, and this uncertainty forces the RBDO optimum to lose its confidence level. Therefore, it is necessary to treat the reliability output, which is defined as the probability of failure, as following a probability distribution. The probability of the reliability output is obtained with consecutive conditional probabilities of input distribution type and parameters using the Bayesian approach. The approximate conditional probabilities are obtained under reasonable assumptions, and Monte Carlo simulation is applied to practically calculate the probability of the reliability output. A confidence-based RBDO (C-RBDO) problem is formulated using the derived probability of the reliability output. In the C-RBDO formulation, the probabilistic constraint is modified to include both the target reliability output and the target confidence level. Finally, the design sensitivity of the confidence level, which is the new probabilistic constraint, is derived to support an efficient optimization process. Using numerical examples, the accuracy of the developed design sensitivity is verified and it is confirmed that C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
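The partial output variance idea can be sketched compactly: vary one input at a time over quadrature points of its distribution while holding the others at their means, and score each variable by the variance of the resulting univariate slice. The function, distributions and quadrature order below are illustrative assumptions, not the thesis code.

```python
import numpy as np

def partial_output_variance(g, mu, sigma, n_quad=5):
    """Per-variable output variance estimate in the spirit of univariate DRM:
    Gauss-Hermite points along one input, all other inputs fixed at their means.
    Variables with small contributions are screened out of the RBDO problem."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)  # E[.] w.r.t. N(0,1)
    weights = weights / weights.sum()
    contrib = []
    for i in range(len(mu)):
        x = np.tile(mu, (n_quad, 1))
        x[:, i] = mu[i] + sigma[i] * nodes
        gi = np.array([g(row) for row in x])
        mean_i = weights @ gi
        contrib.append(weights @ (gi - mean_i) ** 2)  # variance of the 1-D slice
    return np.array(contrib)

# Example: x2 matters far more than x1 or x3 for this performance function.
g = lambda x: x[0] + 5.0 * x[1] ** 2 + 0.1 * np.sin(x[2])
v = partial_output_variance(g, mu=np.zeros(3), sigma=np.ones(3))
print("partial variances:", np.round(v, 4))  # largest entry flags x2 as important
```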
14

Smith, Michael Henry. "Vehicle powertrain modeling and ratio optimization for a continuously variable transmission." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17801.

15

Roth, Ronald B. "An experimental investigation and optimization of a variable reluctance spherical motor." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/18913.

16

Le, Thanh Tam [Verfasser]. "Set optimization with respect to variable domination structures / Thanh Tam Le." Halle, 2018. http://d-nb.info/1172288275/34.

17

Alias, Abbas Younis. "New combined Conjugate Gradient and Variable Metric methods for unconstrained optimization." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329233.

18

Takemiya, Tetsushi. "Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26515.

Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri; Committee Member: Alley, Nicholas; Committee Member: Lakshmi, Sankar; Committee Member: Sriram, Rallabhandi; Committee Member: Stephen, Ruffin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
19

Duffield, Michael Luke. "Variable Fidelity Optimization with Hardware-in-the-Loop for Flapping Flight." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3731.

Abstract:
Hardware-in-the-loop (HIL) modeling is a powerful way of modeling complicated systems. However, some hardware is expensive to use in terms of time or mechanical wear. In cases like these, optimizing using the hardware can be prohibitively expensive because of the number of calls to the hardware that are needed. Variable fidelity optimization can help overcome these problems. Variable fidelity optimization uses less expensive surrogates to optimize an expensive system while calling it fewer times. The surrogates are usually created by performing a design of experiments (DOE) on the expensive model and fitting a surface to the results. However, some systems are too expensive even to create a surrogate from. One such case is that of a flapping flight model. In this thesis, a technique for variable fidelity optimization with HIL has been created that optimizes a system while calling it as few times as possible. This technique is referred to as an intelligent DOE. The intelligent DOE was tested using simple models of various dimensions. It was then used to find a flapping wing trajectory that maximizes lift. Through testing, the intelligent DOE was shown to be able to optimize expensive systems with fewer calls than traditional variable fidelity optimization would have needed; savings as high as 97% were recorded. It was noted that as the number of design variables increased, the intelligent DOE became more effective by comparison, because the number of calls needed by a traditional DOE-based variable fidelity optimization increased faster than linearly, whereas the number of hardware calls for the intelligent DOE increased linearly.
20

Good, Matthew G. "Development of a Variable Camber Compliant Aircraft Tail using Structural Optimization." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/33976.

Abstract:
The objectives of the research presented in this thesis are the development of a seven degree-of-freedom morphing airplane and the design and integration of a variable camber compliant tail. The morphing airplane was designed and manufactured to study the benefits of large planform changes and flight control morphing. Morphing capabilities of each wing consist of 8 in. of wing extension and contraction, 40° of wing sweep and ±20.25° of outboard wing twist, in addition to 6 in. of tail extension and contraction. Initial wind-tunnel tests proved that, for a large range of lift coefficients, the optimal airplane configuration changes to minimize the drag. Another portion of this research deals with the development of a structural optimization program to design a variable camber compliant tail. The program integrates ANSYS, aerodynamic thin airfoil theory and the Method of Moving Asymptotes to optimize the shape of an airfoil tail for maximum trailing edge deflection. An objective function is formulated to maximize the trailing edge tip deflection subject to stress constraints. The optimal structure needs to be flexible to maximize the tip deflection, but stiff enough to minimize the deflection of the tip due to aerodynamic loading. The structural optimization program produced a compliant tail mechanism that can deflect the trailing edge tip by ±4.27° with a single actuator.
Master of Science
21

Lasseigne, Alexis. "Optimization of variable-thickness composite structures. Application to a CROR blade." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEM006/document.

Abstract:
This thesis deals with the optimal design of variable-thickness laminated composite structures. The stacking variables define a combinatorial optimization problem and large decision spaces which are potentially multimodal. Stochastic optimization algorithms allow solving this type of problem and taking advantage of the performance and the anisotropic nature of unidirectional composite plies to lighten laminated composite structures. The purpose of this study is twofold: (i) developing an optimization algorithm dedicated to variable-thickness laminated composites and (ii) assessing the potential of laminated composites in influencing the aerodynamic performance of a composite CROR blade. Firstly, an evolutionary algorithm is specialized to optimize layup tables and handle a set of design guidelines which is representative of industrial practices. For this purpose, a specific encoding of the solutions is suggested and specialized variation operators are developed. Secondly, the algorithm is enriched with a guiding technique based on the exploitation of an auxiliary space, in order to improve its efficiency and to include further composites-related knowledge in the resolution of the problem. Finally, the method is applied to the design of a reduced-scale composite CROR blade intended for wind-tunnel testing. Beforehand, iterative processes are implemented to estimate the shape of the non-operating blade and the stress state within the operating blade.
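To make the encoding-plus-guidelines idea concrete, here is a minimal sketch of a stacking-sequence representation with a reject-style repair for two classic industrial rules (at most four contiguous plies at the same angle, balanced +/-45 counts). The rule set, ply count and operators are illustrative; the thesis's layup tables, which also encode thickness variation, are richer than this.

```python
import random

ANGLES = [0, 45, -45, 90]  # conventional ply orientations

def random_stack(n_plies):
    return [random.choice(ANGLES) for _ in range(n_plies)]

def violates_guidelines(stack):
    # Rule 1: no more than 4 contiguous plies at the same angle.
    run = 1
    for a, b in zip(stack, stack[1:]):
        run = run + 1 if a == b else 1
        if run > 4:
            return True
    # Rule 2: +45 and -45 plies must be balanced.
    return stack.count(45) != stack.count(-45)

def mutate(stack, rate=0.1):
    child = [random.choice(ANGLES) if random.random() < rate else a for a in stack]
    return child if not violates_guidelines(child) else stack  # reject infeasible child

random.seed(3)
pop = [s for s in (random_stack(16) for _ in range(500))
       if not violates_guidelines(s)]
print(f"{len(pop)} feasible stacking sequences kept; example: {pop[0]}")
```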
22

Burgee, Susan L. "A coarse-grained variable-complexity MDO paradigm for HSCT design." Thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-08142009-040544/.

23

Guvey, Serkan. "Dynamic Simulation And Performance Optimization Of A Car With Continuously Variable Transmission." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/2/1095322/index.pdf.

Abstract:
The continuously variable transmission (CVT), which is used in some of the vehicles on the market today, presents the possibility of decoupling the engine speed and the vehicle speed. In this way, it is possible to operate the engine at its maximum-efficiency or maximum-performance point and fix it at that operating point without losing vehicle speed. Instead of using gears, which are the main transmission elements of a conventional transmission, a CVT uses two pulleys and a belt. By changing the pulley diameters, a continuously variable transmission ratio is obtained. Besides all its advantages, it has some significant drawbacks, such as low efficiency, limited torque transmission ability and limited speed range. With developing technology, however, new solutions are being developed to eliminate these drawbacks. In this study, simulation models for the performance and fuel consumption of different types and arrangements of continuously variable transmission (CVT) systems are developed. Vehicles equipped with two different arrangements of CVT and with an automatic transmission are modelled using Matlab's simulation toolbox Simulink. By defining the operating points required for better acceleration performance and fuel consumption, and operating the engine at these points, the performance is optimized. These transmissions are compared with each other according to their '0-100 kph' acceleration performance, maximum speed, time required to travel 1000 m, and fuel consumption over the European driving cycles ECE and EUDC. These comparisons show that CVT systems are superior to the automatic transmission in terms of acceleration and fuel consumption. CVTs also provide smoother driving, as they eliminate the jerks at gear-shifting points.
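The decoupling referred to here comes down to choosing the pulley ratio that pins the engine at a chosen operating point regardless of vehicle speed, within the CVT's ratio limits. All numbers below are illustrative assumptions.

```python
import math

def cvt_ratio(vehicle_speed_kph, engine_rpm_target=4500.0,
              wheel_radius_m=0.3, final_drive=4.0,
              ratio_min=0.5, ratio_max=2.5):
    # Pick the CVT ratio that keeps the engine at its target speed; clip to
    # the pulley limits (at low vehicle speed the target becomes unreachable).
    v = vehicle_speed_kph / 3.6                               # m/s
    wheel_rpm = v / (2 * math.pi * wheel_radius_m) * 60.0
    ratio = engine_rpm_target / (wheel_rpm * final_drive)
    return min(max(ratio, ratio_min), ratio_max)

for v in (30, 60, 90, 120):
    print(v, "km/h -> CVT ratio", round(cvt_ratio(v), 2))
```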
24

Hutchison, Matthew Gerry. "Multidisciplinary optimization of high-speed civil transport configurations using variable-complexity modeling." Diss., This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-06062008-165715/.

25

Arabnejad, Sajad. "Multiscale mechanics and multiobjective optimization of cellular hip implants with variable stiffness." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119630.

Abstract:
Bone resorption and bone-implant interface instability are two bottlenecks of current orthopaedic hip implant designs. Bone resorption is often triggered by mechanical bio-incompatibility of the implant with the surrounding bone. It has serious clinical consequences in both primary and revision surgery of hip replacements. After primary surgery, bone resorption can cause periprosthetic fractures, leading to implant loosening. For the revision surgery, the loss of bone stock compromises the ability of bone to adequately fix the implant. Interface instability, on the other hand, occurs as a result of excessive micromotion and stress at the bone-implant interface, which prevents implant fixation. As a result, the implant fails, and revision surgery is required. Many studies have been performed to design an implant minimizing both bone resorption and interface instability. However, the results have not been effective, since minimizing one objective would penalize the other. As a result, among all designs available in the market, there is no implant that can concurrently minimize these two conflicting objectives. The goal of this thesis is to design an orthopaedic hip replacement implant that can simultaneously minimize bone resorption and implant instability. We propose a novel concept of a variable stiffness implant that is implemented through the use of graded lattice materials. A design methodology based on multiscale mechanics and multiobjective optimization is developed for the analysis and design of a fully porous implant with a lattice microstructure. The mechanical properties of the implant are locally optimized to minimize bone resorption and interface instability. Asymptotic homogenization (AH) theory is used to capture stress distribution for failure analysis throughout the implant and its lattice microstructure. For the implant lattice microstructure, a library of 2D cell topologies is developed, and their effective mechanical properties, including elastic moduli and yield strength, are computed using AH. Since orthopaedic hip implants are generally expected to support dynamic forces generated by human activities, they should also be designed against fatigue fracture to avoid progressive damage. A methodology for fatigue design of cellular materials is proposed and applied to a two-dimensional implant, with Kagome and square cell topologies. A lattice implant with an optimum distribution of material properties is proved to considerably reduce the amount of bone resorption and interface shear stress compared to a fully dense titanium implant. The manufacturability of the lattice implants is demonstrated by fabricating a set of 2D proof-of-concept prototypes using Electron Beam Melting (EBM) with Ti6Al4V powder. Optical microscopy is used to measure the morphological parameters of the cellular microstructure. The numerical analysis and the manufacturability tests performed in this preliminary study suggest that the developed methodology can be used for the design and manufacturing of novel orthopaedic implants that can significantly contribute to reducing some clinical consequences of current implants.
26

Onety, Renata da Encarnacao. "Multiobjective optimization of MPLS-IP networks with a variable neighborhood genetic algorithm." Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/BUBD-9HTKE7.

Abstract:
The demand for different levels of Quality of Service (QoS) in IP networks is growing, mainly to serve multimedia applications. However, not only do the quality indicators have conflicting features, but the problem of determining routes subject to more than two QoS constraints is NP-complete (Nondeterministic Polynomial Time Complete). This work proposes an algorithm to optimize multiple Quality of Service indices of Multi Protocol Label Switching (MPLS) IP networks. The approach aims at minimizing the network cost and the number of simultaneous request rejections, as well as performing load balancing among routes. The proposed algorithm, the Variable Neighborhood Multiobjective Genetic Algorithm (VN-MGA), is a genetic algorithm based on the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), with the particular feature that different parts of a solution are encoded differently, at Level 1 and Level 2. At Level 1, the first part of the solution is encoded by taking as decision variables the arcs that form the routes to be followed by each request, while the second part of the solution is kept constant; at Level 2, the second part of the solution is encoded by taking the sequence of requests as decision variables, while the first part is kept constant. Both representations are needed to obtain good results: Pareto fronts obtained by VN-MGA dominate the fronts obtained by fixed-neighborhood encoding schemes. Besides the potential benefits of applying the proposed approach to packet routing optimization in MPLS networks, this work raises the theoretical issue of the systematic application of variable encodings, which allow variable neighborhood searches, as operators inside general evolutionary computation algorithms.
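VN-MGA builds on NSGA-II, whose core ingredient is fast non-dominated sorting of the population into Pareto fronts. A compact sketch for minimization (the three objectives in the example stand in for network cost, rejected requests and load imbalance; this is a generic rendering, not the dissertation's code):

```python
import numpy as np

def non_dominated_sort(F):
    """Rank solutions into Pareto fronts (minimization), NSGA-II style.
    F has one row of objective values per solution."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    dominated_by = [[] for _ in range(n)]   # dominated_by[i] = solutions i dominates
    dom_count = np.zeros(n, dtype=int)      # how many solutions dominate j
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

print(non_dominated_sort([[1, 5, 2], [2, 3, 3], [3, 1, 1], [2, 6, 3]]))
# -> [[0, 1, 2], [3]]: the first three solutions are mutually non-dominated
```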
27

Sakai, Tadashi. "A Study of Variable Thrust, Variable Specific Impulse Trajectories for Solar System Exploration." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4904.

Abstract:
A study has been performed to determine the advantages and disadvantages of variable thrust and variable specific impulse (Isp) trajectories for solar system exploration. There have been several numerical research efforts on variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (variable specific impulse, or VSI) engines are superior to constant thrust, constant Isp (constant specific impulse, or CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that the orbits of these planets are circular and coplanar. Hence they still lack generality. This research has been conducted to answer the following questions: - Is a VSI engine always better than a CSI engine or a high thrust engine for any mission to any planet with any time of flight, considering lower propellant mass as the sole criterion? - If a planetary swing-by is used for a VSI trajectory, are the fuel savings of a VSI swing-by trajectory better than those of a CSI swing-by or high thrust swing-by trajectory? To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys. Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high thrust trajectory solutions. Results show that VSI rocket engines reduce fuel requirements for any mission compared to CSI rocket engines. Fuel can be saved by applying swing-by maneuvers for VSI engines, but the effect of swing-bys with VSI engines is smaller than with CSI or high thrust engines.
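For a power-limited thruster, the standard rocket relations tie thrust and specific impulse together through the jet power P = T·c/2, with exhaust velocity c = g0·Isp, so a VSI engine can slide along this trade-off curve at every instant while a CSI engine is pinned to one point. A small illustration of that trade-off (our own sketch, not the dissertation's program):

# Power-limited propulsion: jet power P = 0.5 * T * c, with c = g0 * Isp.
# For a fixed available power P, raising Isp lowers thrust (and vice versa).
G0 = 9.80665  # standard gravity, m/s^2

def thrust_from_isp(power_w, isp_s, efficiency=1.0):
    c = G0 * isp_s                          # exhaust velocity, m/s
    return 2.0 * efficiency * power_w / c   # thrust, N

def mass_flow(power_w, isp_s, efficiency=1.0):
    c = G0 * isp_s
    return 2.0 * efficiency * power_w / c**2  # propellant mass flow, kg/s

# Example: the same 100 kW bus gives half the thrust at twice the Isp.
for isp in (3000.0, 6000.0):
    print(isp, thrust_from_isp(1e5, isp), mass_flow(1e5, isp))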
28

Stanley, Andrew P. J. "Gradient-Based Layout Optimization of Large Wind Farms: Coupled Turbine Design, Variable Reduction, and Fatigue Constraints." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8692.

Abstract:
Wind farm layout optimization can greatly improve wind farm performance. However, past wind farm design has been limited in several ways. Wind farm design usually assumes that all the turbines throughout the farm should be exactly the same. Oftentimes, the location of every turbine is optimized individually, which is computationally expensive. Furthermore, designers fail to consider turbine loads during layout optimization. This dissertation presents four studies which provide partial solutions to these limitations and greatly improve wind farm layout optimization. Two studies explore differing turbine designs in wind farms. In these studies, wind farm layouts are optimized simultaneously with turbine design. We found that for small rotor diameters and closely spaced wind turbines, wind farms with different turbine heights have a 5–10% reduction in cost of energy compared to farms where all turbine heights are the same. Coupled optimization of turbine layout and full turbine design results in a 2–5% reduction in cost of energy compared to optimizing sequentially for wind farms with turbine spacings of 8.5–11 rotor diameters. Wind farms with tighter spacing benefit even more from coupled optimization. Furthermore, we found that heterogeneous turbine design can produce up to an additional 10% cost of energy reduction compared to wind farms with identical turbines throughout the farm, especially when the wind turbines are closely spaced. The third study presents the boundary-grid parameterization method to reduce the computational expense of optimizing wind farms. This parameterization uses only five variables to define the layout of a wind farm with any number of turbines. For a 100-turbine wind farm, we show that optimizing the five variables of the boundary-grid method produces wind farms that perform just as well as farms where the location of each turbine is optimized individually, which requires 200 design variables. The presented method facilitates both gradient-free and gradient-based optimization of large wind farms. The final study presents a model to calculate the fatigue damage caused by partial waking on a wind turbine, which is computationally efficient and can be included in wind farm layout optimization. Compared to high-fidelity simulation data, the model accurately predicts the damage trends of various waking conditions. We also perform a wind farm layout optimization with the presented model, in which we maximize the annual energy production of a wind farm while constraining the damage of each turbine. The results of the optimization show that the turbine damage can be constrained with only a very small sacrifice, less than 1%, in annual energy production.
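A simplified picture of the boundary-grid idea: a few scalars place some turbines along the farm perimeter and the rest on a rotated inner grid. The sketch below assumes a square boundary and illustrative variable names; the dissertation's exact five-variable parameterization differs in detail:

import numpy as np

def boundary_grid_layout(n_boundary, n_inner, side, start, dx, dy, theta):
    # Boundary turbines: equally spaced along the square perimeter, offset by `start`.
    per = 4.0 * side
    s = (start + per * np.arange(n_boundary) / n_boundary) % per
    conds = [s < side, s < 2 * side, s < 3 * side, s >= 3 * side]
    bx = np.select(conds, [s, np.full_like(s, side), 3 * side - s, np.zeros_like(s)])
    by = np.select(conds, [np.zeros_like(s), s - side, np.full_like(s, side), per - s])
    # Inner turbines: a rotated regular grid, clipped to the boundary.
    g = np.arange(-2, int(np.ceil(side / min(dx, dy))) + 2)
    gx, gy = np.meshgrid(g * dx, g * dy)
    c, sn = np.cos(theta), np.sin(theta)
    rx = c * gx - sn * gy + side / 2
    ry = sn * gx + c * gy + side / 2
    keep = (rx > 0) & (rx < side) & (ry > 0) & (ry < side)
    ix, iy = rx[keep][:n_inner], ry[keep][:n_inner]
    return np.concatenate([bx, ix]), np.concatenate([by, iy])

# A whole 100-turbine layout from a handful of scalars:
x, y = boundary_grid_layout(20, 80, side=4000.0, start=500.0,
                            dx=700.0, dy=500.0, theta=0.3)
print(len(x), len(y))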
29

Bae, Sangjune. "Variable screening method using statistical sensitivity analysis in RBDO." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/2817.

Abstract:
A variable screening method is introduced to reduce the computational cost caused by the curse of dimensionality in high-dimensional RBDO problems. The screening method considers the output variance of the constraint functions and uses hypothesis testing to filter out the necessary variables. The method is applicable to implicit functions as well as explicit functions. A suitable number of samples for obtaining consistent test results is calculated. Three examples are demonstrated with the detailed variable screening procedure and RBDO results.
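One plausible reading of the screening step, as a rough sketch: perturb each variable in turn, estimate its contribution to the output variance of a constraint function, and keep the variable if a variance test rejects the "negligible" hypothesis. The statistic, threshold, and names below are our simplification (a chi-square variance test assuming roughly normal outputs), not the thesis' exact procedure:

import numpy as np
from scipy import stats

def screen_variables(g, x0, sigmas, n=200, alpha=0.05, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    keep = []
    for i, s in enumerate(sigmas):
        x = np.tile(x0, (n, 1))
        x[:, i] += rng.normal(0.0, s, size=n)    # perturb variable i only
        gv = np.apply_along_axis(g, 1, x)
        var = gv.var(ddof=1)
        # H0: variance contribution <= tol; reject -> the variable matters.
        stat = (n - 1) * var / tol
        if stat > stats.chi2.ppf(1 - alpha, df=n - 1):
            keep.append(i)
    return keep

# Toy constraint: x0 dominates the output variance, x1 barely matters.
demo = lambda x: x[0] ** 2 + 0.001 * x[1]
print(screen_variables(demo, x0=np.array([1.0, 1.0]), sigmas=[0.1, 0.1]))  # -> [0]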
30

Grema, Alhaji Shehu. "Optimization of Reservoir Waterflooding." Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/9263.

Abstract:
Waterflooding is a common oil recovery technique in which water is pumped into the reservoir for increased productivity. Reservoir states change with time; as such, different injection and production settings are required to steer the process toward optimal operation, which is in fact a dynamic optimization problem. This could be solved through optimal control techniques, which traditionally can only provide an open-loop solution. However, this solution is not appropriate for reservoir production due to the numerous uncertain properties involved. Models that are updated through the current industrial practice of 'history matching' may fail to predict reality correctly, and therefore solutions based on history-matched models may be suboptimal or not optimal at all. Due to its ability to counteract the effects of uncertainties, direct feedback control has recently been proposed for optimal waterflooding operations. In this work, two feedback approaches were developed for waterflooding process optimization. The first approach is based on the principle of receding horizon control (RHC), while the second is a new dynamic optimization method developed from the technique of self-optimizing control (SOC). For the SOC methodology, appropriate controlled variables (CVs), expressed as combinations of measurement histories and manipulated variables, are first derived through regression based on simulation data obtained from a nominal model. The optimal feedback control law is then represented as a linear function of the measurement histories via the CVs obtained. Based on simulation studies, the RHC approach was found to be very sensitive to uncertainties when the nominal model differed significantly from the conceived real reservoir. The SOC methodology, on the other hand, was shown to achieve an operational profit only 2% worse than that of the true optimal control, but 30% better than that of the open-loop optimal control under the same uncertainties. The simplicity of the developed SOC approach, coupled with its robustness in handling uncertainties, demonstrates its potential for real industrial applications.
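The SOC regression step can be pictured as a least-squares fit mapping measurement histories to optimal inputs on nominal-model data, after which the feedback law is just a linear function of new measurements. A minimal sketch under hypothetical data shapes (not the thesis' formulation):

import numpy as np

def fit_soc_law(Y, U_opt):
    # Y: n_samples x n_meas measurement histories from the nominal model,
    # U_opt: n_samples x n_inputs optimal injection/production settings.
    K, *_ = np.linalg.lstsq(Y, U_opt, rcond=None)   # U_opt ~ Y @ K
    return K

def feedback(K, y):
    # Optimal input estimate as a linear function of the measurement history.
    return y @ K

rng = np.random.default_rng(1)
Y = rng.normal(size=(500, 12))
U = Y @ rng.normal(size=(12, 2)) + 0.01 * rng.normal(size=(500, 2))
K = fit_soc_law(Y, U)
print(np.allclose(feedback(K, Y), U, atol=0.1))     # the fitted law tracks U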
31

Passos, Fábio Moreira de. "Modeling of integrated inductors for RF circuit design." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11113.

32

Nataša, Krklec Jerinkić. "Line search methods with variable sample size." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. http://dx.doi.org/10.2298/NS20140117KRKLEC.

Abstract:
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is with respect to the random variable that represents the uncertainty. Therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible. This is the reason why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process, and therefore we can consider this sample average objective function as a deterministic one. However, applying some deterministic method on that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed. Most of them try to determine the optimal dynamics of increasing the sample size. The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable - roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. The detailed description of the new methods is presented in Chapter 4 together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start. Another important characteristic of the methods proposed here is the line search technique which is used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined through the line search rule. In Chapter 4, that rule is supposed to be monotone, i.e. we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even more and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework. Moreover, the conditions for global convergence and the R-linear rate are presented. In Chapter 6, numerical results are presented. The test problems are various - some of them are academic and some of them are real-world problems. The academic problems are there to give us more insight into the behavior of the algorithms. On the other hand, data that comes from the real-world problems serve to test the real applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques. Different implementations of the proposed algorithm are compared to each other and to other sample schemes as well. The second part is mostly devoted to the comparison of the various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using a variable sample size can improve the performance of the algorithms significantly, especially when the nonmonotone line search rules are used. The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, the basics of nonlinear optimization are presented with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters provide a review of the relevant known results, while the rest of the thesis represents the original contribution.
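A toy rendition of the variable-sample-size idea: optimize a sample average over a fixed, pre-generated sample with an Armijo (monotone) line search, and let the working sample size change with the progress of the iterates. The size-update rule below is a crude stand-in for the schemes analyzed in the thesis:

import numpy as np

def vss_minimize(f_i, grad_i, x0, N_max, n0=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=N_max)              # whole sample generated up front
    x, n = np.asarray(x0, float), n0
    for _ in range(iters):
        s = xi[:n]                           # current working subsample
        g = np.mean([grad_i(x, t) for t in s], axis=0)
        fx = np.mean([f_i(x, t) for t in s])
        step = 1.0
        # Armijo backtracking: demand a sufficient decrease of the sample average.
        while np.mean([f_i(x - step * g, t) for t in s]) > fx - 1e-4 * step * (g @ g):
            step *= 0.5
        x = x - step * g
        # Small sample far from a solution, larger sample as the gradient shrinks.
        n = min(N_max, max(n0, int(N_max * min(1.0, 0.1 / (np.linalg.norm(g) + 1e-12)))))
    return x

# Example objective E[(x - xi)^2] with xi ~ N(0,1); the minimizer is near 0.
f  = lambda x, t: float((x[0] - t) ** 2)
df = lambda x, t: np.array([2.0 * (x[0] - t)])
print(vss_minimize(f, df, [3.0], N_max=2000))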
33

Ford, Sean T. "Aerothermodynamic cycle design and optimization method for aircraft engines." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53006.

Abstract:
This thesis addresses the need for an optimization method which can simultaneously optimize and balance an aerothermodynamic cycle. The method developed is able to control the cycle design variables at all operating conditions to meet the performance requirements, while controlling any additional variables which may be used to optimize the cycle and maintaining all operating limits and engine constraints. The additional variables represent degrees of freedom above what is needed for conservation of mass and energy in the engine system. The motivation for such a method is derived from variable cycle engines; however, it is general enough to use with most engine architectures. The method is similar to many optimization algorithms but differs in its application to an aircraft engine by combining the cycle balance and optimization using a Newton-Raphson cycle solver, to efficiently find cycle designs for a wide range of engine architectures with extra degrees of freedom not needed to balance the cycle. Combining the optimization with the cycle solver greatly speeds up the design and optimization process. A detailed process description for implementation of the method is provided, as well as a proof of concept using several analytical test functions. Finally, the method is demonstrated on a separate-flow turbofan model. Limitations and applications of the method are further explored, including application to a multi-design-point methodology.
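Folding the optimization into the cycle balance can be pictured as one Newton-Raphson solve over the balance residuals plus the first-order optimality conditions for the extra degrees of freedom. A toy sketch with an invented one-equation "cycle" (not the thesis' engine model):

import numpy as np

def F(z):
    # Unknowns: state x, extra free variable u, Lagrange multiplier lam.
    x, u, lam = z
    R = x + u - 3.0                  # "cycle balance" residual: x + u = 3
    dLdx = 2.0 * x + lam             # stationarity of f = x^2 + 2u^2 + lam * R
    dLdu = 4.0 * u + lam
    return np.array([R, dLdx, dLdu])

def newton(F, z0, tol=1e-10, it=50):
    z = np.asarray(z0, float)
    for _ in range(it):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        # Finite-difference Jacobian, column by column.
        J = np.array([(F(z + 1e-7 * np.eye(3)[i]) - Fz) / 1e-7
                      for i in range(3)]).T
        z = z - np.linalg.solve(J, Fz)
    return z

# Expect x = 2, u = 1: the balanced cycle that also minimizes f.
print(newton(F, [1.0, 1.0, 0.0]))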
34

Davis, Cleon. "Modeling, Optimization, Monitoring, and Control of Polymer Dielectric Curing by Variable Frequency Microwave Processing." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14550.

Abstract:
The objectives of the proposed research are to model, optimize, and control variable frequency microwave (VFM) curing of polymer dielectrics. With an increasing demand for new materials and improved material properties, there is a corresponding demand for new material processing techniques that lead to comparable or better material properties than conventional methods. Presently, conventional thermal processing steps can take several hours. A new thermal processing technique known as variable frequency microwave curing can perform the same processing steps in minutes without compromising the intrinsic material properties. Current limitations in VFM processing include uncertain process characterization methods, the lack of reliable temperature measuring techniques, and the lack of control over the various processes occurring in the VFM chamber. Therefore, the proposed research addresses these challenges by: (1) development of accurate empirical process models using statistical experimental design and neural networks; (2) recipe synthesis using genetic algorithms; (3) implementation of an acoustic temperature sensor for VFM process monitoring; and (4) implementation of neural control strategies for VFM processing.
35

Moore, Roxanne Adele. "Value-based global optimization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44750.

Abstract:
Computational models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models are often computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the alternatives and accurate determination of the best alternative with reasonable costs incurred, surrogate modeling and variable accuracy modeling are used widely. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model, while variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. As compared to using only very accurate and expensive models, designers can determine the best solutions more efficiently using surrogate and variable accuracy models because obviously poor solutions can be eliminated inexpensively using only the less expensive, less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this thesis, a Value-Based Global Optimization (VGO) algorithm is introduced. The algorithm uses kriging-like surrogate models and a sequential sampling strategy based on Value of Information (VoI) to optimize an objective characterized by multiple analysis models with different accuracies. It builds on two primary research contributions. The first is a novel surrogate modeling method that accommodates data from any number of analysis models with different accuracies and costs. The second contribution is the use of Value of Information (VoI) as a new metric for guiding the sequential sampling process for global optimization. In this manner, the cost of further analysis is explicitly taken into account during the optimization process. Results characterizing the algorithm show that VGO outperforms Efficient Global Optimization (EGO), a similar global optimization algorithm that is considered to be the current state of the art. It is shown that when cost is taken into account in the final utility, VGO achieves a higher utility than EGO with statistical significance. In further experiments, it is shown that VGO can be successfully applied to higher dimensional problems as well as practical engineering design examples.
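The flavor of a Value-of-Information acquisition can be shown by netting expected improvement against the cost of the candidate analysis model, given a surrogate's posterior mean and standard deviation. A much-simplified sketch (the mu/sigma arrays stand in for a kriging posterior; VGO's actual VoI computation is richer than this):

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # Standard EI for minimization, given posterior mean mu and std sigma.
    z = (f_best - mu) / np.maximum(sigma, 1e-12)
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def next_sample(candidates, posteriors, costs, f_best):
    # posteriors[m] = (mu, sigma) arrays over `candidates` for analysis model m;
    # costs[m] is model m's evaluation cost, subtracted as a crude VoI proxy.
    best = None
    for m, (mu, sg) in enumerate(posteriors):
        voi = expected_improvement(mu, sg, f_best) - costs[m]
        i = int(np.argmax(voi))
        if best is None or voi[i] > best[0]:
            best = (voi[i], candidates[i], m)
    return best  # (net value, point to sample, model index to run)

mu = np.array([1.0, 0.5, 0.8]); sg = np.array([0.1, 0.3, 0.2])
cands = np.array([0.0, 0.5, 1.0])
print(next_sample(cands, [(mu, sg), (mu + 0.05, sg * 0.5)],
                  costs=[0.01, 0.2], f_best=0.9))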
36

Clerget, Charles-Henri. "Contributions au contrôle et à l'optimisation dynamique de systèmes à retards variables." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEM053/document.

Abstract:
This Ph.D. work studied the control and optimization of dynamical systems subject to varying time delays. State and control time delays are a well-known problem in control theory, with the potential to decrease performance during transient regimes, or even to jeopardize the closed-loop stability of controllers. Such variable delays play a key role in many applications in the process industries. In a first part, we studied the closed-loop control of a system subject to varying and uncertain metrology delays. We established new results on robust stability under explicit conditions on the controller gain. In a second part, we tackled the problem of the dynamic optimization of systems exhibiting input-dependent delays due to transport phenomena in complex hydraulic architectures. We designed an iterative optimization algorithm and guaranteed its convergence through a detailed analysis.
37

Ollar, Jonathan. "A multidisciplinary design optimisation framework for structural problems with disparate variable dependence." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/24715.

Abstract:
Multidisciplinary design optimisation incorporates several disciplines in one integrated optimisation problem. The benefit of considering all requirements at once rather than in individual optimisations is that synergies between disciplines can be exploited to find superior designs to what would otherwise be possible. The main obstacle for the use of multidisciplinary design optimisation in an industrial setting is the related computational cost, which may become prohibitively large. This work is focused on the development of a multidisciplinary design optimisation framework that extends the existing trust-region based optimisation method known as the mid-range approximation method. The main novel contribution is an approach to solving multidisciplinary design optimisation problems using metamodels built in sub-spaces of the design variable space. Each metamodel is built in the sub-space relevant to the corresponding discipline while the optimisation problem is solved in the full design variable space. Since the metamodels are built in a space of reduced dimensionality, the computational budget for building them can be reduced without compromising their quality. Furthermore, a method for efficiently building kriging metamodels is proposed. This is done by means of a two-step hyper-parameter tuning strategy. The first step is a line search where the set of tuning parameters is treated as a single variable. The solution of the first step is used in the second step, a gradient-based hyper-parameter optimisation where partial derivatives are obtained using the adjoint method. The framework is demonstrated on two examples: a multidisciplinary design optimisation of a thin-walled beam section subject to static and impact requirements, and a multidisciplinary design optimisation of an aircraft wing subject to static and bird strike requirements. In both cases the developed technique demonstrates a reduced computational effort compared to what would typically be achieved by existing methods.
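The two-step tuning idea can be sketched as: (1) a line search treating all correlation parameters as one shared variable, then (2) a gradient-based refinement started from that point. The snippet below uses a crude constant-mean kriging likelihood and numeric gradients in place of the thesis' adjoint derivatives:

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def neg_log_likelihood(theta, X, y):
    # Simplified concentrated likelihood for a constant-mean Gaussian-kernel kriging.
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(-1)
    R = np.exp(-d2) + 1e-10 * np.eye(n)
    L = np.linalg.cholesky(R)
    a = np.linalg.solve(L, y - y.mean())
    s2 = (a @ a) / n
    return n * np.log(s2) + 2.0 * np.log(np.diag(L)).sum()

def tune(X, y):
    k = X.shape[1]
    # Step 1: one log-scale line search with a single shared theta for all dimensions.
    step1 = minimize_scalar(lambda t: neg_log_likelihood(np.full(k, 10.0 ** t), X, y),
                            bounds=(-3, 3), method="bounded")
    theta0 = np.full(k, 10.0 ** step1.x)
    # Step 2: gradient-based refinement of all thetas from the step-1 point.
    step2 = minimize(lambda th: neg_log_likelihood(np.abs(th), X, y), theta0)
    return np.abs(step2.x)

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 2)); y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
print(tune(X, y))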
38

Wood, Derrin W. "A Discourse concerning certain stochastic optimization algorithms and their application to the imaging of cataclysmic variable stars." Pretoria : [s.n.], 2004. http://upetd.up.ac.za/thesis/available/etd-07272005-133840.

39

Piana, Sabine [Verfasser]. "Evolutionary Optimization of the Operation of Pipeless Plants with Variable Transfer Times / Sabine Piana." Aachen : Shaker, 2012. http://d-nb.info/1066196907/34.

40

Soleimani, Behnam [Verfasser], Christiane [Akademischer Betreuer] Tammer, and Akhtar [Akademischer Betreuer] Khan. "Vector optimization problems with variable ordering structures / Behnam Soleimani. Betreuer: Christiane Tammer ; Akhtar Khan." Halle, Saale : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2015. http://d-nb.info/1069814768/34.

41

Vesel, Richard Jr. "Optimization of a wind turbine rotor with variable airfoil shape via a genetic algorithm." Connect to resource, 2009. http://hdl.handle.net/1811/44504.

42

Hamberg, Robin. "Optimization of FreeValve´s fully variable valve control system for a four-cylinder engine." Thesis, KTH, Maskinkonstruktion (Inst.), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215987.

Abstract:
Automotive exhaust legislation is stricter than ever, and manufacturers are spending increasing amounts of resources on reducing fuel consumption and emissions from their vehicles. What most do not try, however, is to change one of the most fundamental concepts of the internal combustion engine. The camshaft is used to control the poppet valves in the gas exchange process and has, despite many variations, stayed more or less the same over the last century. The largest disadvantage of the camshaft is that the optimum gas exchange process varies with engine speed and load, whereas the camshaft does not. Variable valve timing, which many manufacturers have explored, reduces the problem, but in order to get the real advantages a fully variable valve timing system is needed. FreeValve AB is a small Swedish company that develops such a system, based on a Pneumatic Hydraulic Electric Actuator operating each valve. The aim of this Master's thesis was to re-design the FreeValve electric valve control system to suit new and updated requirements for running a four-cylinder engine in a car. The circuit design process of the system is presented along with a literature study to identify design considerations for a future valve control system prototype operating in the demanding environment of an engine compartment. A requirement specification was established, and it was verified that the parts of the system within the scope of the project were successfully designed according to the literature review and specification, with the main task being to deliver the correct current profile to the valve actuators.
43

Park, Jangho. "Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91911.

Abstract:
Work in this dissertation is motivated by reducing the design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples. Function values analyzed by lower-fidelity analysis methods are updated to have accuracy equivalent to that of the highest-fidelity samples, and these calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiments (DOE), a dynamic sampling method is developed that includes filtering and infilling data based on mathematical criteria for the model accuracy. In the sample-infilling process, a multi-objective optimization for exploitation and exploration of the design space is carried out. To indicate the fidelity of the function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator based on the overlapping coefficient is proposed. For multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, a multi-response Kriging model is introduced, which utilizes iterative Maximum Likelihood Estimation (iMLE). Through the iMLE process, a large number of hyper-parameters in multi-response Kriging can be calculated with great accuracy and improved numerical stability. The optimization methods developed in this study are validated with analytic functions and show considerable performance improvement. Consequently, three practical design optimization problems are solved: the NACA0012 airfoil, the multi-element NLR 7301 airfoil, and the all-moving-wingtip control surface of a tailless aircraft. The results are compared with those of existing methods, and it is concluded that these methods guarantee equivalent design accuracy at a significantly reduced computational cost.
Doctor of Philosophy
In recent years, as the cost of aircraft design has grown rapidly and the aviation industry has become increasingly interested in saving design time and cost, accurate design results during the early design stages are particularly important for reducing the overall life-cycle cost. The purpose of this work is to reduce the design cost at the early design stage while achieving design accuracy as high as that of the detailed design. A method of Efficient Global Optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. By using variable-fidelity analysis for the function evaluations, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all others, and thus the design accuracy in the early design stages. Four distinctive design methods are developed and implemented into the standard Efficient Global Optimization (EGO) framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for the selection of the analysis fidelity for infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE). The methods are validated with analytic functions, and an improvement in cost efficiency through the overall design process is observed, while maintaining the design accuracy, in comparison with existing design methods. For practical applications, the methods are applied to the design optimization of an airfoil and of a complete aircraft configuration, respectively. The design results are compared with those of existing methods, and it is found that the methods yield design results with accuracies equivalent to or higher than those of a design based on high-fidelity analysis alone, at a cost reduced by orders of magnitude.
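The calibration of lower-fidelity samples can be illustrated with a simple additive discrepancy model fitted where both fidelities are available; the dissertation's actual calibration and Kriging machinery are more elaborate. A 1-D sketch:

import numpy as np

def calibrate(X_common, y_hi, y_lo_common, X_lo, y_lo, deg=2):
    # Fit the discrepancy d(x) ~ y_hi - y_lo on the shared points
    # (a 1-D polynomial here), then shift every low-fidelity value by d(x).
    coef = np.polyfit(X_common, y_hi - y_lo_common, deg)
    return y_lo + np.polyval(coef, X_lo)     # calibrated low-fidelity values

x_c = np.linspace(0, 1, 6)                   # points evaluated at both fidelities
hi = lambda x: np.sin(8 * x)                 # "expensive" truth
lo = lambda x: np.sin(8 * x) + 0.3 * x - 0.1 # cheap, biased model
x_lo = np.linspace(0, 1, 40)                 # many cheap samples
y_cal = calibrate(x_c, hi(x_c), lo(x_c), x_lo, lo(x_lo))
print(np.max(np.abs(y_cal - hi(x_lo))))      # residual is tiny after calibration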
44

Krumpe, Norman Joseph. "A COMPARISON OF SIMULATION OPTIMIZATION TECHNIQUES IN SOLVING SINGLE-OBJECTIVE, CONSTRAINED, DISCRETE VARIABLE PROBLEMS." Miami University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=miami1129749397.

45

Zhang, Botao. "Design of Variable-Density Structures for Additive Manufacturing Using Gyroid Lattices." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1535374427634743.

46

Lim, Churlzu. "Nondifferentiable Optimization of Lagrangian Dual Formulations for Linear Programs with Recovery of Primal Solutions." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/28144.

Abstract:
This dissertation is concerned with solving large-scale, ill-structured linear programming (LP) problems via Lagrangian dual (LD) reformulations. A principal motivation for this work arises in the context of solving mixed-integer programming (MIP) problems where LP relaxations, sometimes in higher dimensional spaces, are widely used for bounding and cut-generation purposes. Often, such relaxations turn out to be large-sized, ill-conditioned problems for which simplex as well as interior point based methods can tend to be ineffective. In contrast, Lagrangian relaxation or dual formulations, when applied in concert with suitable primal recovery strategies, have the potential for providing quick bounds as well as enabling useful branching mechanisms. However, the objective function of the Lagrangian dual is nondifferentiable, and hence, we cannot apply popular gradient or Hessian-based optimization techniques that are commonly used in differentiable optimization. Moreover, the subgradient methods that are popularly used are typically slow to converge and tend to stall while yet remote from optimality. On the other hand, more advanced methods, such as the bundle method and the space dilation method, involve additional computational and storage requirements that make them impractical for large-scale applications. Furthermore, although we might derive an optimal or near-optimal solution for LD, depending on the dual-adequacy of the methodology used, a primal solution may not be available. While some algorithmically simple primal solution recovery schemes have been developed in theory to accompany Lagrangian dual optimization, their practical performance has been disappointing. Rectifying these inadequacies is a challenging task that constitutes the focal point for this dissertation. Many practical applications dealing with production planning and control, engineering design, and decision-making in different operational settings fall within the purview of this context and stand to gain by advances in this technology. With this motivation, our primary interests in the present research effort are to develop effective nondifferentiable optimization (NDO) methods for solving Lagrangian duals of large-sized linear programs, and to design practical primal solution recovery techniques. This contribution would then facilitate the derivation of quick bounds/cuts and branching mechanisms in the context of branch-and-bound/cut methodologies for solving mixed-integer programming problems. We begin our research by adapting the Volume Algorithm (VA) of Barahona and Anbil (2000) developed at IBM as a direction-finding strategy within the variable target value method (VTVM) of Sherali et al. (2000). This adaptation makes VA resemble a deflected subgradient scheme in contrast with the bundle type interpretation afforded by the modification of VA as proposed by Bahiense et al. (2002). Although VA was originally developed in order to recover a primal optimal solution, we first present an example to demonstrate that it might indeed converge to a nonoptimal primal solution. However, under a suitable condition on the geometric moving average factor, we establish the convergence of the proposed algorithm in the dual space. A detailed computational study reveals that this approach yields a competitive procedure as compared with alternative strategies including the average direction strategy (ADS) of Sherali and Ulular (1989), a modified Polyak-Kelley cutting-plane strategy (PKC) designed by Sherali et al. 
(2001), and the modified Volume Algorithm routines RVA and BVA proposed by Bahiense et al. (2002), all embedded within the same VTVM framework. As far as CPU times are concerned, the VA strategy consumed the least computational effort for most problems to attain a near-optimal objective value. Moreover, the VA, ADS, and PKC strategies revealed considerable savings in CPU effort over a popular commercial linear program solver, CPLEX Version 8.1, when used to derive near-optimal solutions. Next, we consider two variable target value methods, the Level Algorithm of Brännlund (1993) and VTVM, which require no prior knowledge of upper bounds on the optimal objective value while guaranteeing convergence to an optimal solution. We evaluate these two algorithms in conjunction with various direction-finding and step-length strategies such as PS, ADS, VA, and PKC. Furthermore, we generalize the PKC strategy by further modifying the cut's right-hand-side values and additionally performing sequential projections onto some previously generated Polyak-Kelley's cutting-planes. We call this a generalized PKC (GPKC) strategy. Moreover, we point out some latent deficiencies in the two aforementioned variable target value algorithms in regard to their target value update mechanisms, and we suggest modifications in order to alleviate these shortcomings. We further explore an additional local search procedure to strengthen the performance of the algorithms. Noting that no related convergence analyses have been presented, we prove the convergence of the Level Algorithm when used in conjunction with the ADS, VA, or GPKC schemes. We also establish the convergence of VTVM when employing GPKC. Based on our computational study, the modified VTVM algorithm produced the best quality solutions when implemented with the GPKC strategy, where the latter performs sequential projections onto the four most recently generated Polyak-Kelley cutting-planes as available. Also, we demonstrate that the proposed modifications and the local search technique significantly improve the overall performance. Moreover, the VTVM procedure was observed to consistently outperform the Level Algorithm as well as a popular heuristic subgradient method of Held et al. (1974) that is widely used in practice. As far as CPU times are concerned, the modified VTVM procedure in concert with the GPKC strategy revealed the best performance, providing near-optimal solutions in about 27.84% of the effort at an average as that required by CPLEX 8.1 to produce the same quality solutions. We next consider the Lagrangian dual of a bounded-variable equality constrained linear programming problem. We develop two novel approaches for solving this problem, which attempt to circumvent or obviate the nondifferentiability of the objective function. First, noting that the Lagrangian dual function is differentiable almost everywhere, whenever the NDO algorithm encounters a nondifferentiable point, we employ a proposed perturbation technique (PT) in order to detect a differentiable point in the vicinity of the current solution from which a further search can be conducted. In a second approach, called the barrier-Lagrangian dual reformulation (BLR) method, the primal problem is reformulated by constructing a barrier function for the set of bounding constraints such that an optimal solution to the original problem can be recovered by suitably adjusting the barrier parameter. 
However, instead of solving the barrier problem itself, we dualize the equality constraints to formulate a Lagrangian dual function, which is shown to be twice differentiable. Since differentiable pathways are made available via these two proposed techniques, we can advantageously utilize differentiable optimization methods along with popular conjugate gradient schemes. Based on these constructs, we propose an algorithmic procedure that consists of two sequential phases. In Phase I, the PT and BLR methods along with various deflected gradient strategies are utilized, and then, in Phase II, we switch to the modified VTVM algorithm in concert with GPKC (VTVM-GPKC) that revealed the best performance in the previous study. We also designed two target value initialization methods to commence Phase II, based on the output from Phase I. The computational results reveal that Phase I indeed helps to significantly improve the algorithmic performance as compared with implementing VTVM-GPKC alone, even though the latter was run for twice as many iterations as used in the two-phase procedures. Among the implemented procedures, the PT method in concert with certain prescribed deflection and Phase II initialization schemes yielded the best overall quality solutions and CPU time performance, consuming only 3.19% of the effort as that required by CPLEX 8.1 to produce comparable solutions. Moreover, we also tested some ergodic primal recovery strategies with and without employing BLR as a warm-start, and observed that an initial BLR phase can significantly enhance the convergence of such primal recovery schemes. Having observed that the VTVM algorithm requires the fine-tuning of several parameters for different classes of problems in order to improve its performance, our next research investigation focuses on developing a robust variable target value framework that entails the management of only a few parameters. We therefore design a novel algorithm, called the Trust Region Target Value (TRTV) method, in which a trust region is constructed in the dual space, and its center and size are adjusted in a manner that eventually induces a dual optimum to lie at the center of the hypercube trust region. A related convergence analysis has also been conducted for this procedure. We additionally examined a variation of TRTV, where the hyperrectangular trust region is more effectively adjusted for normalizing the effects of the dual variables. In our computational study, we compared the performance of TRTV with that of the VTVM-GPKC procedure. For four direction-finding strategies (PS, VA, ADS, and GPKC), the TRTV algorithm consistently produced better quality solutions than did VTVM-GPKC. The best performance was obtained when TRTV was employed in concert with the PS strategy. Moreover, we observed that the solution quality produced by TRTV was consistently better than that obtained via VTVM, hence lending a greater degree of robustness. As far as computational effort is concerned, the TRTV-PS combination consumed only 4.94% of the CPU time required by CPLEX 8.1 at an average in order to find comparable quality solutions. Therefore, based on our extensive set of test problems, it appears that the TRTV along with the PS strategy is the best and the most robust procedure among those tested. Finally, we explore an outer-linearization (or cutting-plane) method along with a trust region approach for refining available dual solutions and recovering a primal optimum in the process. 
This method enables us to escape from a jamming phenomenon experienced at a non-optimal point, which commonly occurs when applying NDO methods, as well as to refine the available dual solution toward a dual optimum. Furthermore, we can recover a primal optimal solution when the resulting dual solution is close enough to a dual optimum, without generating a potentially excessive set of constraints. In our computational study, we tested two such trust region strategies, the Box-step (BS) method of Marsten et al. (1975) and a new Box Trust Region (BTR) approach, both appended to the foregoing TRTV-PS dual optimization methodology. Furthermore, we also experimented with deleting nonbinding constraints when the number of cuts exceeds a prescribed threshold value. This proposed refinement was able to further improve the solution quality, reducing the near-zero relative optimality gap for TRTV-PS by 20.6-32.8%. The best strategy turned out to be using the BTR method while deleting nonbinding constraints (BTR-D). As far as the recovery of optimal solutions is concerned, the BTR-D scheme resulted in the best measure of primal feasibility, and although it was terminated after it had accumulated only 50 constraints, it revealed a better performance than the ergodic primal recovery scheme of Shor (1985) that was run for 2000 iterations while also assuming knowledge of the optimal objective value in the dual scheme. In closing, we mention that there exist many optimization methods for complex systems such as communication network design, semiconductor manufacturing, and supply chain management, that have been formulated as large-sized mixed-integer programs, but for which deriving even near-optimal solutions has been elusive due to their exorbitant computational requirements. Noting that the computational effort for solving mixed-integer programs via branch-and-bound/cut methods strongly depends on the effectiveness with which the underlying linear programming relaxations can be solved, applying theoretically convergent and practically effective NDO methods in concert with efficient primal recovery procedures to suitable Lagrangian dual reformulations of these relaxations can significantly enhance the overall computational performance of these methods. We therefore hope that the methodologies designed and analyzed in this research effort will have a notable positive impact on analyzing such complex systems.
Ph. D.
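The basic object all of these methods iterate on is the nondifferentiable Lagrangian dual of an LP, maximized by moving along subgradients. A bare-bones sketch with a diminishing step, containing none of the VTVM/VA/ADS/PKC refinements that exist precisely because this plain scheme converges slowly:

import numpy as np

def dual_subgradient(c, A, b, ub, iters=500):
    # Maximize L(u) = min_{0 <= x <= ub} c@x + u@(b - A@x),
    # the Lagrangian dual of: min c@x s.t. A@x = b, 0 <= x <= ub.
    u = np.zeros(len(b))
    best_val, best_u = -np.inf, u.copy()
    for k in range(1, iters + 1):
        red = c - A.T @ u                    # reduced costs
        x = np.where(red < 0, ub, 0.0)       # inner minimizer over the box
        val = c @ x + u @ (b - A @ x)        # dual function value L(u)
        g = b - A @ x                        # a subgradient of L at u
        if val > best_val:
            best_val, best_u = val, u.copy()
        u = u + (1.0 / k) * g                # diminishing step size
    return best_val, best_u

# Tiny example: min -x1 - 2*x2  s.t.  x1 + x2 = 1, 0 <= x <= 1  (optimum -2).
c = np.array([-1.0, -2.0]); A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(dual_subgradient(c, A, b, ub=np.ones(2)))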
47

Xu, Qing. "Flexible Radio Resource Management for Multicast Multimedia Service Provision : Modeling and Optimization." Thesis, Belfort-Montbéliard, 2014. http://www.theses.fr/2014BELF0237/document.

Abstract:
The high throughputs supported by multimedia multicast services (MBMS) and the limited radio resources result in a strong requirement for efficient radio resource management (RRM) in UMTS 3G networks. This PhD thesis proposes to solve the MBMS RRM problem as a combinatorial optimization problem. The work starts with a formal modeling of the problem, named the Flexible Radio Resource Management Model (F2R2M). An in-depth analysis of the problem complexity and the search landscape is carried out on the model. It is shown that, by relaxing the OVSF code constraints, the MBMS RRM problem can be approximated as a Multiple-Choice Knapsack Problem (MCKP). Such work allows us to compute theoretical solution bounds by solving the approximated MCKP. The fitness landscape analysis then shows that the search spaces are rough and reveal several local optima. Based on this analysis, some metaheuristic algorithms are studied to solve the MBMS RRM problem. We first show that a Greedy Local Search (GLS) and a Simulated Annealing (SA) allow us to find better solutions than the existing approaches implemented in the UMTS system; however, the results are unstable due to the landscape roughness. Finally, we have developed a Tabu Search (TS) mixed with a Variable Neighborhood Search (VNS) algorithm and compared it with the GLS, SA and UMTS embedded algorithms. Not only does TS outperform all the other approaches on several scenarios, but by comparing it with the theoretical solution bounds generated by the MCKP solver, we also observe that the TS solutions are equal or very close to the theoretical optima.
48

Svilan, Vjekoslav. "Analysis, optimization, and modeling of CMOS circuits incorporating variable supply voltage and adaptive body bias /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

49

Žatko, Miroslav. "Optimization of the Stator Vane Aerodynamic Loading for a Turbocharger with a Variable Nozzle Turbine." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-234359.

Abstract:
This thesis deals with the aerodynamic loading of the stator vanes of a turbocharger with a variable nozzle turbine, and with its subsequent optimization. Computational fluid dynamics methods are applied using the commercial software ANSYS CFX. A computational model of the full turbine stage is used to analyze the aerodynamic loading of the stator vanes at several vane positions and for various operating conditions. A detailed analysis of the influence of the pressure distribution in the turbine housing, the vane angle, and the spacer pins on the aerodynamic loading was performed. Subsequently, an experimental device was developed for the direct measurement of the aerodynamic torque on the stator vanes, using a test facility known as a Gas Stand. This facility burns natural gas and can create very stable flow conditions at high temperatures, which makes it possible to exclude the influence of gas pulsations, engine vibrations, and the engine control strategy on the measured quantity. The experimental results are compared with the value computed by the CFD model, and very good agreement is achieved. The validated CFD model is then reduced, using cyclic symmetry conditions, to a model of a single stator and rotor segment. This significantly increases the productivity of the simulations and makes it possible to examine several stator design parameters over the full range of stator vane motion. The sensitivity analysis of these parameters laid an excellent foundation for their subsequent optimization and revealed significant potential in several of them. Based on an analysis of the requirements on the aerodynamic loading of the stator vanes, a definition of the ideal loading was then created and set as the optimization target. Several optimization strategies based on the analysis of the acting force vectors were applied, and their results were evaluated and compared from several aspects. The resulting optimized design was then recomputed with the full turbine stage model, which confirmed its excellent properties in terms of aerodynamic loading and an efficiency increase in the lower part of the turbine map.
50

Chen, Xiaohui. "Comparisons of statistical modeling for constructing gene regulatory networks." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/4068.

Abstract:
Genetic regulatory networks are of great scientific interest and practical medical importance. Since a number of high-throughput measurement devices are available, such as microarrays and sequencing techniques, regulatory networks have been intensively studied over the last decade. Based on these high-throughput data sets, statistical interpretations of these billions of bits are crucial for biologists to extract meaningful results. In this thesis, we compare a variety of existing regression models and apply them to construct regulatory networks which span transcription factors and microRNAs. We also propose an extended algorithm to address the local optimum issue in finding the Maximum A Posteriori estimator. An E. coli mRNA expression microarray data set with known bona fide interactions is used to evaluate our models, and we show that our regression networks with a properly chosen prior can perform comparably to the state-of-the-art regulatory network construction algorithm. Finally, we apply our models to a p53-related data set, the NCI-60 data. By further incorporating available prior structural information from sequencing data, we identify several significantly enriched interactions with cell proliferation function. In both data sets, we select specific examples to show that many regulatory interactions can be confirmed by previous studies or functional enrichment analysis. By comparing statistical models, we conclude from this project that combining different models with over-representation analysis and prior structural information can improve the quality of prediction and facilitate biological interpretation. Keywords: regulatory network, variable selection, penalized maximum likelihood estimation, optimization, functional enrichment analysis.
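A minimal example of the regression-network idea: one sparse (penalized) regression per target gene, with nonzero coefficients read off as candidate regulatory edges. This sketch uses an ordinary lasso on invented data, not the thesis' models or priors:

import numpy as np
from sklearn.linear_model import Lasso

def regression_network(expr, regulator_idx, alpha=0.05):
    # expr: samples x genes expression matrix;
    # regulator_idx: columns allowed to act as regulators (e.g., TFs/microRNAs).
    n_genes = expr.shape[1]
    edges = []
    for target in range(n_genes):
        parents = [r for r in regulator_idx if r != target]
        model = Lasso(alpha=alpha).fit(expr[:, parents], expr[:, target])
        edges += [(p, target, c) for p, c in zip(parents, model.coef_)
                  if abs(c) > 1e-8]
    return edges  # (regulator, target, weight) triples

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Gene 4 is driven by genes 0 and 2; the lasso should recover those two edges.
X[:, 4] = 0.9 * X[:, 0] - 0.7 * X[:, 2] + 0.1 * rng.normal(size=100)
print(regression_network(X, regulator_idx=[0, 1, 2, 3]))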