Dissertations / Theses on the topic 'Objectifs de parité'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Objectifs de parité.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sérée, Bastien. "Problèmes d'optimisation dans les graphes paramétrés." Electronic Thesis or Diss., Ecole centrale de Nantes, 2022. http://www.theses.fr/2022ECDN0066.

Full text
Abstract:
We consider weighted directed graphs with parametrized energy. We first propose an algorithm that, given a graph and one of its vertices, returns a set of trees, each tree representing the shortest paths from the source to every other vertex of the graph for a particular zone of the parameter space; moreover, the union of these zones covers the parameter space. We then consider reachability in graphs with multi-dimensional energy, under stricter constraints that require the energy to stay between given bounds. We prove the decidability and complexity of this problem, regardless of the number of parameters and dimensions, when the parameters take integer values. We also prove the undecidability of this problem with at least one parameter when the dimension is at least two. Finally, we study one- and two-player parity games on parametrized graphs whose objective is the conjunction of a qualitative parity condition and a quantitative condition: the energy must remain positive. We show decidability and prove complexity bounds for the problem of finding a winning strategy in both the one-player and two-player cases.
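To make the bounded-energy reachability question concrete, here is a minimal Python sketch (an illustration under simplifying assumptions — fixed integer weights and hard bounds — not the thesis's decision procedure): once energy must stay in [lo, up] in every dimension, the (vertex, energy) product graph is finite and plain BFS decides reachability.

```python
# A minimal sketch of bounded-energy reachability with integer weights and
# hard lower/upper bounds on each energy dimension (illustrative only).
from collections import deque

def reachable(edges, source, target, e0, lo, up):
    """edges: dict vertex -> list of (successor, weight_vector)."""
    start = (source, tuple(e0))
    seen, queue = {start}, deque([start])
    while queue:
        v, energy = queue.popleft()
        if v == target:
            return True
        for succ, w in edges.get(v, []):
            new = tuple(e + d for e, d in zip(energy, w))
            # a run dies as soon as any dimension leaves [lo, up]
            if all(lo <= e <= up for e in new) and (succ, new) not in seen:
                seen.add((succ, new))
                queue.append((succ, new))
    return False

# two energy dimensions: the first charges while the second discharges
g = {"a": [("b", (1, -1))], "b": [("a", (-1, 1)), ("c", (2, -2))]}
print(reachable(g, "a", "c", e0=(0, 4), lo=0, up=4))  # True
```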
APA, Harvard, Vancouver, ISO, and other styles
2

Ismaïli, Anisse. "Algorithms for Nash-equilibria in Agent Networks and for Pareto-efficiency in State Space Search : Generalizations to Pareto-Nash in Multiple Objective Games." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066148.

Full text
Abstract:
An agent is an entity that decides on an action. Under this very general formalism one can model two children playing rock-paper-scissors, human beings choosing products on a market, routing software computing a shortest path across the Internet to carry information over congested digital routes, or an automatic combinatorial auction that sells commercial links and earns Google billions. Researchers in algorithmic decision theory and algorithmic game theory (mathematicians and computer scientists) like to think that these real-life examples can be modelled by means of rational decision systems, however complex reality may be. Modern interactive decision systems draw their complexity from several dimensions. On the one hand, an agent's preferences can be hard to represent with simple real numbers when multiple conflicting objectives bear on every decision. On the other hand, interactions between agents mean that each individual's rewards depend on the actions of all, making it difficult to predict the resulting action profile. The purpose of this thesis in the algorithmic theory of interactive decision systems (games) is to pursue the research efforts devoted to these two sources of complexity and, ultimately, to consider both in a single model.
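As a toy illustration of the interaction complexity discussed above (not taken from the thesis), the following Python snippet brute-forces the pure Nash equilibria of a two-player normal-form game: a cell is an equilibrium when neither player can gain by deviating unilaterally.

```python
# Brute-force search for pure Nash equilibria in a two-player game.
def pure_nash(payoff_a, payoff_b):
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_a = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_b = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if best_a and best_b:
                equilibria.append((i, j))
    return equilibria

# Rock-paper-scissors (zero-sum, cyclic) has no pure equilibrium...
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
neg = [[-x for x in row] for row in rps]
print(pure_nash(rps, neg))        # []
# ...while a simple coordination game has two.
coord = [[2, 0], [0, 1]]
print(pure_nash(coord, coord))    # [(0, 0), (1, 1)]
```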
APA, Harvard, Vancouver, ISO, and other styles
3

Rohling, Gregory Allen. "Multiple Objective Evolutionary Algorithms for Independent, Computationally Expensive Objectives." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4835.

Full text
Abstract:
This research augments current Multiple Objective Evolutionary Algorithms with methods that dramatically reduce the time required to evolve toward a region of interest in objective space. Multiple Objective Evolutionary Algorithms (MOEAs) are superior to other optimization techniques when the search space is of high dimension and contains many local minima and maxima. Likewise, MOEAs are most interesting when applied to non-intuitive complex systems. But these systems are often computationally expensive to evaluate, and when each objective requires an independent computation, the expense grows with each additional objective. This research has developed methods that reduce the time required for evolution by reducing the number of objective evaluations, while still evolving solutions that are Pareto optimal. To date, all other MOEAs require the evaluation of all objectives before a fitness value can be assigned to an individual. The original contributions of this thesis are: 1. Development of a hierarchical search space description that allows association of crossover and mutation settings with elements of the genotypic description. 2. Development of a method for parallel evaluation of individuals that removes the need for synchronization delays. 3. Dynamic evolution of thresholds for objectives to allow partial evaluation of objectives for individuals. 4. Dynamic objective orderings to minimize the time spent on unnecessary objective evaluations. 5. Application of MOEAs to the computationally expensive flare pattern design domain. 6. Application of MOEAs to the optimization of fielded missile warning receiver algorithms. 7. Development of a new method of using MOEAs for automatic design of pattern recognition systems.
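Contribution 3 can be illustrated with a small hedged sketch (function names and thresholds here are invented, not Rohling's code): objectives are evaluated one at a time, and evaluation stops as soon as a running threshold shows the individual cannot fall in the region of interest, so the remaining expensive objective calls are never made.

```python
# Partial evaluation with per-objective thresholds (minimization):
# stop as soon as one objective exceeds its threshold.
def evaluate_partially(individual, objectives, thresholds):
    values = []
    for f, limit in zip(objectives, thresholds):
        v = f(individual)          # each call may be expensive
        values.append(v)
        if v > limit:              # hopeless: skip remaining objectives
            return values, False
    return values, True

f1 = lambda x: x * x
f2 = lambda x: abs(x - 10)         # pretend this one is very costly
print(evaluate_partially(6, [f1, f2], thresholds=[25, 5]))   # ([36], False): f2 skipped
print(evaluate_partially(8, [f1, f2], thresholds=[100, 5]))  # ([64, 2], True)
```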
APA, Harvard, Vancouver, ISO, and other styles
4

Teo, Jason T. W. Information Technology & Electrical Engineering Australian Defence Force Academy UNSW. "Pareto multi-objective evolution of legged embodied organisms." Awarded by: University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2003. http://handle.unsw.edu.au/1959.4/38682.

Full text
Abstract:
The automatic synthesis of embodied creatures through artificial evolution has become a key area of research in robotics, artificial life and the cognitive sciences. However, the research has mainly focused on genetic encodings and fitness functions. Considerably less has been said about the role of controllers and how they affect the evolution of morphologies and behaviors in artificial creatures. Furthermore, the evolutionary algorithms used to evolve the controllers and morphologies are predominantly based on a single objective or a weighted combination of multiple objectives, and a large majority of the behaviors evolved are for wheeled or abstract artifacts. In this thesis, we present a systematic study of evolving artificial neural network (ANN) controllers for the legged locomotion of embodied organisms. A virtual but physically accurate world is used to simulate the evolution of locomotion behavior in a quadruped creature. An algorithm using a self-adaptive Pareto multi-objective evolutionary optimization approach is developed. The experiments are designed to address five research aims investigating: (1) the search space characteristics associated with four classes of ANNs with different connectivity types, (2) the effect of selection pressure from a self-adaptive Pareto approach on the nature of the locomotion behavior and capacity (VC-dimension) of the ANN controller generated, (3) the efficiency of the proposed approach against more conventional methods of evolutionary optimization in terms of computational cost and quality of solutions, (4) a multi-objective approach towards the comparison of evolved creature complexities, (5) the impact of relaxing certain morphological constraints on evolving locomotion controllers. The results showed that: (1) the search space is highly heterogeneous with both rugged and smooth landscape regions, (2) pure reactive controllers not requiring any hidden layer transformations were able to produce sufficiently good legged locomotion, (3) the proposed approach yielded competitive locomotion controllers while requiring significantly less computational cost, (4) multi-objectivity provided a practical and mathematically-founded methodology for comparing the complexities of evolved creatures, (5) co-evolution of morphology and mind produced significantly different creature designs that were able to generate similarly good locomotion behaviors. These findings attest that a Pareto multi-objective paradigm can spawn highly beneficial robotics and virtual reality applications.
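The Pareto relation underlying this and the surrounding entries reduces to a short generic helper; the sketch below (illustrative only, minimization in every coordinate) extracts the non-dominated set from a list of objective vectors.

```python
# Generic Pareto helpers: dominance test and non-dominated filtering.
def dominates(u, v):
    """u dominates v: no worse everywhere, strictly better somewhere."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

pop = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(pop))  # [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```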
APA, Harvard, Vancouver, ISO, and other styles
5

Ribeiro, Marco Tulio Correia. "Multi-objective pareto-efficient algorithms for recommender systems." Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/ESSA-9CHG5H.

Full text
Abstract:
Recommender systems are quickly becoming ubiquitous in applications such as e-commerce, social media channels and content providers, acting as enabling mechanisms designed to overcome the information overload problem by improving the browsing and consumption experience. A typical task in recommender systems is to output a ranked list of items, so that items placed higher in the rank are more likely to be interesting to the users. Interestingness measures include how accurate, novel and diverse the suggested items are, and the objective is usually to produce ranked lists optimizing one of these measures. Suggesting items that are simultaneously accurate, novel and diverse is much more challenging, since this may lead to a conflicting-objective problem, in which the attempt to improve one measure further may worsen the others. In this thesis we propose new approaches for multi-objective recommender systems based on the concept of Pareto-efficiency -- a state achieved when the system is devised in the most efficient manner, in the sense that there is no way to improve one of the objectives without making any other objective worse off. Given that existing recommendation algorithms differ in their level of accuracy, diversity and novelty, we exploit the Pareto-efficiency concept in two distinct manners: (i) the aggregation of ranked lists produced by existing algorithms into a single one, which we call Pareto-efficient ranking, and (ii) the weighted combination of existing algorithms resulting in a hybrid one, which we call Pareto-efficient hybridization. Our evaluation involves two real application scenarios: music recommendation with implicit feedback (i.e., Last.fm) and movie recommendation with explicit feedback (i.e., MovieLens). We show that the proposed approaches are effective in optimizing each of the metrics without hurting the others, or in optimizing all three simultaneously. Further, for the Pareto-efficient hybridization, we allow for adjusting the compromise between the metrics, so that the recommendation emphasis can be set dynamically according to the needs of different users.
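A miniature, hedged version of the hybridization idea (items, scores and weights are invented for illustration): two base recommenders are blended with a weight w, and sweeping w traces candidate trade-offs between accuracy-oriented and novelty-oriented rankings, from which the non-dominated settings would be kept.

```python
# Weighted combination of two base recommenders' item scores.
def hybrid_scores(scores_a, scores_b, w):
    return {item: w * scores_a[item] + (1 - w) * scores_b[item] for item in scores_a}

def top_k(scores, k=2):
    return sorted(scores, key=scores.get, reverse=True)[:k]

accurate = {"x": 0.9, "y": 0.8, "z": 0.1}   # recommender tuned for accuracy
novel    = {"x": 0.1, "y": 0.5, "z": 0.9}   # recommender tuned for novelty
for w in (0.0, 0.5, 1.0):
    print(w, top_k(hybrid_scores(accurate, novel, w)))
# 0.0 ['z', 'y'] / 0.5 ['y', 'x'] / 1.0 ['x', 'y']
```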
APA, Harvard, Vancouver, ISO, and other styles
6

Nordström, Peter. "Multi-objective optimization and Pareto navigation for voyage planning." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-220338.

Full text
Abstract:
The shipping industry is very large and ships require a substantial amount of fuel. However, fuel consumption is not the only concern. Time of arrival, safety concerns, distance travelled, etc. are also of importance, and these objectives may be inherently conflicting. This thesis aims to demonstrate multi-objective optimization and Pareto navigation for application in voyage planning. In order to perform this optimization, models of weather, ocean conditions, ship dynamics and the propulsion system are needed. Statistical methods are used to estimate the resistance experienced in calm and rough seas. An earlier developed framework is adopted to perform the optimization and Pareto navigation. The results show that it is a suitable approach to voyage planning. A strength of interactive Pareto navigation is the overview of the solution space presented to the decision maker and the control over the spread of the objective space. Another benefit is the possibility of assigning specific values to objectives and setting thresholds in order to narrow down the solution space. The numerical results reinforce the trend of slow steaming to decrease fuel consumption.
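The navigation step can be sketched in a few lines of Python (the voyage data below are invented): starting from a precomputed Pareto set of voyages, the decision maker narrows the solution space by imposing thresholds on selected objectives.

```python
# Interactive narrowing of a precomputed Pareto set by thresholds.
voyages = [  # (fuel in tonnes, hours at sea, distance in nm)
    (80, 60, 900), (95, 52, 940), (120, 45, 980), (70, 72, 910),
]

def narrow(front, max_fuel=float("inf"), max_time=float("inf")):
    return [v for v in front if v[0] <= max_fuel and v[1] <= max_time]

print(narrow(voyages, max_fuel=100))               # drops the 120 t option
print(narrow(voyages, max_fuel=100, max_time=65))  # two candidates remain
```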
APA, Harvard, Vancouver, ISO, and other styles
7

Zeng, Rong-Qiang. "Métaheuristiques multi-objectif basées sur des voisinages pour l'approximation d'ensembles de pareto." Angers, 2012. http://www.theses.fr/2012ANGE0054.

Full text
Abstract:
Multi-objective optimization has received more and more attention over the last twenty years. The aim is to generate the Pareto optimal set, which gathers the best possible compromises among the objectives to be optimized. Since in most cases it is not possible to compute the Pareto optimal set exactly in reasonable time, many multi-objective metaheuristics have been proposed to approximate it. This thesis is devoted to developing metaheuristics for hard multi-objective optimization problems. To solve these problems, we first propose the Hypervolume-Based Multi-Objective Local Search algorithm (HBMOLS), an iterated local search in which solution quality is measured relative to the rest of the population through a hypervolume contribution indicator; this indicator defines the algorithm's selection operator. We then integrate path relinking techniques into HBMOLS as a procedure that initializes new populations, and we present and evaluate several versions of the resulting multi-objective hybrid path relinking algorithm. To assess the efficiency and generality of our approaches, we carry out experiments on two different hard multi-objective problems: a flow-shop scheduling problem and a quadratic assignment problem.
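The hypervolume contribution used as the selection measure has a particularly simple form in the bi-objective minimization case; the sketch below (assuming the points are mutually non-dominated) computes each point's exclusive contribution as a single rectangle against its neighbours and a reference point.

```python
# Hypervolume contributions in 2D minimization: with points sorted by the
# first objective (second objective then descending), removing point i loses
# exactly the rectangle (next_x - x_i) * (prev_y - y_i).
def hv_contributions(points, ref):
    pts = sorted(points)
    contrib = {}
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        prev_y = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (next_x - x) * (prev_y - y)
    return contrib

front = [(1, 5), (2, 3), (4, 2)]
print(hv_contributions(front, ref=(6, 6)))
# {(1, 5): 1, (2, 3): 4, (4, 2): 2} -> (2, 3) contributes most, so it is kept longest
```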
APA, Harvard, Vancouver, ISO, and other styles
8

Zhong, Hongliang. "Bandit feedback in Classification and Multi-objective Optimization." Thesis, Ecole centrale de Marseille, 2016. http://www.theses.fr/2016ECDM0004/document.

Full text
Abstract:
Bandit problems constitute a sequential dynamic allocation problem. The pulling agent has to explore its environment (i.e. the arms) to gather information on the one hand, and to exploit the collected clues to increase its rewards on the other. How to adequately balance the exploration phase and the exploitation phase is the crux of bandit problems, and most of the effort devoted by the research community in this field has focused on finding the right exploration/exploitation trade-off. In this dissertation, we investigate two specific bandit problems: contextual bandit problems and multi-objective bandit problems. The dissertation makes two contributions. The first concerns classification under partial supervision, which we encode as a contextual bandit problem with side information. This kind of problem is heavily studied by researchers working on social networks and recommendation systems. We provide a series of algorithms, belonging to the Passive-Aggressive family, to solve the bandit-feedback problem. We take advantage of their grounded foundations and show that our algorithms are much simpler to implement than state-of-the-art algorithms for bandits with partial feedback, yet achieve better classification performance. For the multi-objective multi-armed bandit problem (MOMAB), we propose an effective and theoretically motivated method to identify the Pareto front of the arms. In particular, we show that we can find all elements of the Pareto front with a minimal budget, within a PAC-bound framework.
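A toy version of the MOMAB task (a fixed-budget simplification, not the thesis's PAC procedure): sample every arm, estimate its mean reward vector, and keep the arms whose empirical means are not Pareto-dominated. PAC-style analyses replace the fixed budget with confidence-bound stopping rules.

```python
# Empirical Pareto-front identification over multi-objective bandit arms.
import random

def empirical_pareto_arms(arms, pulls=500, seed=0):
    rng = random.Random(seed)
    means = {}
    for name, (mu1, mu2) in arms.items():
        s1 = sum(rng.gauss(mu1, 0.3) for _ in range(pulls)) / pulls
        s2 = sum(rng.gauss(mu2, 0.3) for _ in range(pulls)) / pulls
        means[name] = (s1, s2)
    def dominated(u, v):  # maximization: v is at least as good, better somewhere
        return all(b >= a for a, b in zip(u, v)) and any(b > a for a, b in zip(u, v))
    return [a for a in arms if not any(dominated(means[a], means[b]) for b in arms)]

arms = {"A": (0.9, 0.2), "B": (0.5, 0.5), "C": (0.2, 0.9), "D": (0.4, 0.4)}
print(empirical_pareto_arms(arms))  # expected: ['A', 'B', 'C'] (D is dominated by B)
```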
APA, Harvard, Vancouver, ISO, and other styles
9

Cvetkovic, Dragan. "Evolutionary multi-objective decision support systems for conceptual design." Thesis, University of Plymouth, 2000. http://hdl.handle.net/10026.1/2328.

Full text
Abstract:
In this thesis the problem of conceptual engineering design and the possible use of adaptive search techniques and other machine-based methods therein are explored. For the multi-objective optimisation (MOO) within the conceptual design problem, genetic algorithms (GA) adapted to MOO are used and various techniques explored: weighted sums, lexicographic order, the Pareto method with and without ranking, VEGA-like approaches, etc. A large number of runs are performed to find the optimal configuration and setting of the GA parameters. A novel method, the weighted Pareto method, is introduced and applied to a real-world optimisation problem. Decision support methods within the conceptual engineering design framework are discussed and a new preference method developed. The preference method for translating vague qualitative categories (such as 'more important', 'much less important', etc.) into quantitative values (numbers) is based on fuzzy preferences and graph theory methods. Several applications of preferences are presented and discussed: in weighted-sum-based optimisation methods; in the weighted Pareto method; for ordering and manipulating constraints and scenarios; and for a co-evolutionary, distributive GA-based MOO method. The issue of complexity and sensitivity is addressed, as well as potential generalisations of the presented preference methods. Interactive dynamical constraints in the form of design scenarios are introduced. These are based on propositional logic and a fairly rich mathematical language. They can be added, deleted and modified on-line during the design session without any need to recompile the code. The use of machine-based agents in the conceptual design process is investigated. They are classified into several categories (e.g. interface agents, search agents, information agents). Several categories of agents performing various specialised tasks are developed (mostly dealing with preferences, but also some filtering ones). They are integrated with the conceptual engineering design system to form a closed-loop system that includes both computer and designer. All these different aspects of conceptual engineering design are applied within the Plymouth Engineering Design Centre / British Aerospace conceptual airframe design project.
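One plausible formalization of a weighted Pareto relation is sketched below (hedged: this is our illustrative reading, not necessarily the thesis's exact definition): u w-dominates v when the total weight of the objectives on which u is strictly better reaches a threshold tau, so more important objectives count for more in the comparison.

```python
# Weighted dominance sketch (minimization); weights might come from the
# fuzzy-preference translation of qualitative importance categories.
def w_dominates(u, v, weights, tau=0.5):
    better = sum(w for a, b, w in zip(u, v, weights) if a < b)
    worse  = sum(w for a, b, w in zip(u, v, weights) if a > b)
    return better >= tau and worse == 0.0

w = [0.6, 0.3, 0.1]            # hypothetical weights from stated preferences
print(w_dominates([1, 4, 4], [2, 4, 4], w))  # True: wins the 0.6-weight objective
print(w_dominates([1, 4, 5], [2, 4, 4], w))  # False: strictly worse somewhere
print(w_dominates([1, 4, 3], [1, 4, 4], w))  # False: only 0.1 of weight is better
```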
APA, Harvard, Vancouver, ISO, and other styles
10

Hartley, A. C. "The theory of parity non-conservation in atoms." Thesis, University of Oxford, 1989. https://ora.ox.ac.uk/objects/uuid:24d852a0-0f6f-4fea-8b41-db6a0b02b7c4.

Full text
Abstract:
In this thesis we are concerned with calculating the parity non-conserving (PNC) E1 transition matrix elements for the caesium 6s1/2 → 7s1/2 transition and the thallium 6p1/2 → 7p1/2 and 6p1/2 → 6p3/2 transitions using the Equations of Motion (EOM) formalism. We add the weak interaction Hamiltonian to derive the PNC EOM. Taking a subset of the EOM terms gives the Random Phase Approximation equations and their PNC counterpart. Solving these, we obtain good agreement with similar calculations by other groups. In order to estimate correlation effects we use a parameterised semi-empirical potential which is adjusted to give the best fit with experimental data for the valence and excited state energy levels. Including the potential in the EOM and PNC EOM gives us very accurate values for the parity conserving transitions. For the PNC transitions we obtain

Caesium 6s1/2 → 7s1/2: 0.895(1 ± 0.03) × 10^-11 (-i e a0 Q_W/N)
Thallium 6p1/2 → 7p1/2: -7.85(1 ± 0.05) × 10^-11 (-i e a0 Q_W/N)
Thallium 6p1/2 → 6p3/2: -28.4(1 ± 0.07) × 10^-11 (-i e a0 Q_W/N)

These are in very good agreement with the most extensive Many-Body Perturbation Theory calculations performed. Using our value for the caesium transition matrix element and the latest experimental results gives a value of Q_W = -71.8 ± 1.8 ± 2.1, where the first error is experimental and the second theoretical. This corresponds to a value of the standard model sin²θ_W = 0.230 ± 0.009, which is to be compared with the current world average value of 0.230 ± 0.005. We investigate the single-particle EOM terms that were not included in the above calculation and find that they are concerned with the Exclusion-Principle-violating terms that are implicitly included in an RPA calculation. Other terms represent the valence contribution to certain two-particle effects. Since the main two-particle terms have not been included, however, these correction terms do not lead to a significant increase in the accuracy of the calculation.
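As a rough consistency check (tree level only, ignoring the radiative corrections a full analysis includes), the standard-model relation Q_W ≈ Z(1 - 4 sin²θ_W) - N can be evaluated for caesium-133:

```python
# Tree-level weak charge of caesium-133; radiative corrections shift this
# by a few percent, which is why the fitted value above differs slightly.
Z, N = 55, 78
sin2 = 0.230
q_w = Z * (1 - 4 * sin2) - N
print(round(q_w, 1))   # -73.6, in the vicinity of the quoted -71.8
```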

APA, Harvard, Vancouver, ISO, and other styles
11

Jamain, Florian. "Représentations discrètes de l'ensemble des points non dominés pour des problèmes d'optimisation multi-objectifs." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01070041.

Full text
Abstract:
The goal of this thesis is to propose general methods for circumventing the intractability of multi-objective optimization problems. We first try to gauge the extent of this intractability by determining an easily computable upper bound on the number of non-dominated points, given the number of values taken on each criterion. We then set out to produce discrete and tractable representations of the set of non-dominated points for any instance of a multi-objective optimization problem. These representations must satisfy conditions of coverage, i.e. provide a good approximation; of cardinality, i.e. not contain too many points; and, if possible, of stability, i.e. not contain redundancies. Drawing on work aimed at producing small ε-Pareto sets, we first propose a direct extension of that work, and then focus our research on ε-Pareto sets satisfying an additional stability condition. Formally, we consider particular ε-Pareto sets, called (ε, ε′)-kernels, which satisfy a stability property related to ε′. We establish general results on (ε, ε′)-kernels, then propose polynomial-time algorithms that produce small (ε, ε′)-kernels in the bi-objective case, and give negative results for more than two objectives.
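The simplest relative of the (ε, ε′)-kernels studied here is the classical greedy ε-Pareto construction for two objectives, sketched below (a generic textbook scheme, not the thesis's algorithm): scan the non-dominated points by increasing first objective and keep a point only when the last kept one fails to (1+ε)-cover it.

```python
# Greedy epsilon-Pareto set for a bi-objective minimization front.
def covers(p, q, eps):
    """p (1+eps)-covers q when p is within factor (1+eps) of q everywhere."""
    return all(a <= (1 + eps) * b for a, b in zip(p, q))

def eps_pareto(front, eps):
    kept = []
    for p in sorted(front):            # ascending first objective
        if not kept or not covers(kept[-1], p, eps):
            kept.append(p)
    return kept

front = [(1, 100), (1.05, 99), (2, 50), (2.1, 49), (4, 10)]
print(eps_pareto(front, eps=0.1))  # [(1, 100), (2, 50), (4, 10)]
```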
APA, Harvard, Vancouver, ISO, and other styles
12

Oujebbour, Fatima Zahra. "Méthodes et applications industrielles en optimisation multi-critère de paramètres de processus et de forme en emboutissage." Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00976639.

Full text
Abstract:
Given today's competitive and economic pressures in the automotive sector, stamping, as a forming process based on large deformations, has the advantage of producing, at high rates, parts of better geometric quality than other mechanical manufacturing processes. It is, however, difficult to set up; in industry this is usually done by the classical trial-and-error method, which is long and very costly. In research, simulating the process with the finite element method is an alternative. It is currently one of the technological innovations that seek to reduce the cost of producing and building tooling, and it eases the analysis and resolution of problems related to the process. The objective of this thesis is to predict and prevent, in particular, springback and rupture. These two problems are the most widespread in stamping and present a difficulty for optimization since they are antagonistic. A part formed by stamping with a cross-shaped punch was the subject of the study. We first analysed the sensitivity of the two phenomena to two characteristic parameters of the stamping process (the initial blank thickness and the punch speed), then to four parameters (the initial blank thickness, the punch speed, the blank-holder force and the friction coefficient), and finally to the shape of the blank contour. Resorting to metamodels to optimize the two criteria was necessary.
APA, Harvard, Vancouver, ISO, and other styles
13

Cáceres, Sepúlveda Geraldine. "Relevance of Multi-Objective Optimization in the Chemical Engineering Field." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39783.

Full text
Abstract:
The first objective of this research project is to carry out multi-objective optimization (MOO) for four simple chemical engineering processes to clearly demonstrate the wealth of information on a given process that can be obtained from MOO instead of a single aggregate objective function. The four optimization case studies are the design of a PI controller, an SO2-to-SO3 reactor, a distillation column and an acrolein reactor. The results obtained from these case studies show the benefit of generating and using the Pareto domain to gain a deeper understanding of the underlying relationships between the various process variables and the different performance objectives. In addition, an acrylic acid production plant model is developed in order to propose a methodology for solving the multi-objective optimization of the two-reactor system model using artificial neural networks (ANNs) as metamodels, in an effort to reduce the computational time requirement that is usually very high when first-principles models are employed to approximate the Pareto domain. Once the metamodel was trained, the Pareto domain was circumscribed using a genetic algorithm and ranked with the Net Flow Method (NFM). After the MOO was carried out with the ANN surrogate model, the optimization time was reduced by a factor of 15.5.
APA, Harvard, Vancouver, ISO, and other styles
14

Amouzgar, Kaveh. "Multi-objective optimization using Genetic Algorithms." Thesis, Högskolan i Jönköping, Tekniska Högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-19851.

Full text
Abstract:
In this thesis, the basic principles and concepts of single and multi-objective Genetic Algorithms (GA) are reviewed. Two algorithms, one for single-objective and the other for multi-objective problems, which are believed to be more efficient, are described in detail. The algorithms are coded in MATLAB and applied to several test functions. The results are compared with existing solutions in the literature and show promise: the obtained Pareto fronts closely match the true Pareto fronts, with a good spread of solutions throughout the optimal region. Constraint handling techniques are studied and applied in the two algorithms. Constrained benchmarks are optimized and the outcomes show the ability of the algorithms to maintain solutions across the entire Pareto-optimal region. In the end, a hybrid method based on the combination of the two algorithms is introduced and its performance is discussed. It is concluded that no significant strength is observed within the approach and more research is required on this topic. For further investigation of the performance of the proposed techniques, implementation on real-world engineering applications is recommended.
APA, Harvard, Vancouver, ISO, and other styles
15

Amouzgar, Kaveh. "Metamodel based multi-objective optimization." Licentiate thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Simulering och optimering, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28432.

Full text
Abstract:
As a result of the increase in accessibility of computational resources and in the power of computers during the last two decades, designers are able to create computer models to simulate the behavior of complex products. To address global competitiveness, companies are forced to optimize their designs and products. Optimizing a design requires several runs of computationally expensive simulation models. Therefore, using metamodels as an efficient and sufficiently accurate approximation of the simulation model is necessary. Radial basis functions (RBF) are one of several metamodeling methods found in the literature. The established approach is to add a bias to the RBF in order to obtain robust performance. The a posteriori bias is considered unknown at the beginning and is defined by imposing extra orthogonality constraints. In this thesis, a new approach is proposed in which the RBF bias is set a priori using the normal equation. The performance of the suggested approach is compared to the classic RBF with a posteriori bias, and a further comprehensive comparison study is conducted across several modeling criteria, such as problem dimension, sampling technique and sample size. The studies demonstrate that the suggested approach with a priori bias performs, in general, as well as RBF with a posteriori bias. Using the a priori RBF, it is clear that the global response is modeled with the bias and that the details are captured with the radial basis functions. Multi-objective optimization and the approaches used in solving such problems are briefly described in this thesis. One of the methods that has proved efficient in solving multi-objective optimization problems (MOOP) is the strength Pareto evolutionary algorithm (SPEA2). Multi-objective optimization of a disc brake system of a heavy truck using SPEA2 and RBF with a priori bias is performed. As a result, the possibility of reducing the weight of the system without extensive compromise in the other objectives is found. Multi-objective optimization of the material model parameters of an adhesive layer, with the aim of improving the results of a previous study, is also implemented. The result of the original study is improved and a clear insight into the nature of the problem is revealed.
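Our reading of the a-priori-bias construction can be sketched with numpy (an assumption-laden illustration, not the author's code): the linear bias is fixed first from the least-squares normal equations, and Gaussian RBFs then interpolate only the residual.

```python
# RBF metamodel with an a priori linear bias (illustrative sketch).
import numpy as np

def fit_rbf_apriori(X, y, gamma=1.0):
    A = np.hstack([np.ones((len(X), 1)), X])       # [1, x] design matrix
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # bias fixed a priori
    r = y - A @ beta                               # residual left to the RBFs
    Phi = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(Phi, r)                    # exact interpolation of r
    def predict(Xq):
        Aq = np.hstack([np.ones((len(Xq), 1)), Xq])
        Pq = np.exp(-gamma * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return Aq @ beta + Pq @ w
    return predict

X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y = 2 * X[:, 0] + np.sin(3 * X[:, 0])              # global trend + local detail
model = fit_rbf_apriori(X, y)
print(model(np.array([[0.75]])))                   # interpolates between samples
```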
APA, Harvard, Vancouver, ISO, and other styles
16

Götz, Sebastian, Thomas Kühn, Christian Piechnick, Georg Püschel, and Uwe Aßmann. "A Models@run.time Approach for Multi-objective Self-optimizing Software." Springer, 2014. https://tud.qucosa.de/id/qucosa%3A75372.

Full text
Abstract:
This paper presents an approach to operate multi-objective self-optimizing software systems based on the models@run.time paradigm. In contrast to existing approaches, which are usually specific to a single or selected set of objectives (e.g., performance and/or reliability), the presented approach is generic in that it allows the software architect to model the relevant concerns of interest to self-optimization. At runtime, these models are interpreted and used to generate optimization problems. To evaluate the applicability of the approach, a scalability analysis is provided, showing the approach’s feasibility for at least two objectives.
APA, Harvard, Vancouver, ISO, and other styles
17

Bahri, Oumayma. "A fuzzy framework for multi-objective optimization under uncertainty." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10030/document.

Full text
Abstract:
This thesis is devoted to the study of multi-objective combinatorial optimization under uncertainty. In particular, we address multi-objective problems with fuzzy data, in which fuzziness is expressed by triangular fuzzy numbers. To handle such problems, our main idea is to extend the classical multi-objective concepts to the fuzzy context. We first propose a new Pareto approach between fuzzy-valued objectives (i.e. vectors of triangular fuzzy numbers), and then extend Pareto-based metaheuristics so that they converge toward fuzzy optimal solutions. The proposed approach is illustrated on a bi-objective vehicle routing problem with fuzzy demands. In the second part of this work, we address the robustness aspect in the multi-objective fuzzy context by proposing a new methodology for evaluating the robustness of solutions. Finally, experimental results on fuzzy benchmarks of the vehicle routing problem demonstrate the effectiveness and reliability of our approach.
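The basic ingredient such an approach needs is a way to compare vectors of triangular fuzzy numbers; the hedged sketch below uses plain centroid defuzzification, which is one simple choice rather than the thesis's exact fuzzy Pareto relation.

```python
# Dominance between solutions whose objectives are triangular fuzzy
# numbers (l, m, u), compared via their centroids (minimization).
def centroid(t):
    l, m, u = t
    return (l + m + u) / 3

def fuzzy_dominates(a, b):
    ca, cb = [centroid(t) for t in a], [centroid(t) for t in b]
    return all(x <= y for x, y in zip(ca, cb)) and any(x < y for x, y in zip(ca, cb))

route1 = ((8, 10, 12), (3, 4, 6))    # fuzzy cost, fuzzy duration
route2 = ((9, 12, 15), (4, 5, 6))
print(fuzzy_dominates(route1, route2))  # True
```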
APA, Harvard, Vancouver, ISO, and other styles
18

Lemesre, Julien. "Méthodes exactes pour l'optimisation combinatoire multi-objectif : conception et application." Lille 1, 2006. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2006/50376_2006_278.pdf.

Full text
Abstract:
This thesis belongs to the field of multi-objective combinatorial optimization. It focuses, more specifically, on exact resolution methods that find the entire Pareto front. To test and compare our methods, we use a multi-objective flow-shop problem (a scheduling problem). We present various exact methods from the literature and analyse the ranges in which each can be used efficiently. To solve the bi-objective flow-shop problem, we first propose an application of the two-phase method, optimized for the specificities of our problem. We then propose a new exact method for solving bi-objective problems: the Parallel Partitioning Method (PPM). We present an extension of this method to a general exact multi-objective method (admitting more than two objectives) and its application to a tri-objective flow-shop problem. Since the proposed methods are exact, they require considerable computation time. Finally, we study two ways of reducing computation time while still obtaining the exact Pareto front: parallelism and hybridization with a heuristic method. To broaden the scope of the thesis, we also present a hybridization between an exact method and a metaheuristic that returns a heuristic result, which illustrates one possible use of exact methods on large problems.
APA, Harvard, Vancouver, ISO, and other styles
19

Yoshikawa, Tomohiro, Daisuke Yamashiro, and Takeshi Furuhashi. "A Proposal of Visualization of Multi-Objective Pareto Solutions -Development of Mining Technique for Solutions-." IEEE, 2007. http://hdl.handle.net/2237/9482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ireland, David John. "Dielectric Antennas and Their Realisation Using a Pareto Dominance Multi-Objective Particle Swarm Optimisation Algorithm." Thesis, Griffith University, 2010. http://hdl.handle.net/10072/365312.

Full text
Abstract:
Antennas utilising a dielectric medium are technologies that have become popular in modern wireless platforms. They offer several desirable features such as high efficiency, electrically small size and resistance to proximity detuning. Being volumetric radiators, however, realising a final, commercially competitive solution often requires the use of a computational optimisation algorithm. In the realm of antenna design, the practice of optimisation typically involves an automated routine consisting of a heuristic algorithm and a forward-solving engine such as the finite element method (FEM) or the finite difference time domain (FDTD) method. The solving engine is used to derive a post-processed performance value typically referred to as an objective or fitness function, while the heuristic method uses the objective function data to determine the next trial solution or solutions that approach a design goal. Nowadays, commercially viable antenna platforms are not characterised by a single performance value but rather by a series of objective functions that are often inherently conflicting; an increase in one objective function results in a decrease in another. The optimisation algorithm is therefore required to seek a solution dictated by the preferences of the designer. The classical literature predominantly featured a priori preference articulation, where the set of objectives is transformed into a scalar using a predefined preference arrangement. Contemporary theory implements the articulation a posteriori, where the complete set of compromise solutions is sought by the optimisation algorithm. It is hypothesised that modern multi-objective optimisation (MOO) theory, using a posteriori preference articulation, can be more useful for contemporary antenna design. By treating the objectives as individual dimensions in a mathematical space, it allows for independent, simultaneous optimisation. At the time of writing this dissertation, all commercial simulation software that includes an optimisation algorithm uses a predefined preference over the performance criteria; thus, where a large set of equally viable solutions exists, only one final solution is delivered. This thesis examines two novel dielectric antenna technologies and uses modern MOO theory to obtain new solutions that supersede their prototypes. Taking a commercial perspective by optimising the electromagnetic performance and the physical size of the antenna simultaneously, it is hypothesised that this allows unprecedented insight into the inherent trade-offs of practical antenna configurations.
APA, Harvard, Vancouver, ISO, and other styles
21

Champion, Heather. "Beam angle and fluence map optimization for PARETO multi-objective intensity modulated radiation therapy treatment planning." Medical Physics, 2011. http://hdl.handle.net/1993/8910.

Full text
Abstract:
In this work we introduce PARETO, a multi-objective optimization tool that simultaneously optimizes beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning using a powerful genetic algorithm. We also investigate various objective functions and compare several parameterizations for modeling beam fluence in terms of fluence map complexity, solution quality, and run efficiency. We have found that the combination of a conformity-based Planning Target Volume (PTV) objective function and a dose-volume histogram or equivalent uniform dose-based objective function for Organs-At-Risk (OARs) produced relatively uniform and conformal PTV doses, with well-spaced beams. For two patient data sets, the linear gradient and beam group fluence parameterizations produced superior solution quality using a moderate and a high degree of modulation, respectively, and had comparable run times. PARETO promises to improve the accuracy and efficiency of treatment planning by fully automating the optimization and producing a database of non-dominated solutions for each patient.
APA, Harvard, Vancouver, ISO, and other styles
22

Vandervoort, Allan. "New Multi-Objective Optimization Techniques and Their Application to Complex Chemical Engineering Problems." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19785.

Full text
Abstract:
In this study, two new Multi-Objective Optimization (MOO) techniques are developed. The two new techniques, the Objective-Based Gradient Algorithm (OBGA) and the Principal Component Grid Algorithm (PCGA), were developed with the goals of improving the accuracy and efficiency of the Pareto domain approximation relative to current MOO techniques. Both methods were compared to current MOO techniques using several test problems. It was found that both the OBGA and PCGA systematically produced a more accurate Pareto domain than current MOO techniques used for comparison, for all problems studied. The OBGA requires less computation time than the current MOO methods for relatively simple problems whereas for more complex objective functions, the computation time was larger. On the other hand, the efficiency of the PCGA was higher than the current MOO techniques for all problems tested. The new techniques were also applied to complex chemical engineering problems. The OBGA was applied to an industrial reactor producing ethylene oxide from ethylene. The optimization varied four of the reactor input parameters, and the selectivity, productivity and a safety factor related to the presence of oxygen in the reactor were maximized. From the optimization results, recommendations were made based on the ideal reactor operating conditions, and the control of key reactor parameters. The PCGA was applied to a PI controller model to develop new tuning methods based on the Pareto domain. The developed controller tuning methods were compared to several previously developed controller correlations. It was found that all previously developed controller correlations showed equal or worse performance than that based on the Pareto domain. The tuning methods were applied to a fourth order process and a process with a disturbance, and demonstrated excellent performance.
APA, Harvard, Vancouver, ISO, and other styles
23

Good, Nathan Andrew. "Multi-Objective Design Optimization Considering Uncertainty in a Multi-Disciplinary Ship Synthesis Model." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/34532.

Full text
Abstract:
Multi-disciplinary ship synthesis models and multi-objective optimization techniques are increasingly being used in ship design. Multi-disciplinary models allow designers to break away from the traditional design spiral approach and focus on searching the design space for the best overall design instead of the best discipline-specific design. Complex design problems such as these often have high levels of uncertainty associated with them, and since most optimization algorithms tend to push solutions to constraint boundaries, the calculated "best" solution might be infeasible if there are minor uncertainties related to the model or problem definition. Consequently, there is a need to address uncertainty in optimization problems to produce effective and reliable results. This thesis focuses on adding a third objective, uncertainty, to the effectiveness and cost objectives already present in a multi-disciplinary ship synthesis model. Uncertainty is quantified using a "confidence of success" (CoS) calculation based on the mean value method. CoS is the probability that a design will satisfy all constraints and meet performance objectives. This work proves that the CoS concept can be applied to synthesis models to estimate uncertainty early in the design process. Multiple sources of uncertainty are realistically quantified and represented in the model in order to investigate their relative importance to the overall uncertainty. This work also presents methods to encourage a uniform distribution of points across the Pareto front. With a well defined front, designs can be selected and refined using a gradient based optimization algorithm to optimize a single objective while holding the others fixed.
APA, Harvard, Vancouver, ISO, and other styles
24

Hou, Ruizhe. "Optimal Latin Hypercube Designs for Computer Experiments Based on Multiple Objectives." Scholar Commons, 2018. http://scholarcommons.usf.edu/etd/7169.

Full text
Abstract:
Latin hypercube designs (LHDs) have broad applications in constructing computer experiments and sampling for Monte-Carlo integration due to their nice property of having projections evenly distributed on the univariate distribution of each input variable. LHDs have been combined with some commonly used computer experimental design criteria to achieve enhanced design performance. For example, Maximin-LHDs were developed to improve the space-filling property in the full dimension of all input variables. MaxPro-LHDs were proposed in recent years to obtain nicer projections in any subspace of input variables. This thesis integrates both space-filling and projection characteristics for LHDs and develops new algorithms for constructing optimal LHDs that achieve nice properties on both criteria, based on the Pareto front optimization approach. The new LHDs are evaluated through case studies and compared with traditional methods to demonstrate their improved performance.
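The maximin criterion on LHDs is easy to demonstrate (a naive random-restart sketch, far cruder than the optimization algorithms developed in the thesis): every column is a permutation of levels, which gives the Latin property, and we keep the design whose minimum pairwise distance is largest.

```python
# Random-restart search for a maximin Latin hypercube design.
import random, itertools, math

def random_lhd(n, d, rng):
    cols = [rng.sample(range(n), n) for _ in range(d)]   # one permutation per column
    return [tuple(col[i] for col in cols) for i in range(n)]

def min_dist(design):
    return min(math.dist(p, q) for p, q in itertools.combinations(design, 2))

rng = random.Random(1)
best = max((random_lhd(8, 2, rng) for _ in range(2000)), key=min_dist)
print(best, round(min_dist(best), 3))
```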
APA, Harvard, Vancouver, ISO, and other styles
25

Kamali, Aslan. "Developing a Decision Making Approach for District Cooling Systems Design using Multi-objective Optimization." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-208228.

Full text
Abstract:
Energy consumption rates have been increasing dramatically on a global scale within the last few decades. A significant role in this increase is played by the recent high temperature levels, especially in summer, which have caused a rapid rise in air conditioning demand. This phenomenon can be clearly observed in developing countries, especially those in hot climate regions, where people depend mainly on conventional air conditioning systems. These systems often show poor performance and thus negatively impact the environment, which in turn contributes to global warming. In recent years, the demand for urban or district cooling technologies and networks has been increasing significantly as an alternative to conventional systems, due to their higher efficiency and improved ecological impact. However, obtaining an efficient design for district cooling systems is a complex task that requires considering a wide range of cooling technologies, various network layout configurations, and several energy resources to be integrated. Thus, critical decisions have to be made regarding a variety of opportunities, options and technologies. The main objective of this thesis is to develop a tool to obtain preliminary design configurations and operation patterns for district cooling energy systems by performing moderately detailed optimizations and, further, to introduce a decision-making approach to help decision-makers evaluate the economic aspects and environmental performance of urban cooling systems at an early design stage. Different aspects of the subject have been investigated in the literature by several researchers. A brief survey of the state of the art revealed that mathematical programming models were the most common and successful technique for configuring and designing cooling systems for urban areas. As an outcome of the survey, multi-objective optimization models were chosen to support the decision-making process. Hence, a multi-objective optimization model has been developed to address the complicated decision-making involved in designing a cooling system for an urban area or district. The model aims to optimize several elements of a cooling system, such as the cooling network, the cooling technologies, and the capacity and location of system equipment. In addition, various energy resources have been taken into consideration, as well as different solar technologies such as trough solar concentrators, vacuum solar collectors and PV panels. The model was developed based on the mixed-integer linear programming (MILP) method and implemented in the GAMS language. Two case studies were investigated using the developed model. The first case study consists of seven buildings representing a residential district, while the second is a university campus district dominated by non-residential buildings. The study was carried out for several groups of scenarios investigating particular design parameters and operating conditions, such as available area, production plant location, cold storage location constraints, piping prices, investment cost, constant and variable electricity tariffs, solar energy integration policy, waste heat availability, load-shifting strategies, and the effect of outdoor temperature in hot regions on district cooling system performance. The investigation consisted of three stages, with total annual cost and CO2 emissions being the first and second single-objective optimization stages. The third stage was a multi-objective optimization combining the earlier two single objectives. Subsequently, non-dominated solutions, i.e. Pareto solutions, were generated by running several multi-objective optimization scenarios based on the decision-makers' preferences. Finally, a decision-making approach was developed to help decision-makers select the solution that best fits the designers' or decision-makers' preferences, based on the difference between the Utopia and Nadir values, i.e. the total annual cost and CO2 emissions obtained at the single-objective optimization stages.
Energy consumption rates have increased dramatically on a global scale in recent decades. This increase is largely due to the recently high temperature levels, especially in summer, which cause a sharp rise in the demand for air conditioning. Such developments can be clearly observed in developing countries, above all in hot climate regions, where people mainly use conventional air conditioning systems. These systems usually perform inefficiently and thus have a negative impact on the environment, which in turn contributes to global warming. In recent years, demand for urban or district cooling technologies and networks as an alternative to conventional systems has risen sharply because of their higher efficiency and better ecological compatibility. Obtaining an efficient design for district cooling systems is, however, a complex task that requires integrating a wide range of cooling technologies, various network layout configurations and different energy sources. This calls for critical decisions regarding a multitude of possibilities, options and technologies. The main goal of this work is to develop a tool that delivers preliminary design configurations and operation patterns for district cooling energy systems by performing sufficiently detailed optimizations, and further to introduce a decision-making approach that supports decision-makers at an early planning stage in evaluating urban cooling systems with respect to their economic aspects and environmental performance. Different aspects of this problem have been investigated in the literature by various researchers. A brief analysis of the current state of the art showed that mathematical programming models are the most widespread and most successful method for configuring and designing cooling systems for urban areas. A further outcome of the analysis was the decision to use multi-objective optimization models to support the decision-making process. On this basis, a multi-objective optimization model was developed in this work to solve the complex decision-making problem of designing a cooling system for an urban area or district. The model aims to optimize several elements of the cooling system, such as cooling networks, cooling technologies, and the capacity and location of the system equipment. In addition, various energy sources are considered, including solar technologies such as solar concentrators, vacuum solar collectors and PV modules. The model was developed on the basis of mixed-integer linear programming (MILP) and implemented in the GAMS language. Two case studies were investigated with the developed model. The first case study consists of seven buildings representing a residential district, while the second case study represents a university campus dominated by non-residential buildings.
The investigation was carried out for several groups of scenarios, examining particular design parameters and operating conditions such as the available area, the location of the cooling plant, local constraints on cold storage, pipe prices, investment costs, constant and variable electricity tariffs, the strategy for integrating solar energy, the availability of waste heat, load-shifting strategies, and the effect of the outdoor temperature in hot regions on the performance of the cooling system. The investigation consisted of three stages, with the total annual cost and the CO2 emissions constituting the first and second single-objective optimization stages. The third stage was a Pareto optimization combining the first two objectives. Subsequently, non-dominated solutions, i.e. Pareto solutions, were generated by running several Pareto optimization scenarios based on the preferences of the decision-makers. Finally, a decision-making approach was developed to support decision-makers in selecting a specific solution that best matches the preferences of the planner or decision-maker, based on the difference between the Utopia and Nadir values, i.e. the total annual cost and CO2 emissions resulting from the individual optimization stages.
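The Utopia/Nadir-based selection step described in this abstract can be made concrete with a small sketch. The following Python snippet is illustrative only: the objective values, and the use of a normalised Euclidean distance to the Utopia point, are assumptions rather than the thesis' exact procedure.

```python
# Sketch of the decision-making step described above: pick the Pareto
# solution closest to the Utopia point after normalising each objective
# by its Utopia-Nadir range. Values are illustrative, not from the thesis.
import math

# (total annual cost, CO2 emissions) of Pareto-optimal designs
pareto = [(1.00e6, 9.0e3), (1.15e6, 7.2e3), (1.40e6, 6.1e3), (1.80e6, 5.8e3)]

utopia = tuple(min(p[i] for p in pareto) for i in range(2))  # best of each objective
nadir = tuple(max(p[i] for p in pareto) for i in range(2))   # worst over the Pareto set

def normalised_distance(point):
    # Euclidean distance to Utopia in [0, 1]-scaled objective space
    return math.sqrt(sum(((point[i] - utopia[i]) / (nadir[i] - utopia[i])) ** 2
                         for i in range(2)))

best = min(pareto, key=normalised_distance)
print(best)  # the balanced cost/emissions compromise
```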
APA, Harvard, Vancouver, ISO, and other styles
26

Adam, Zaeinulabddin Mohamed Ahmed. "Development and Applications of Multi-Objectives Signal Control Strategy during Oversaturated Conditions." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28739.

Full text
Abstract:
Managing traffic during oversaturated conditions is a current challenge for practitioners due to the lack of adequate tools that can handle such situations. Unlike under-saturated conditions, operating traffic signal systems during congestion requires careful consideration and analysis of the underlying causes of the congestion before developing mitigation strategies. The objectives of this research are to provide practical guidance for practitioners to identify oversaturated scenarios and to develop a multi-objective methodology for selecting and evaluating a mitigation strategy, or combination of strategies, based on guiding principles. The research focused on traffic control strategies that can be implemented by traffic signal systems; it did not consider strategies that deal with demand reduction or seek to influence departure-time or route choice. The proposed timing methodology starts by detecting the network's critical routes as a necessary step in identifying traffic patterns and potential problematic scenarios. A wide array of control strategies is defined and categorized to address oversaturated scenarios. A timing procedure was then developed using the principles of oversaturation timing in cycle selection, split allocation, offset design, demand overflow, and queue allocation in non-critical links. Three regimes of operation were defined and considered in oversaturation timing: (1) loading, (2) processing, and (3) recovery. The research also provides a closed-form formula for switching control plans between the oversaturation regimes. The selection of the optimal control plan is formulated as a linear integer programming problem. Microscopic simulation results for two arterial test cases revealed that traffic control strategies developed using the proposed framework led to tangible performance improvements when compared to signal control strategies designed for operation in under-saturated conditions. The generated control plans successfully allocate queues to network links.
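As a rough illustration of the plan-selection step, which the abstract formulates as a linear integer program, the following Python sketch enumerates plan assignments over the three oversaturation regimes. The plans, delay costs and switching penalty are invented for illustration, and a real formulation would use an integer-programming solver rather than enumeration.

```python
# Toy version of the plan-selection problem described above: assign one
# signal-timing plan to each oversaturation regime so that total delay cost
# is minimised, subject to a penalty for switching plans between regimes.
# Costs and plan names are invented for illustration.
from itertools import product

regimes = ["loading", "processing", "recovery"]
plans = ["P1", "P2", "P3"]
delay_cost = {  # delay_cost[regime][plan], vehicle-hours (made up)
    "loading":    {"P1": 120, "P2": 150, "P3": 180},
    "processing": {"P1": 200, "P2": 140, "P3": 160},
    "recovery":   {"P1": 110, "P2": 130, "P3": 90},
}
switch_penalty = 25  # cost of changing plans between consecutive regimes

def total_cost(assignment):
    cost = sum(delay_cost[r][p] for r, p in zip(regimes, assignment))
    cost += switch_penalty * sum(a != b for a, b in zip(assignment, assignment[1:]))
    return cost

best = min(product(plans, repeat=len(regimes)), key=total_cost)
print(best, total_cost(best))  # ('P1', 'P2', 'P3') with cost 400 for these numbers
```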
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
27

Ujihara, Rintaro. "Multi-objective optimization for model selection in music classification." Thesis, KTH, Optimeringslära och systemteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298370.

Full text
Abstract:
With the breakthrough of machine learning techniques, research concerning music emotion classification has made notable progress by combining various audio features and state-of-the-art machine learning models. Still, it is known that how to preprocess music samples and which classification algorithm to use depend on the data set and the objective of each project. The collaborating company of this thesis, Ichigoichie AB, is currently developing a system to categorize music data into positive/negative classes. To enhance the accuracy of the existing system, this project aims to identify the best model through experiments with six audio features (Mel spectrogram, MFCC, HPSS, Onset, CENS, Tonnetz) and several machine learning models, including deep neural network models, for the classification task. For each model, hyperparameter tuning is performed and the model evaluation is carried out according to Pareto optimality with regard to accuracy and execution time. The results show that the most promising model accomplished 95% correct classification with an execution time of less than 15 seconds.
With the breakthrough of machine learning techniques, research on emotion classification in music has made significant progress by combining various music analysis tools with new machine learning models. Even so, how the audio data should be preprocessed and which machine classification algorithm should be applied depend on the type of data at hand and the goal of the project. The collaborating partner of this thesis, Ichigoichie AB, is currently developing a system to categorize music data according to positive and negative emotions. To increase the accuracy of the system, the goal of this thesis is to experimentally find the best model based on six audio features (Mel spectrogram, MFCC, HPSS, Onset, CENS and Tonnetz) and a number of different machine learning models, including deep learning models. Each model is hyperparameter-tuned and evaluated according to Pareto optimality with respect to accuracy and computation time. The results show that the most promising model achieved 95% correct classification with a computation time of less than 15 seconds.
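The Pareto-optimality rule used for model selection in this work can be sketched as follows; the candidate models and their accuracy/time figures are hypothetical.

```python
# Sketch of the model-selection rule used above: keep the models that are
# Pareto-optimal in (accuracy, execution time). Figures are illustrative.
candidates = {
    "mlp":    (0.95, 14.2),   # (accuracy, seconds)
    "cnn":    (0.93, 28.0),
    "svm":    (0.90, 3.1),
    "logreg": (0.84, 0.4),
    "xgb":    (0.92, 2.5),
}

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and not identical
    return a[0] >= b[0] and a[1] <= b[1] and a != b

pareto = {name: perf for name, perf in candidates.items()
          if not any(dominates(other, perf) for other in candidates.values())}
print(pareto)  # mlp (most accurate), xgb and logreg (fastest) survive
```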
APA, Harvard, Vancouver, ISO, and other styles
28

Hopkins, Scott Dale. "Modeling and Multi-Objective Optimization of the Helsinki District Heating System and Establishing the Basis for Modeling the Finnish Power Network." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23096.

Full text
Abstract:
Due to an increasing awareness of the importance of sustainable energy use, multi-objective optimization problems for upper-level energy systems are continually being developed and improved. This thesis focuses on the modeling and optimization of the Helsinki district heating system and on establishing the basis for modeling the Finnish power network. The optimization of the district heating system is conducted for a twenty-four-hour winter demand period. Partial-load behavior of the generators is included by introducing non-linear functions for costs, emissions, and exergetic efficiency. A fuel cost sensitivity analysis is conducted on the system by considering ten combinations of fuel costs based on high, medium, and low prices for each fuel. The solution sets, called Pareto fronts, are evaluated by post-processing techniques in order to determine the best solution from the optimal set. Because the units of some of the objective functions are non-commensurable, objective values are normalized and weighted. The results indicate that for today's fuel prices the best solution includes a dominant share of natural gas technologies, while if the price of natural gas is higher than that of other fuels, natural gas technologies are often not included in the best solution. All of the necessary cost, emissions, and operating information is provided for the Finnish power network in order to enable a multi-objective optimization of that system.
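The normalisation-and-weighting step mentioned in this abstract can be illustrated with a short sketch; the objective values and weights below are placeholders, not data from the thesis.

```python
# Sketch of the post-processing described above: min-max normalise each
# objective so non-commensurable units can be weighted and summed.
solutions = [  # (cost [EUR], CO2 [t], exergy loss [MWh]) per Pareto solution
    (410.0, 96.0, 55.0),
    (455.0, 81.0, 57.0),
    (520.0, 70.0, 49.0),
]
weights = (0.5, 0.3, 0.2)  # decision-maker preference over the 3 objectives

lo = [min(s[i] for s in solutions) for i in range(3)]
hi = [max(s[i] for s in solutions) for i in range(3)]

def score(s):
    # weighted sum of objectives rescaled to [0, 1]; lower is better
    return sum(w * (s[i] - lo[i]) / (hi[i] - lo[i])
               for i, w in enumerate(weights))

print(min(solutions, key=score))  # best compromise under these weights
```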
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Ordaz, Irian. "A probabilistic and multi-objective conceptual design methodology for the evaluation of thermal management systems on air-breathing hypersonic vehicles." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26478.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri N.; Committee Member: German, Brian J.; Committee Member: Osburg, Jan; Committee Member: Ruffin, Stephen M.; Committee Member: Schrage, Daniel P.. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
30

Nikolaou, Christos. "A multi-objective genetic algorithm optimisation using variable speed pumps in water distribution systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6819/.

Full text
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation may involve sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), trying to solve and optimise a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), along with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation was used: GANetXL, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which produced the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the water supply. The main intervention was to replace these pumps with new variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. In this way, great energy and cost savings were achieved, along with a reduction in the number of pump switches. The results of this part of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network. This smaller WDN had a single fixed-speed (FS) pump, and the optimisation problem was the same: minimisation of energy consumption together with minimisation of TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments over a wide variety of configurations, this time keeping the FS mode but using different tank levels, pipe diameters and emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options to explore, including a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to roam among these choices until a satisfactory result shows that a good optimisation point has been reached.
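The two objectives of this study, energy cost and total number of pump switches (TNps), are easy to compute for a given 24-hour schedule. A hedged Python sketch, with an invented schedule and tariff:

```python
# Sketch of the two objectives optimised above for a 24-hour pump schedule:
# energy cost under a time-of-use tariff and the total number of pump
# switches (TNps). Schedule, pump power and tariff values are invented.
schedule = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1,
            1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]  # pump on/off per hour
power_kw = 45.0                                   # pump draw when running
tariff = [0.08] * 7 + [0.22] * 16 + [0.08]        # EUR/kWh for each hour

energy_cost = sum(on * power_kw * price for on, price in zip(schedule, tariff))
switches = sum(a != b for a, b in zip(schedule, schedule[1:]))

print(f"energy cost = {energy_cost:.2f} EUR, pump switches = {switches}")
```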
APA, Harvard, Vancouver, ISO, and other styles
31

Binois, Mickaël. "Uncertainty quantification on pareto fronts and high-dimensional strategies in bayesian optimization, with applications in multi-objective automotive design." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0805/document.

Full text
Abstract:
This thesis deals with the multi-objective optimisation of expensive functions, leading to the construction of a Pareto front representing the set of optimal compromises. In automotive design, the evaluation budget is severely limited by the numerical simulation times of the physical phenomena considered. In this context, it is common to resort to "metamodels" (models of models) of the numerical simulators, based in particular on Gaussian processes. They make it possible to add observations sequentially while balancing local search and exploration. In addition to existing optimisation criteria such as multi-objective versions of the expected improvement criterion, we propose to estimate the position of the whole Pareto front together with a quantification of the associated uncertainty, based on conditional simulations of Gaussian processes. A second contribution revisits this problem using copulas. To handle the case of a large number of input variables, we build on the REMBO algorithm. Through a random directional draw, defined by a matrix, it makes it possible to find an optimum quickly when only a few variables are actually influential (but unknown). Several improvements are proposed, including a dedicated covariance kernel, a selection procedure for the low-dimensional domain and for the random directions, as well as an extension to the multi-objective case. Finally, an industrial crash-test application yielded significant gains in performance and in the number of computations required, and made it possible to test the R package GPareto developed during this thesis.
This dissertation deals with optimizing expensive or time-consuming black-box functions to obtain the set of all optimal compromise solutions, i.e. the Pareto front. In automotive design, the evaluation budget is severely limited by numerical simulation times of the considered physical phenomena. In this context, it is common to resort to “metamodels” (models of models) of the numerical simulators, especially using Gaussian processes. They enable sequentially adding new observations while balancing local search and exploration. Complementing existing multi-objective Expected Improvement criteria, we propose to estimate the position of the whole Pareto front along with a quantification of the associated uncertainty, from conditional simulations of Gaussian processes. A second contribution addresses this problem from a different angle, using copulas to model the multi-variate cumulative distribution function. To cope with a possibly high number of variables, we adopt the REMBO algorithm. From a randomly selected direction, defined by a matrix, it allows a fast optimization when only a few variables, unknown in advance, are actually influential. Several improvements are proposed, such as a dedicated covariance kernel, a selection procedure for the low-dimensional domain and the random directions, as well as an extension to the multi-objective setup. Finally, an industrial application in car crash-worthiness demonstrates significant benefits in terms of performance and number of simulations required. It has also been used to test the R package GPareto developed during this thesis.
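The random-embedding idea behind REMBO, summarised in this abstract, can be sketched as follows; the dimensions, the toy objective and the plain random search standing in for the Bayesian optimiser are all illustrative assumptions.

```python
# Sketch of the random-embedding idea behind REMBO, as summarised above:
# optimise in a low-dimensional space y and map into the full design space
# through a fixed random matrix A, assuming few variables really matter.
import numpy as np

D, d = 60, 2                      # full and effective dimensionality
rng = np.random.default_rng(0)
A = rng.standard_normal((D, d))   # fixed random embedding matrix

def expensive_black_box(x):
    # stand-in for a crash simulation: only 2 of the 60 inputs matter
    return (x[3] - 0.2) ** 2 + (x[17] + 0.4) ** 2

def objective(y):
    x = np.clip(A @ y, -1.0, 1.0)  # embed and project into the box [-1, 1]^D
    return expensive_black_box(x)

# crude random search in the low-dimensional space (a GP-based optimiser
# such as the one in the thesis would replace this loop)
best_y = min((rng.uniform(-2, 2, size=d) for _ in range(500)), key=objective)
print(objective(best_y))
```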
APA, Harvard, Vancouver, ISO, and other styles
32

Daskilewicz, Matthew John. "Methods for parameterizing and exploring Pareto frontiers using barycentric coordinates." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47658.

Full text
Abstract:
The research objective of this dissertation is to create and demonstrate methods for parameterizing the Pareto frontiers of continuous multi-attribute design problems using barycentric coordinates, and in doing so, to enable intuitive exploration of optimal trade spaces. This work is enabled by two observations about Pareto frontiers that have not been previously addressed in the engineering design literature. First, the observation that the mapping between non-dominated designs and Pareto efficient response vectors is a bijection almost everywhere suggests that points on the Pareto frontier can be inverted to find their corresponding design variable vectors. Second, the observation that certain common classes of Pareto frontiers are topologically equivalent to simplices suggests that a barycentric coordinate system will be more useful for parameterizing the frontier than the Cartesian coordinate systems typically used to parameterize the design and objective spaces. By defining such a coordinate system, the design problem may be reformulated from y = f(x) to (y,x) = g(p) where x is a vector of design variables, y is a vector of attributes and p is a vector of barycentric coordinates. Exploration of the design problem using p as the independent variables has the following desirable properties: 1) Every vector p corresponds to a particular Pareto efficient design, and every Pareto efficient design corresponds to a particular vector p. 2) The number of p-coordinates is equal to the number of attributes regardless of the number of design variables. 3) Each attribute y_i has a corresponding coordinate p_i such that increasing the value of p_i corresponds to a motion along the Pareto frontier that improves y_i monotonically. The primary contribution of this work is the development of three methods for forming a barycentric coordinate system on the Pareto frontier, two of which are entirely original. The first method, named "non-domination level coordinates," constructs a coordinate system based on the (k-1)-attribute non-domination levels of a discretely sampled Pareto frontier. The second method is based on a modification to an existing "normal boundary intersection" multi-objective optimizer that adaptively redistributes its search basepoints in order to sample from the entire frontier uniformly. The weights associated with each basepoint can then serve as a coordinate system on the frontier. The third method, named "Pareto simplex self-organizing maps," uses a modified self-organizing map training algorithm with a barycentric-grid node topology to iteratively conform a coordinate grid to the sampled Pareto frontier.
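A minimal sketch of sampling barycentric coordinates p and mapping them to designs follows. The linear map below is a simplification for illustration (the dissertation's methods conform the coordinate system to the actual frontier rather than assuming convexity), and the anchor designs are invented.

```python
# Sketch of exploring a simplex-like Pareto frontier with barycentric
# coordinates p: sample p on the standard simplex and map it to a convex
# combination of k anchor designs (here, placeholders for the three
# single-objective optima of a 3-attribute problem).
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[1.0, 0.0, 0.0],   # design best in attribute 1
                    [0.2, 1.0, 0.1],   # design best in attribute 2
                    [0.1, 0.3, 1.0]])  # design best in attribute 3

p = rng.dirichlet(alpha=np.ones(3))    # barycentric coordinates, sum to 1
design = p @ anchors                   # one point on the approximate frontier
print(p, design)
```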
APA, Harvard, Vancouver, ISO, and other styles
33

Deng, Qichen. "Antenna Optimization in Long-Term Evolution Networks." Thesis, KTH, Optimeringslära och systemteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119147.

Full text
Abstract:
The aim of this master's thesis is to study algorithms for automatically tuning antenna parameters to improve the performance of the radio access part of a telecommunication network and the user experience. Four different optimization algorithms (the Stepwise Minimization Algorithm, the Random Search Algorithm, the Modified Steepest Descent Algorithm and the Multi-Objective Genetic Algorithm) are applied to a model of a radio access network. The performance of all algorithms is evaluated in this thesis. Moreover, a graphical user interface developed to facilitate the antenna tuning simulations is presented in the appendix of the report.
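Of the four algorithms compared, the Random Search Algorithm is the simplest to sketch; the scoring function and parameter ranges below are placeholders, not the thesis' network model.

```python
# Minimal random-search loop of the kind evaluated above for antenna
# tuning: sample tilt/azimuth settings and keep the best network score.
import random

def network_performance(tilt_deg, azimuth_deg):
    # stand-in for the radio-access-network model used in the thesis
    return -((tilt_deg - 6.0) ** 2 + 0.05 * (azimuth_deg - 120.0) ** 2)

random.seed(0)
best = max(((random.uniform(0, 15), random.uniform(0, 360))
            for _ in range(1000)),
           key=lambda s: network_performance(*s))
print(best)  # (tilt, azimuth) of the best sampled setting
```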
APA, Harvard, Vancouver, ISO, and other styles
34

NAIK, AMIT R. "TRADEOFF ANALYSIS FOR HELICAL GEAR REDUCTION UNITS." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1129591522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Le, Trung-Dung. "Gestion de masses de données dans une fédération de nuages informatiques." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S101.

Full text
Abstract:
Cloud federations can be seen as a major advance in cloud computing, in particular in the medical domain. Indeed, sharing medical data would improve the quality of care. Federating resources would make it possible to access all information, even about a mobile person, with hospital data distributed over several sites. Moreover, it would make it possible to consider larger volumes of data on more patients and thus provide finer statistics. Medical data generally conform to the DICOM (Digital Imaging and Communications in Medicine) standard. DICOM files can be stored on different platforms, such as Amazon, Microsoft, Google Cloud, etc. The management of these files, including sharing and processing, on such platforms follows a pay-as-you-go model, according to distinct pricing models and relying on various data management systems (relational database management systems, or DBMSs, or NoSQL systems). In addition, DICOM data can be structured in rows or columns or following a hybrid (row-column) approach. Consequently, managing medical data in cloud federations raises Multi-Objective Optimization Problems (MOOPs) for (1) query processing and (2) data storage, according to user preferences such as response time, monetary cost, quality, etc. These problems are complex to address because of the variability of the environment (due to virtualization, large-scale communications, etc.). To solve these problems, we propose MIDAS (MedIcal system on clouD federAtionS), a medical system for cloud federations. First, MIDAS extends IReS, an open-source platform for managing analysis workflows in environments with different database management systems. Second, we propose an algorithm for estimating cost values in a cloud federation, called the Dynamic Multiple Linear Regression Algorithm (DREAM). This approach adapts to the variability of the environment by changing the size of the data used for training and testing, and avoids using expired information about the systems. Third, the Non-dominated Sorting Genetic Algorithm based on Grid partitioning (NSGA-G) is proposed to solve multi-objective optimization problems with large candidate spaces. NSGA-G aims to find an approximate optimal solution while improving the quality of the Pareto front. In addition to query processing, we propose to use NSGA-G to find an approximate optimal solution for the DICOM data configuration. We provide experimental evaluations to validate DREAM and NSGA-G on various test problems and datasets. DREAM is compared with other machine learning algorithms in terms of providing accurate cost estimates. The quality of NSGA-G is compared with that of other NSGA algorithms on many problems in the MOEA framework. A DICOM dataset is also used with NSGA-G to find optimal solutions. The experimental results show the quality of our solutions for estimating and optimizing multi-objective problems in a cloud federation.
Cloud federations can be seen as major progress in cloud computing, in particular in the medical domain. Indeed, sharing medical data would improve healthcare. Federating resources makes it possible to access any information, even on a mobile person, with hospital data distributed over several sites. Besides, it enables us to consider larger volumes of data on more patients and thus provide finer statistics. Medical data usually conform to the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM files can be stored on different platforms, such as Amazon, Microsoft, Google Cloud, etc. The management of the files on such platforms, including sharing and processing, follows the pay-as-you-go model, according to distinct pricing models and relying on various systems (relational database management systems, i.e. DBMSs, or NoSQL systems). In addition, DICOM data can be structured following traditional (row or column) or hybrid (row-column) data storage. As a consequence, medical data management in cloud federations raises Multi-Objective Optimization Problems (MOOPs) for (1) query processing and (2) data storage, according to user preferences related to various measures, such as response time, monetary cost, quality, etc. These problems are complex to address because of the heterogeneous database engines, the variability (due to virtualization, large-scale communications, etc.) and the high computational complexity of a cloud federation. To solve these problems, we propose a MedIcal system on clouD federAtionS (MIDAS). First, MIDAS extends IReS, an open-source platform for complex analytics workflows executed over multi-engine environments, to solve MOOPs over heterogeneous database engines. Second, we propose an algorithm for estimating cost values in a cloud environment, called the Dynamic REgression AlgorithM (DREAM). This approach adapts to the variability of the cloud environment by changing the size of the data used for the training and testing process, to avoid using expired information about the systems. Third, the Non-dominated Sorting Genetic Algorithm based on Grid partitioning (NSGA-G) is proposed to solve MOOPs whose candidate space is large. NSGA-G aims to find an approximate optimal solution while improving the quality of the optimal Pareto set of the MOOP. In addition to query processing, we propose to use NSGA-G to find an approximate optimal solution for the DICOM data configuration. We provide experimental evaluations to validate DREAM and NSGA-G on various test problems and datasets. DREAM is compared with other machine learning algorithms in providing accurate estimated costs. The quality of NSGA-G is compared to that of other NSGAs on many problems in the MOEA framework. The DICOM dataset is also experimented on with NSGA-G to find optimal solutions. Experimental results show the good quality of our solutions in estimating and optimizing multi-objective problems in a cloud federation.
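The idea behind DREAM, refitting a regression on a window of recent observations so that expired information about the systems is discarded, can be sketched as follows. The window size, feature and synthetic data are illustrative assumptions, not the algorithm from the thesis.

```python
# Sketch of the DREAM idea summarised above: estimate query cost with a
# linear regression fitted only on a recent window of observations, so
# stale measurements of the cloud systems are discarded.
import numpy as np

window = 50                       # keep only the most recent observations
history_x = np.random.default_rng(2).uniform(1, 10, size=(200, 1))  # data size (GB)
history_y = 3.0 * history_x[:, 0] + np.random.default_rng(3).normal(0, 0.5, 200)

X = np.hstack([np.ones((window, 1)), history_x[-window:]])  # bias + feature
y = history_y[-window:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                # least-squares fit

def estimated_cost(data_gb):
    return coef[0] + coef[1] * data_gb

print(estimated_cost(4.2))  # predicted cost for a hypothetical 4.2 GB query
```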
APA, Harvard, Vancouver, ISO, and other styles
36

Faccioli, Rodrigo Antonio. "Algoritmo híbrido multi-objetivo para predição de estrutura terciária de proteínas." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-15052007-153736/.

Full text
Abstract:
Many multi-objective optimization problems use evolutionary algorithms to find the best solutions, and many of these algorithms employ Pareto fronts as the strategy for obtaining such solutions. However, as reported in the literature, the front is limited to problems with up to three objectives, which can make its use unsatisfactory for problems with four or more objectives. Moreover, the proposed alternatives often eliminate the use of the evolutionary algorithms that rely on such fronts. Yet the characteristics of evolutionary algorithms qualify them for optimization problems, as has been widely reported in the literature, so they should not be discarded because of the limitation of Pareto fronts. This work therefore sought to eliminate the Pareto fronts, using fuzzy logic for this purpose and thus retaining the use of evolutionary algorithms. The problem chosen to investigate this replacement was the protein tertiary structure prediction problem, since besides being open it is highly relevant to the bioinformatics field.
Several multi-objective optimization problems use evolutionary algorithms to find the best solutions, and some of these algorithms make use of the Pareto front as the strategy to find them. However, according to the literature, the Pareto front is limited to problems with up to three objectives, which can make its use unsatisfactory in problems with four or more objectives. Moreover, many authors propose, in most cases, to abandon evolutionary algorithms because of this Pareto front limitation. Nevertheless, the characteristics of evolutionary algorithms qualify them to be employed in optimization problems, as has been widely reported in the literature, so they should not be eliminated because of the Pareto front limitation. This work therefore investigated removing the Pareto front, using fuzzy logic for this purpose and thus retaining the use of evolutionary algorithms. The problem chosen to investigate this replacement was protein tertiary structure prediction, because it is an open problem and extremely relevant to the bioinformatics area.
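One common way to aggregate objectives with fuzzy logic, in the spirit of (though not necessarily identical to) the approach above, is to score each objective with a membership function and take the fuzzy "and" (minimum). The membership shape and the four energy terms below are invented.

```python
# Sketch of replacing Pareto ranking with a fuzzy aggregation: each
# objective gets a membership degree in [0, 1] and candidates are
# compared on a single aggregated value.
def membership(value, worst, best):
    # linear degree of satisfaction: 1 at 'best', 0 at 'worst'
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def fuzzy_fitness(objectives, bounds):
    # conservative aggregation: a conformation is only as good as its
    # worst-satisfied objective (the fuzzy 'and' via min)
    return min(membership(v, w, b) for v, (w, b) in zip(objectives, bounds))

bounds = [(0.0, -120.0)] * 4            # (worst, best) per energy term
candidates = [(-80.0, -95.0, -60.0, -70.0),
              (-110.0, -40.0, -90.0, -100.0)]
print(max(candidates, key=lambda c: fuzzy_fitness(c, bounds)))
```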
APA, Harvard, Vancouver, ISO, and other styles
37

Bakhsh, Ahmed. "A Posteriori and Interactive Approaches for Decision-Making with Multiple Stochastic Objectives." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5897.

Full text
Abstract:
Computer simulation is a popular method that is often used as a decision support tool in industry to estimate the performance of systems too complex for analytical solutions. It is a tool that assists decision-makers in improving organizational performance and achieving performance objectives, in which simulated conditions can be randomly varied so that critical situations can be investigated without real-world risk. Due to the stochastic nature of many of the input process variables in simulation models, the output from simulation model experiments is random. Thus, experimental runs of computer simulations yield only estimates of the values of performance objectives, where these estimates are themselves random variables. Most real-world decisions involve the simultaneous optimization of multiple, and often conflicting, objectives. Researchers and practitioners use various approaches to solve these multi-objective problems. Many approaches that integrate simulation models with stochastic multi-objective optimization algorithms have been proposed, many of which use Pareto-based approaches that generate a finite set of compromise, or tradeoff, solutions. Nevertheless, identification of the most preferred solution can be a daunting task for the decision-maker, and it is an order of magnitude harder in the presence of stochastic objectives. To the best of this researcher's knowledge, however, there have been no focused efforts or existing work that attempt to reduce the number of tradeoff solutions while considering the stochastic nature of a set of objective functions. In this research, two approaches that consider multiple stochastic objectives when reducing the set of tradeoff solutions are designed and proposed. The first is an a posteriori approach, which uses a given set of Pareto optima as input. The second is an interactive approach that articulates decision-maker preferences during the optimization process. A detailed description of both approaches is given, and computational studies are conducted to evaluate their efficacy. The computational results show the promise of the proposed approaches, in that each effectively reduces the set of compromise solutions to a reasonably manageable size for the decision-maker. This is a significant step beyond current applications of the decision-making process in the presence of multiple stochastic objectives and should serve as an effective approach to support decision-making under uncertainty.
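One simple way to prune a stochastic trade-off set, loosely in the spirit of the a posteriori approach described above (not the author's actual procedure), is to discard a solution only when another beats it on every objective by more than a confidence half-width, so that only statistically distinguishable compromises remain.

```python
# Sketch: keep only solutions that are not "clearly" dominated once
# sampling noise is accounted for. Means and half-width are invented.
means = {"A": (11.0, 5.5), "B": (9.0, 4.0), "C": (12.0, 3.5)}  # two objectives, minimised
halfwidth = 0.6   # e.g. from confidence intervals over replicated runs

def clearly_dominates(b, a):
    return all(mb + halfwidth < ma - halfwidth
               for mb, ma in zip(means[b], means[a]))

kept = [s for s in means
        if not any(clearly_dominates(o, s) for o in means if o != s)]
print(kept)  # A is pruned by B; B and C remain as distinguishable compromises
```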
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
APA, Harvard, Vancouver, ISO, and other styles
38

Constantinou, Demetrakis. "Ant colony optimisation algorithms for solving multi-objective power-aware metrics for mobile ad hoc networks." Thesis, University of Pretoria, 2010. http://hdl.handle.net/2263/25981.

Full text
Abstract:
A mobile ad hoc network (MANET) is an infrastructure-less multi-hop network where each node communicates with other nodes directly or indirectly through intermediate nodes. Thus, all nodes in a MANET basically function as mobile routers participating in some routing protocol required for deciding and maintaining the routes. Since MANETs are infrastructure-less, self-organizing, rapidly deployable wireless networks, they are highly suitable for applications such as military tactical operations, search and rescue missions, disaster relief operations, and target tracking. Building such ad hoc networks poses a significant technical challenge because of energy constraints, specifically in relation to the application of wireless network protocols. As a result of its highly dynamic and distributed nature, the routing layer within the wireless network protocol stack presents one of the key technical challenges in MANETs. In particular, energy-efficient routing may be the most important design criterion for MANETs, since mobile nodes are powered by batteries with limited capacity and variable recharge frequency, according to application demand. In order to conserve power it is essential that a routing protocol be designed to guarantee data delivery even should most of the nodes be asleep and not forwarding packets to other nodes. Load distribution constitutes another important approach to the optimisation of active communication energy: it enables the maximisation of the network lifetime by avoiding over-utilised nodes when a route is being selected. Routing algorithms for mobile networks that attempt to optimise routes while retaining a small message overhead and maximising the network lifetime have been put forward. However, certain of these routing protocols have proved to have a negative impact on node and network lifetimes by inadvertently over-utilising the energy resources of a small set of nodes in favour of others. The conservation of power and careful sharing of the cost of routing packets would ensure an increase in both node and network lifetimes. This thesis proposes, using an ant colony optimisation (ACO) approach, to simultaneously optimise five power-aware metrics that result in energy-efficient routes and to maximise the MANET's lifetime, while taking into consideration a realistic mobility model. By using ACO algorithms a set of optimal solutions, the Pareto-optimal set, is found. This thesis proposes five algorithms to solve the multi-objective problem in the routing domain. The first two algorithms, namely the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-pheromone (EEMACOMP) algorithm and the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-heuristic (EEMACOMH) algorithm, are both adaptations of multi-objective ant colony optimisation (MOACO) algorithms based on the ant colony system (ACS) algorithm. The new algorithms are constructive, which means that in every iteration every ant builds a complete solution. In order to guide the transition from one state to another, the algorithms use pheromone and heuristic information (a sketch of this transition rule follows the summary list below).
The next two algorithms, namely the energy efficiency for a mobile network using a multi-objective, MAX-MIN ant system optimisation, multi-pheromone (EEMMASMP) algorithm and the energy efficiency for a mobile network using a multi-objective, MAX-MIN ant system optimisation, multi-heuristic (EEMMASMH) algorithm, both solve the above multi-objective problem by using an adaptation of the MAX-MIN ant system optimisation algorithm. The last algorithm implemented, namely the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-colony (EEMACOMC) algorithm, uses a multiple-colony ACO algorithm. From the experimental results the final conclusions may be summarised as follows:
  • Ant colony, multi-objective optimisation algorithms are suitable for mobile ad hoc networks. These algorithms allow for high adaptation to frequent changes in the topology of the network.
  • All five algorithms yielded substantially better results than the non-dominated sorting genetic algorithm (NSGA-II) in terms of the quality of the solution.
  • All the results prove that the EEMACOMP outperforms the other four ACO algorithms as well as the NSGA-II algorithm in terms of the number of solutions, closeness to the true Pareto front and diversity.
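A sketch of the pheromone/heuristic transition rule referred to in the abstract follows. The alpha/beta exponents and the values below are generic ACO conventions and invented numbers, not the EEMACOMP parameters.

```python
# Sketch of the pheromone/heuristic transition rule common to the ACO
# algorithms above: the probability of an ant moving from node i to
# neighbour j combines pheromone tau and a heuristic eta (e.g. residual
# energy) with the usual alpha/beta exponents.
import random

def choose_next(neighbours, tau, eta, alpha=1.0, beta=2.0):
    weights = [(tau[j] ** alpha) * (eta[j] ** beta) for j in neighbours]
    total = sum(weights)
    return random.choices(neighbours, [w / total for w in weights])[0]

random.seed(0)
tau = {1: 0.8, 2: 0.3, 3: 0.5}   # pheromone on links from the current node
eta = {1: 0.9, 2: 0.7, 3: 0.2}   # power-aware heuristic (higher is better)
print(choose_next([1, 2, 3], tau, eta))
```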

Thesis (PhD)--University of Pretoria, 2010.
Computer Science
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
39

Wilding, Paul Richard. "The Development of a Multi-Objective Optimization and Preference Tool to Improve the Design Process of Nuclear Power Plant Systems." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7515.

Full text
Abstract:
The complete design process for a new nuclear power plant concept is costly, long and complicated, and the work is generally split between several specialized groups. These design groups separately do their best to design the portion of the reactor that falls within their expertise according to the design criteria before passing the design to the subsequent design group. Ultimately, the work of each design group is combined, with significant iteration between groups striving to facilitate the integration of each of the heavily interdependent systems. Such complex interaction between experts leads to three significant problems: (1) the issues associated with knowledge management, (2) the lack of design optimization, and (3) the failure to discover the hidden interdependencies that may exist between different design parameters. Some prior work has been accomplished both in developing common frame of reference (CFR) support systems to aid in the design process and in applying optimization to nuclear system design. The purpose of this work is to use multi-objective optimization to address the second and third problems above on a small subset of reactor design scenarios. Multi-objective optimization generates several design optima in the form of a Pareto front, which portrays the optimal trade-off between design objectives. As a major part of this work, a system design optimization tool is created, namely the Optimization and Preference Tool for the Improvement of Nuclear Systems (OPTIONS). The OPTIONS tool is initially applied to several individual nuclear systems: the power conversion system (PCS) of the Integral, Inherently Safe Light Water Reactor (I²S-LWR), the Kalina cycle proposed as the PCS for a LWR, the PERCS (Passive Endothermic Reaction Cooling System), and the core loop of the Zion plant. Initial sensitivity analysis work and the application of the Non-dominated Sorting Particle Swarm Optimization (NSPSO) method provide a Pareto front of design optima for the PCS of the I²S-LWR, while bringing to light some hidden pressure interdependencies in generating steam using a flash drum. A desire to try many new PCS configurations led to the development of an original multi-objective optimization method, namely the Mixed-Integer Non-dominated Sorting Genetic Algorithm (MI-NSGA). With this method, the OPTIONS tool provides a novel and improved Pareto front with additional optimal PCS configurations. Then, the simpler NSGA method is used to optimize the Kalina cycle, the PERCS, and the Zion core loop, providing each problem with improved designs and important objective trade-off information. Finally, the OPTIONS tool uses the MI-NSGA method to optimize the integration of three systems (Zion core loop, PERCS, and Rankine-cycle PCS) while increasing efficiency, decreasing costs, and improving performance. In addition, the tool is outfitted to receive user preference input to improve the convergence of the optimization to a Pareto front.
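The non-dominated sorting step shared by the NSPSO and MI-NSGA methods above can be sketched as follows for a two-objective minimisation; the population values are invented.

```python
# Sketch of non-dominated sorting: peel a population of
# (objective1, objective2) points, both minimised, into successive
# Pareto fronts.
def sort_into_fronts(points):
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                            for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

population = [(1.0, 9.0), (2.0, 6.0), (3.0, 7.0), (4.0, 2.0), (5.0, 5.0)]
print(sort_into_fronts(population))
# first front: the mutually non-dominated designs; later fronts are worse
```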
APA, Harvard, Vancouver, ISO, and other styles
40

Sthapit, Saurav. "Computation offloading for algorithms in absence of the Cloud." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31440.

Full text
Abstract:
Mobile cloud computing is a way of delegating complex algorithms from a mobile device to the cloud in order to complete tasks quickly and save energy on the mobile device. However, the cloud may not always be available or suitable to help. For example, in a battlefield scenario the cloud may not be reachable. This work considers neighbouring devices as alternatives to the cloud for offloading computation and presents three key contributions: a comprehensive investigation of the trade-off between computation and communication, a multi-objective-optimisation-based approach to offloading, and queueing-theory-based algorithms that demonstrate the benefits of offloading to neighbours. Initially, the states of neighbouring devices are assumed to be known, and the computation offloading decision is posed as a multi-objective optimisation problem; novel Pareto-optimal solutions are proposed. The results on a simulated dataset show up to a 30% improvement in performance even when cloud computing is not available. However, information about the environment is seldom known completely. In Chapter 5, a more realistic environment is therefore considered, with delayed node-state information and partially connected sensors. The network of sensors is modelled as a network of queues (an open Jackson network). The offloading problem is posed as a minimum-cost problem and solved using linear solvers. In addition to the simulated dataset, the proposed solution is tested on a real computer vision dataset. The experiments on the random-waypoint dataset showed up to a 33% boost in performance, whereas on the real dataset a significantly higher performance gain is achieved by exploiting the temporal and spatial distribution of the targets.
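The queueing-theoretic flavour of the later chapters can be illustrated with the standard M/M/1 mean sojourn time W = 1/(mu - lambda). The device rates below are invented, and the actual models in the thesis (open Jackson networks with delayed information) are richer.

```python
# Sketch of the queueing argument above: model each device as an M/M/1
# queue and offload a task to the device with the smallest expected
# sojourn time W = 1 / (mu - lambda).
devices = {          # service rate mu, current arrival rate lam (tasks/s)
    "local":      {"mu": 4.0, "lam": 3.5},
    "neighbour1": {"mu": 5.0, "lam": 2.0},
    "neighbour2": {"mu": 3.0, "lam": 0.5},
}

def sojourn_time(name):
    q = devices[name]
    assert q["lam"] < q["mu"], "queue must be stable"
    return 1.0 / (q["mu"] - q["lam"])

target = min(devices, key=sojourn_time)
print(target, sojourn_time(target))  # neighbour1: W = 1/3 s vs 2 s locally
```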
APA, Harvard, Vancouver, ISO, and other styles
41

Essien, Mmekutmfon Sunday. "A multiobjective optimization model for optimal placement of solar collectors." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/30954.

Full text
Abstract:
The aim of this research is to formulate and solve a multi-objective optimization problem for the optimal placement of multiple rows and multiple columns of fixed flat-plate solar collectors in a field, so as to maximize the energy collected from the solar collectors and minimize the investment in terms of field and collector cost. The resulting multi-objective optimization problem is solved using genetic algorithm techniques. It is necessary to consider multiple columns of collectors, as this can yield higher amounts of energy from the collectors once costs and the maintenance or replacement of damaged parts are taken into account. The formulation of such a problem depends on several factors, which include shading of collectors, inclination of collectors, distance between the collectors, latitude of the location, and the global solar radiation (direct beam and diffuse components); see the shading-geometry sketch below. These kinds of problems arise often in nature and can be difficult to solve, but evolutionary algorithm techniques have proven effective in solving them. Optimizing the distance between the collector rows, the distance between the collector columns, and the collector inclination angle can increase the amount of energy collected from a field of solar collectors, thereby maximizing profit and improving return on investment. In this research, the multi-objective optimization problem is solved using two optimization approaches based on genetic algorithms: the weighted-sum approach, where the multi-objective problem is reduced to a single-objective optimization problem, and finding the Pareto front.
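The shading geometry behind the row-spacing variable can be illustrated with the standard no-shading pitch formula (ignoring solar azimuth); the collector length, tilt and design solar altitude below are illustrative.

```python
# Sketch of the shading geometry that drives the row-spacing variables
# above: the minimum row pitch that avoids mutual shading for a collector
# of length L tilted at beta, under a design solar altitude alpha.
import math

L = 2.0                      # collector length along the slope [m]
beta = math.radians(30.0)    # collector tilt angle
alpha = math.radians(25.0)   # worst-case solar altitude (e.g. winter noon)

pitch = L * math.cos(beta) + L * math.sin(beta) / math.tan(alpha)
print(f"minimum row pitch: {pitch:.2f} m")  # footprint plus shadow length
```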
Dissertation (MEng)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
42

Pal, Anibrata. "Multi-objective optimization in learn to pre-compute evidence fusion to obtain high quality compressed web search indexes." Universidade Federal do Amazonas, 2016. http://tede.ufam.edu.br/handle/tede/5128.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The world of information retrieval revolves around web search engines. Text search engines are one of the most important means of routing information. Web search engines index huge volumes of data and handle billions of documents. Learning-to-rank methods have been adopted in the recent past to generate high-quality answers for search engines. The ultimate goal of these systems is to provide high-quality results and, at the same time, reduce the computational time for query processing. Directly related to this, reading from smaller or more compact indexes accelerates data reads or, in other words, reduces computational time during query processing. In this thesis we study using a learning-to-rank method not only to produce a high-quality ranking of search results, but also to optimize another important aspect of search systems: the compression achieved in their indexes. We show that it is possible to achieve impressive gains in search engine index compression with virtually no loss in the final quality of results by using simple yet effective multi-objective optimization techniques in the learning process. We also used basic pruning techniques to determine the impact of pruning on index compression. In our best approach, we were able to achieve more than 40% compression of the existing index while keeping the quality of results on par with methods that disregard compression.
Web search engines index huge volumes of data, handling collections that are often composed of tens of billions of documents. Machine learning methods have been adopted to generate high-quality answers in these systems and, more recently, machine learning methods have been proposed for evidence fusion during the indexing of the databases. These methods thus serve not only to improve the quality of answers in search systems, but also to reduce query processing costs. The only method for evidence fusion at indexing time proposed in the literature focuses exclusively on learning evidence-fusion functions that yield good results during query processing, optimizing this single objective in the learning process. The present work proposes to use a multi-objective learning method aimed at optimizing, at the same time, both the quality of the answers produced and the degree of compression of the index produced by rank fusion. The results indicate that adopting a multi-objective learning process yields a significant improvement in the compression of the indexes produced, without significant loss in the final quality of the ranking produced by the system.
APA, Harvard, Vancouver, ISO, and other styles
43

Furuhashi, Takeshi, Tomohiro Yoshikawa, and Fumiya Kudo. "A Study on Analysis of Design Variables in Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine." IEEE, 2011. http://hdl.handle.net/2237/20699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Vanden, Boom Michael T. "Weak cost automata over infinite trees." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:16c6de98-545f-4d2d-acda-efc040049452.

Full text
Abstract:
Cost automata are traditional finite state automata enriched with a finite set of counters that can be manipulated on each transition. Based on the evolution of counter values, a cost automaton defines a function from the set of structures under consideration to the natural numbers extended with infinity, modulo a boundedness relation that ignores exact values but preserves boundedness properties. Historically, variants of cost automata have been used to solve problems in language theory such as the star height problem. They also have a rich theory in their own right as part of the theory of regular cost functions, which was introduced by Colcombet as an extension to the theory of regular languages. It subsumes the classical theory since a language can be associated with the function that maps every structure in the language to 0 and everything else to infinity; it is a strict extension since cost functions can count some behaviour within the input. Regular cost functions have been previously studied over finite words and trees. This thesis extends the theory to infinite trees, where classical parity automata are enriched with a finite set of counters. Weak cost automata, which have priorities {0,1} or {1,2} and an additional restriction on the structure of the transition function, are shown to be equivalent to a weak cost monadic logic. A new notion of quasi-weak cost automata is also studied and shown to arise naturally in this cost setting. Moreover, a decision procedure is given to determine whether or not functions definable using weak or quasi-weak cost automata are equivalent up to the boundedness relation, which also proves the decidability of the weak cost monadic logic over infinite trees. The semantics of these cost automata over infinite trees are defined in terms of cost-parity games which are two-player infinite games where one player seeks to minimize the counter values and satisfy the parity condition, and the other player seeks to maximize the counter values or sabotage the parity condition. The main contributions and key technical results involve proving that certain cost-parity games admit positional or finite-memory strategies. These results also help settle the decidability of some special cases of long-standing open problems in the classical theory. In particular, it is shown that it is decidable whether a regular language of infinite trees is recognizable using a nondeterministic co-Büchi automaton. Likewise, given a Büchi or co-Büchi automaton as input, it is decidable whether or not there is a weak automaton recognizing the same language.
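For reference, the boundedness relation mentioned here is usually formulated as follows in the theory of regular cost functions (a standard paraphrase, not a quotation from the thesis):

```latex
% Cost functions f, g : S -> N u {infinity} are compared up to boundedness:
% f is equivalent to g iff, on every subset of inputs, f is bounded
% exactly when g is bounded; exact values are ignored.
f \approx g
  \quad\Longleftrightarrow\quad
  \forall X \subseteq S :\;
  \Big( \sup_{x \in X} f(x) < \infty
  \iff \sup_{x \in X} g(x) < \infty \Big)
```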
APA, Harvard, Vancouver, ISO, and other styles
45

Bobrow, Kirsten Louise. "The effects of childbearing on women's body mass index, and on the risk of diabetes mellitus, or ischaemic heart disease after the menopause." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:5c86d91f-973d-4ff7-a8e7-4e8642a541df.

Full text
Abstract:
Background: Excess adiposity, diabetes mellitus, and ischaemic heart disease are common important causes of morbidity and premature mortality in postmenopausal women in the UK. A large amount of data exists on known risk factors for these conditions, and for risk factors men and women share there is little evidence to suggest sex-based differences. It has been suggested that factors unique to women (such as parity and breastfeeding) may also influence risk. The nature of the relationship between childbearing and these conditions remains to be clarified. In this thesis I explore the association between women’s childbearing histories and their adiposity, and risk of diabetes or ischaemic heart disease after the menopause, to provide evidence on the character, repeatability and public health relevance of the associations. Aim: To explore the hypothesis that childbearing (specifically parity and breastfeeding) is associated with women’s body weight and risk of excess adiposity, and also with women’s risk of diabetes mellitus and ischaemic heart disease after the menopause. Methods: Data are analysed from a large population-based cohort of middle-aged UK women recruited in 1996 to 2001 (the Million Women Study) with complete childbearing information, who had baseline anthropometry and were followed for incident diabetes or ischaemic heart disease through repeat survey questionnaires, hospital admission records, and central registry databases. Results: In a large ethnically homogeneous population of postmenopausal UK women, increasing parity was associated with an increase in BMI; however, this increase was offset in women who breastfed. The associations between parity, breastfeeding and BMI were of a similar order of magnitude to established risk factors known to be associated with BMI, for example smoking and physical activity. The associations between childbearing and women’s risk of diabetes mellitus after the menopause appear to be largely due to the effects of childbearing on maternal BMI. There is only limited evidence to suggest a direct effect of childbearing on women’s risk of diabetes after the menopause. There is statistically significant evidence of an association between childbearing and women’s risk of ischaemic heart disease after the menopause. Parity was associated with a modest increase in risk, whereas breastfeeding was associated with a small decrease in risk; however, the effects were small in comparison to known important risk factors. Conclusions: In a large population of UK women, childbearing was found to have a persistent influence on women’s mean BMI after the menopause and, through this, on postmenopausal risk of diabetes mellitus. Childbearing was also found to be modestly associated with women’s risk of ischaemic heart disease after the menopause.
APA, Harvard, Vancouver, ISO, and other styles
46

Patel, Chirag B. "A multi-objective stochastic approach to combinatorial technology space exploration." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29647.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. Dimitri N. Mavris; Committee Member: Dr. Brian J. German; Committee Member: Dr. Daniel P. Schrage; Committee Member: Dr. Frederic Villeneuve; Committee Member: Dr. Michelle R. Kirby; Committee Member: Ms. Antje Lembcke. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
47

Pilati, Francesco. "Multi-objective Models and Methods for Design and Management of Sustainable Logistic Systems." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424337.

Full text
Abstract:
Logistics is typically defined as the design and operation of the physical, managerial and informational systems needed to allow goods to overcome space and time. Traditional models and methods for logistic system design and management focus on optimizing techno-economic performance. However, logistic activities have a substantial environmental impact: final energy consumption for freight transportation has reached the alarming level of 13% of total end-use energy worldwide in recent years, equal to 40 EJ per year. Innovative techniques for logistic system design and management therefore have to guarantee the overall sustainability of these systems not only from a technical and economic perspective but also from an environmental viewpoint. To this end, multi-objective optimization is of great help: a mathematical programming technique for systematically and simultaneously optimizing a collection of often mutually conflicting objective functions. Against this background, the aim of this Ph.D. thesis is to develop, propose and validate innovative multi-objective models and methods for the design and management of sustainable logistic systems, simultaneously optimizing the system’s technical performance, economic profitability and environmental impact. The developed models manage the entire material flow from suppliers to assembly or manufacturing areas and from these to final customers, through the distribution, storage and retrieval activities among and within the logistic actors. An original decision support system is proposed to jointly minimize operating cost, carbon footprint and delivery time in the design of multi-modal, multi-level distribution networks, taking into account the most relevant features of the delivered products. Concerning warehousing systems, both design and operation problems are tackled. A multi-objective optimization model is developed to determine the warehouse building configuration, namely its length, width and height, that simultaneously minimizes the travel time, total cost and carbon footprint objective functions; the latter two are estimated through a lifecycle approach, in which all activities related to the warehouse building’s installation and operating phases are evaluated from both an economic and an environmental perspective. Warehousing system operation is analyzed through the storage assignment strategy: a time- and energy-based strategy is proposed to jointly minimize the travel time and the energy required by the material handling vehicles to store and retrieve unit loads, with vehicle motion profiles and unit load features modelled so as to estimate the objective functions accurately. Finally, the presented models and methods are tested and validated against case studies from the food and beverage industry. The results demonstrate that a substantial reduction in environmental impact is possible at a negligible worsening of technical and economic performance.
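As a rough illustration of the kind of trade-off search the thesis formalizes, the sketch below enumerates candidate warehouse configurations and keeps the Pareto-optimal ones. The three objective formulas and all constants are invented placeholders, not the validated models developed in the thesis.

```python
# Minimal sketch: enumerate warehouse (length, width, height) configurations
# and retain the non-dominated set over three assumed objective proxies.

from itertools import product

def objectives(length, width, height):
    volume = length * width * height
    travel_time = length + width + 2 * height          # assumed travel proxy
    total_cost = 50 * volume + 10 * (length * width)   # assumed lifecycle cost
    carbon = 0.8 * volume                              # assumed kgCO2e proxy
    return (travel_time, total_cost, carbon)

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [(l, w, h)
              for l, w, h in product(range(20, 101, 20),
                                     range(20, 101, 20),
                                     range(5, 16, 5))
              if l * w * h >= 50_000]                   # assumed capacity need

scored = [(c, objectives(*c)) for c in candidates]
pareto = [c for c, f in scored
          if not any(dominates(g, f) for _, g in scored if g != f)]
print(len(pareto), "non-dominated configurations")
```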
APA, Harvard, Vancouver, ISO, and other styles
48

Deshmukh, Dinar Vivek. "Design Optimization of Mechanical Components." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1028738547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Jaini, Nor. "An efficient ranking analysis in multi-criteria decision making." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/an-efficient-ranking-analysis-in-multicriteria-decision-making(c5a694d5-fd43-434f-9f9f-b86f7581b97c).html.

Full text
Abstract:
This study develops a new ranking method for multi-criteria decision-making problems with conflicting criteria. Such a problem has a set of Pareto solutions, in which improving the value of one criterion degrades the values of some of the others; thus, there is no unique solution. However, out of the many available options, the Decision Maker eventually has to choose only one solution. With this problem as the motivation, the current study develops a compromise ranking algorithm, namely a trade-off ranking method, which selects as the best solution the trade-off solution involving the least compromise compared to the other choices. The properties of the algorithm are studied in the thesis on several test cases. The proposed method is compared against several distance-based multi-criteria decision-making methods, namely TOPSIS, relative distance and VIKOR. Sensitivity analysis and an uncertainty test are carried out to examine the methods' robustness, and a critical-criteria analysis is performed to identify the most critical criterion in a multi-criteria problem. The method is considered further in a fuzzy environment, where a fuzzy trade-off ranking is developed and compared against existing fuzzy decision-making methods.
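For readers unfamiliar with the distance-based baselines named above, here is a minimal TOPSIS sketch in Python. The decision matrix, weights and benefit flags are invented for illustration; the thesis's own trade-off ranking method is not reproduced here.

```python
# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal point.

import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns).
    benefit[j] is True if criterion j is to be maximized."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = m * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best  = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
    d_worst = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness)                      # best alternative first

# Invented example: 3 alternatives, criteria = (cost, quality, service).
decision = np.array([[250., 16., 12.],
                     [200., 16., 8.],
                     [300., 32., 16.]])
order = topsis(decision, np.array([0.4, 0.3, 0.3]),
               np.array([False, True, True]))
print(order)
```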
APA, Harvard, Vancouver, ISO, and other styles
50

Lu, Ming. "Synergetic Attenuation of Stray Magnetic Field in Inductive Power Transfer." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78621.

Full text
Abstract:
Significant stray magnetic field exists around the coils when charging electric vehicles (EVs) with inductive power transfer (IPT), owing to the large air gap between the transmitter and receiver. Methods for field attenuation usually introduce extra losses and reduce the efficiency. This study focuses on the synergetic attenuation of the stray magnetic field, which is optimized simultaneously with the efficiency; the optimization is realized with a Pareto front. In this dissertation, three methods are discussed for field attenuation. The first is to tune the physical parameters of the winding, such as the inner radii, outer radii, distribution of the turns, and types of the litz wires. The second is to add metal shields around the IPT coils, in which litz wires are used as shields to reduce the shielding losses. The third is to control the phases of the winding currents, which avoids increasing the size and weight of the IPT coils. To attenuate the stray magnetic field by tuning the physical parameters, the conventional method is to sweep all the physical parameters in finite-element simulation. This takes thousands of simulations to derive the Pareto front, and it is especially time-consuming for three-dimensional simulations. This dissertation demonstrates a faster method to derive the Pareto front: the windings are replaced by lumped loops, and as long as the number of turns for each loop is known, the efficiency and magnetic field are calculated directly from the permeance matrices and current-to-field matrices. The sweep of physical parameters in finite-element simulation is replaced by a sweep of the turn numbers of the lumped loops in calculation; only tens of simulations, used to derive the matrices, are required in the entire procedure. An exemplary set of coils was built and tested. The efficiency from the matrix calculation is the same as the experimental measurement, and the difference in stray magnetic field is less than 12.5%. Metal shields attenuate the stray magnetic field effectively but generate significant losses owing to the uneven distribution of the shield currents. This dissertation uses litz wires to replace the conventional plate or ring shield: the skin effect is eliminated, so the shield currents are uniformly distributed and the losses are reduced. The litz shields are categorized into two types, the shorted litz shield and the driven litz shield, and circuit models are derived to analyze their behaviors. The lumped-loop model is applied to derive the Pareto front of efficiency versus stray magnetic field for the coils with litz shields. In an exemplary IPT system, coils without and with metal shields are optimized for the same efficiency; both simulation and experimental measurement verify that the shorted litz shield has the best performance, attenuating the stray magnetic field by 65% compared to the coils without a shield. This dissertation also introduces a method to attenuate the stray magnetic field by controlling the phases of the winding currents. The magnetic field around the coils is decomposed into a component in the axial direction and a component in the radial direction. The axial component decreases with a smaller phase difference between the windings' currents, while the radial component exhibits the opposite behavior; because the axial component is dominant around the IPT coils, decreasing the phase difference is preferred. The dual-side-controlled converter is applied for the circuit realization: bridges with active switches are used both for the inverter on the transmitter side and for the rectifier on the receiver side. The effectiveness of this method was verified in both simulation and experiment. Compared to the conventional series-series IPT with a 90° phase difference between the winding currents, the stray magnetic field was attenuated by up to 30% and 40% when the phase differences were 50° and 40°, respectively. Furthermore, an analytical method is investigated to calculate the proximity-effect resistance of planar coils with a ferrite plate, intended to work together with the fast optimization that uses the lumped-loop model. The presence of the ferrite plate complicates the calculation of the magnetic field across each turn, which is critical for deriving the proximity-effect resistance. In this dissertation, the ferrite plate is replaced by mirrored turns according to the method of images, and the magnetic fields are then obtained from Ampère's law and the Biot-Savart law. Up to 200 kHz, the difference in proximity-effect resistance between calculation and measurement is less than 15%.
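The lumped-loop idea above (matrices extracted once from a handful of finite-element runs, followed by a purely algebraic sweep of turn numbers) can be sketched as follows. All matrix entries, the loss model and the field proxy below are invented placeholders rather than values or formulas from the dissertation.

```python
# Minimal sketch: once a permeance matrix and a current-to-field matrix are
# known, efficiency and stray field for any turns allocation follow from
# matrix algebra, so the Pareto sweep happens in code, not in simulation.

import numpy as np
from itertools import product

P = np.array([[1.0, 0.6],
              [0.6, 1.0]])        # assumed permeance matrix for 2 lumped loops
F = np.array([0.8, 1.3])          # assumed current-to-field coefficients

def evaluate(turns, current=10.0):
    n = np.asarray(turns, dtype=float)
    ampere_turns = n * current
    coupling = ampere_turns @ P @ ampere_turns     # proxy for transferred power
    loss = 0.05 * np.sum(ampere_turns ** 2)        # assumed winding loss model
    efficiency = coupling / (coupling + loss)
    stray_field = float(F @ ampere_turns)          # field at one probe point
    return efficiency, stray_field

designs = [(t, evaluate(t)) for t in product(range(1, 11), repeat=2)]
# Keep the non-dominated designs: maximize efficiency, minimize stray field.
pareto = [t for t, (e, b) in designs
          if not any(e2 >= e and b2 <= b and (e2 > e or b2 < b)
                     for _, (e2, b2) in designs)]
print(len(pareto), "designs on the efficiency-vs-field Pareto front")
```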
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles