Follow this link to see other types of publications on the topic: Optimisation.

Dissertations / Theses on the topic "Optimisation"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic "Optimisation".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Moser, Irene. "Applying extremal optimisation to dynamic optimisation problems". Swinburne Research Bank, 2008. http://hdl.handle.net/1959.3/22526.

Full text
Abstract:
Thesis (Ph.D.) - Swinburne University of Technology, Faculty of Information & Communication Technologies, 2008.
[A thesis submitted in total fulfillment of the requirements for the degree of Doctor of Philosophy, Faculty of Information and Communication Technologies, Swinburne University of Technology, 2008]. Typescript. Includes bibliographical references (p. 193-201).
2

Poole, Daniel. "Efficient optimisation methods for generic aerodynamic shape optimisation". Thesis, University of Bristol, 2017. http://hdl.handle.net/1983/66904840-aed2-4797-9fb1-9d2b1d8b86ae.

Full text
Abstract:
Aerodynamic shape optimisation amalgamates three, often independent, modules: a shape control system, a flow solver and an optimisation algorithm. The cornerstone is the shape control system, which governs the mapping from the real, continuous aerodynamic shape to the discretised surface that is used within the computational domain, using a set of design variables. The overall objectives of this work are to achieve efficient optimisation and design space exploration. Hence the focus of this work is, first, shape control, and second, the optimisation algorithm. First, a novel shape control system has been developed and is presented in this thesis that gives large design space coverage using very few design variables. The method uses a singular value decomposition (SVD) approach to extract the optimal reduced set of orthogonal aerofoil shape 'modes' from an existing library of aerofoils. Performing an SVD is guaranteed to produce an optimal representation of the original library; a powerful result. It is shown that different initial libraries of aerofoils result in different modes, each suited to its own design specification, i.e. modes from transonic aerofoils are effective for transonic design. This method is shown to be highly efficient, with very few shape modes (fewer than ten, and sometimes as few as six) required to represent a wide range of aerofoils to within a typical wind tunnel tolerance. This is compared to the PARSEC method, which fails to represent any of the aerofoils tested to within the required tolerance, and the Hicks-Henne method, which requires 12 to 16 bumps. The efficiency that comes with the aerofoil modes can be fully exploited by performing global optimisation, and this is the second objective of this work. However, aerodynamic optimisation requires satisfaction of constraints. Constraint handling occurs using ad hoc techniques that are often not universally transferable between global optimisation algorithms. As such, an effective universal constraint handling framework has been developed and presented in this thesis. To demonstrate the universality of the framework, it is coupled to four different global optimisation algorithms (particle swarm, gravitational search, a hybrid of the two, and differential evolution) and used to optimise a number of analytical benchmark problems. It is compared with, and shown to outperform, other universal constraint handling techniques that use penalty and feasible direction approaches, with feasibility rates shown to be higher than 90% with the new framework, compared to 50-80% for the other frameworks. When coupling differential evolution to the new framework, on a number of benchmark engineering problems, the results are equivalent to the best results published in the literature. The development of efficient shape design variables and an effective constraint handling framework allows efficient global aerodynamic optimisation to be realised. A large number of transonic inviscid and viscous aerofoil optimisations are presented and it is demonstrated that as few as six aerofoil modes are sufficient to produce shock-free solutions for inviscid and viscous cases. Global wing planform optimisations are also considered. It is shown that when using chord variations only, two distinct minima are found that have almost equivalent drag reductions (around 25%) but at completely different locations within the design space. The addition of further planform and non-planar design variables increases the multimodality found in the design space.
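
The mode-extraction step lends itself to a compact illustration. Below is a minimal numpy sketch, assuming a library of aerofoils already resampled at shared chordwise stations; the library size, mode count and random data are illustrative stand-ins, not values from the thesis.

import numpy as np

# Library of m aerofoils, each sampled as n surface ordinates at shared
# chordwise stations (illustrative random data standing in for real sections).
rng = np.random.default_rng(0)
library = rng.normal(size=(40, 128))

# Centre the library and extract orthogonal shape "modes" via the SVD.
mean_shape = library.mean(axis=0)
U, s, Vt = np.linalg.svd(library - mean_shape, full_matrices=False)

k = 6                 # retained modes; the abstract reports 6-10 often suffice
modes = Vt[:k]        # each row is an orthonormal deformation mode

# Any aerofoil near the library is approximated as the mean shape plus a
# weighted sum of the k modes; the k weights become the design variables.
weights = (library[0] - mean_shape) @ modes.T
reconstruction = mean_shape + weights @ modes
print("max reconstruction error:", np.abs(reconstruction - library[0]).max())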
3

Tall, Abdoulaye. "Optimisation et Auto-Optimisation dans les réseaux LTE". Thesis, Avignon, 2015. http://www.theses.fr/2015AVIG0208/document.

Full text
Abstract:
The mobile network of Orange in France comprises more than 100 000 2G, 3G and 4G antennas with several frequency bands, not to mention many femto-cells for deep-indoor coverage. These numbers will continue to increase in order to address the customers' exponentially increasing need for mobile data. This is an illustration of the challenge faced by the mobile operators for operating such a complex network with low Operational Expenditures (OPEX) in order to stay competitive. This thesis is about leveraging the Self-Organizing Network (SON) concept to reduce this complexity by automating repetitive or complex tasks. We specifically propose automatic optimization algorithms for scenarios related to network densification using either small cells or Active Antenna Systems (AASs) used for Vertical Sectorization (VeSn), Virtual Sectorization (ViSn) and multilevel beamforming. Problems such as load balancing with limited-capacity backhaul and interference coordination either in time-domain (eICIC) or in frequency-domain are tackled. We also propose optimal activation algorithms for VeSn and ViSn when their activation is not always beneficial. We make use of results from stochastic approximation and convex optimization for the mathematical formulation of the problems and their solutions. We also propose a generic methodology for the coordination of multiple SON algorithms running in parallel using results from concave game theory and Linear Matrix Inequality (LMI)-constrained optimization.
4

Salazar Lechuga, Maximino. "Multi-objective optimisation using sharing in swarm optimisation algorithms". Thesis, University of Birmingham, 2009. http://etheses.bham.ac.uk//id/eprint/303/.

Full text
Abstract:
Many problems in the real world are multi-objective by nature; this means that there is often the need to satisfy a problem with more than one goal in mind. These types of problems have been studied by economists and mathematicians, among many others, and recently by computer scientists, who have been developing novel methods to solve them with the help of evolutionary computation. Particle Swarm Optimisation (PSO) is a relatively new heuristic that shares some similarities with evolutionary computation techniques and has recently been successfully modified to solve multi-objective optimisation problems. In this thesis we first review some of the most relevant work done in the area of PSO and multi-objective optimisation, and then proceed to develop a heuristic capable of solving this type of problem. This heuristic proves to be very competitive when tested over synthetic benchmark functions taken from the specialised literature and compared against state-of-the-art techniques developed to date; we then further extend this heuristic to make it more competitive. Towards the end of this work we venture into the area of dynamic multi-objective optimisation, testing the capabilities and analysing the behaviour of our technique in dynamic environments.
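
The sharing mechanism named in the title can be sketched generically: archive members in crowded regions of objective space have their fitness degraded by a niche count, and leaders are then drawn by roulette wheel. This is an illustration of fitness sharing in a multi-objective PSO archive, not the thesis's exact formulation; sigma_share, alpha and the archive values are invented.

import random

def shared_fitness(archive, sigma_share=0.1, alpha=1.0):
    # Degrade the fitness of non-dominated solutions that have many close
    # neighbours in objective space (classic niche-count sharing).
    fitnesses = []
    for a in archive:
        niche = 0.0
        for b in archive:
            d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            if d < sigma_share:
                niche += 1.0 - (d / sigma_share) ** alpha
        fitnesses.append(1.0 / niche)  # niche >= 1: a is its own neighbour
    return fitnesses

def select_leader(archive):
    # Roulette-wheel selection: sparse regions are more likely to supply
    # the swarm leaders, which spreads particles along the Pareto front.
    return random.choices(archive, weights=shared_fitness(archive), k=1)[0]

archive = [(0.0, 1.0), (0.05, 0.9), (0.5, 0.5), (1.0, 0.0)]  # objective vectors
print(select_leader(archive))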
5

Yin, Xuefei. "Application of multidisciplinary design optimisation to engine calibration optimisation". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5630.

Full text
Abstract:
Automotive engines are becoming increasingly technically complex and the associated legal emissions standards more restrictive, making the task of identifying optimum actuator settings significantly more difficult. Given these challenges, this research aims to develop a process for engine calibration optimisation by exploiting advanced mathematical methods. Validation of this work is based upon a case study describing a steady-state Diesel engine calibration problem. The calibration optimisation problem seeks an optimal combination of actuator settings that minimises fuel consumption while simultaneously meeting or exceeding the legal emissions constraints over a specified drive cycle. As a further engineering target, the engine control maps are required to be as smooth as possible. Multidisciplinary Design Optimisation (MDO) frameworks have been studied to develop the optimisation process for the steady-state Diesel engine calibration optimisation problem. Two MDO strategies are proposed for formulating and addressing this optimisation problem: All At Once (AAO) and Collaborative Optimisation. An innovative MDO formulation has been developed based on the Collaborative Optimisation application for Diesel engine calibration. From the MDO implementations, fuel consumption has been significantly improved while keeping emissions at the same level compared with the benchmark solution provided by the sponsoring company. More importantly, this research has shown the ability of MDO methodologies to manage and organise the Diesel engine calibration optimisation problem more effectively.
6

Heinonen, Annika. "Process Optimisation - An empirical study of process optimisation in Finland". Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-17746.

Full text
Abstract:
The objective of this master’s thesis is to determine methods for improving a company’s business processes without investing in new technology and whether a relatively small company can benefit from investing in technology. This study determines the meaning of process optimisation and how it should be conducted. Using existing theory and the case of a logistics company operating in Finland, this research attempts to identify hindrances and find opportunities for the company to develop their processes through process optimisation without technology. Different public bodies in Finland (such as the Finnish government and Statistics Finland) have stated that Finnish logistics requires development and have recommended new technology as a solution to the issue. However, the lack of information on the Finnish logistics business sector makes such statements by public bodies difficult to analyse. Process optimisation has been revealed to be more complex than expected. Many theories available today examine and recommend different technological solutions to execute companies’ work processes. However, a theory is needed on how process optimisation can be carried out at a company lacking technology. Process optimisation consists of process modelling and process analysis. Process modelling appears to be the most significant and crucial aspect of process optimisation. Order-to-delivery processes cannot be optimised within a company if the company does not understand the entirety of such processes. Knowledge of the process has been highlighted as being key to understanding a company’s processes at a high level. The case company in this study showed that process optimisation is possible without implementing new technology; instead, optimisation required additional human capital and a stronger focus on a company’s internal business processes. Technology-based solutions for process optimisation are tempting to implement as doing so may be believed to save time, but no automated solution is able to reveal a company’s critical information if the company does not know what it is looking for and cannot identify its problem areas. This research includes a single case study. The results indicate that whether a relatively small company could benefit from investing in technology is unclear, and the lack of research on process optimisation at Finnish companies resulted in limited findings and analysis. Several different scientific articles presented technology implementation successes and failures, but did not reveal information on the steps taken by the companies.
7

Soltun, Sindre. "Fleet Management Optimisation". Thesis, Norwegian University of Science and Technology, Department of Telematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10147.

Full text
Abstract:

This Master's Thesis is built around the concept of fleet management, focusing on designing and implementing a solution for such a purpose. Snow clearing has been chosen as the target domain for the proposed system, and it is presented as background for the system realisation. An important feature in a fleet management system is route optimisation: estimations based on real-world data can be used to construct more effective routes. This optimisation process is not straightforward though, as it belongs to a domain called Vehicle Routing Problems. These problems effectively become unsolvable for realistically sized datasets using traditional optimisation methods, and the reasons behind this and alternative solution approaches are presented in this text. Enhanced fleet monitoring is another target for a fleet management system, and this requires modern localisation technology. To be continuously aware of every unit's position, an accurate tracking mechanism is necessary. Such mechanisms are also presented, focusing mainly on the Global Positioning System (GPS). To create the actual solution, a thorough design phase was necessary. The results of this process, including a requirement specification, a design model and a test plan, are included in this report. Based on the design phase, parts of the system have been implemented, such as the graphical user interfaces and communication. The main focus of the implementation has been on the optimisation process though, and several approaches have been tested. All implementation results, including testing results based on the test plan, can be found in this report. To offer operators a clear view of the positions of the fleet's units, a part of the system will need to work as a geographical information system. This functionality has not been implemented, but its requirements are discussed as well. To add a market perspective to this thesis, a business model for a company developing the proposed solution is presented, along with a view on how the solution may affect the business model of companies that implement it into their operations. The last part of the report presents a discussion of the proposed solution, focusing on its qualities and shortcomings, how it compares to already existing solutions in the market, and what future work is necessary for the system to be completed.
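
As the abstract notes, realistically sized vehicle-routing instances defeat exact optimisation, which is why such systems fall back on heuristics. A minimal illustration of this class of method (not the algorithm implemented in the thesis) is the nearest-neighbour construction heuristic:

import math

def nearest_neighbour_route(depot, stops):
    # Greedily visit the closest unvisited stop: fast but suboptimal, so it
    # is normally only a starting point for improvement heuristics.
    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route + [depot]

stops = [(2, 3), (5, 1), (1, 7), (6, 6)]  # illustrative coordinates
print(nearest_neighbour_route((0, 0), stops))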

8

Tournois, Jane. "Optimisation de maillages". Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00451619.

Full text
Abstract:
In this thesis, a practical approach to the generation of isotropic triangular meshes is proposed. In both 2D and 3D, the objective is to mesh a given domain, which may have a complex geometry. The approach presented interleaves Delaunay refinement steps with mesh optimisation steps in order to generate graded, high-quality meshes. The user can control the characteristics of the mesh by defining criteria on the size and shape of the simplices, as well as topology and approximation criteria. Finite element methods, widely used in simulation, require graded meshes composed of well-shaped simplices. Alternatives to the usual Delaunay refinement methods are developed. The proposed mesh optimisation methods optimise the position of both interior and boundary vertices. The features of the domain boundary, and in particular sharp edges, are preserved by these methods. In 2D, the optimisation is based on Lloyd's algorithm and centroidal Voronoi tessellations (CVT). In 3D, a natural extension of Chen's optimal Delaunay triangulations (ODT), able to optimise the position of boundary vertices, is introduced. Our tetrahedral meshing algorithm is enriched with a post-processing step that significantly improves the quality of the dihedral angles of the mesh. We show that interleaving refinement and optimisation steps yields meshes of better quality, in terms of simplex angles and complexity, than those generated by known methods.
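
The 2D optimisation described above rests on Lloyd's algorithm for centroidal Voronoi tessellations. A discretised sketch on the unit square, approximating each Voronoi cell with Monte-Carlo samples, is given below; site count, sample count and iteration count are illustrative.

import numpy as np

def lloyd_step(sites, samples):
    # Assign every sample of the domain to its nearest site, then move each
    # site to the centroid of its region (one Lloyd iteration towards a CVT).
    d = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)
    return np.array([samples[nearest == i].mean(axis=0)
                     if (nearest == i).any() else sites[i]
                     for i in range(len(sites))])

rng = np.random.default_rng(1)
sites = rng.random((20, 2))        # interior vertices to be optimised
samples = rng.random((20000, 2))   # dense sampling of the unit-square domain
for _ in range(30):                # sites spread out into an even distribution
    sites = lloyd_step(sites, samples)
print(sites.round(3))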
9

Ning, Michael Zhu. "Relational combinatorial optimisation". Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284507.

Full text
10

Oliver, Kathryn E. "UMTS network optimisation". Thesis, Cardiff University, 2005. http://orca.cf.ac.uk/54561/.

Full text
Abstract:
Network operators desire effective, pragmatic solutions to instances of the cell planning problem in order to improve their quality of service, enhance network coverage and capacity capability, and ultimately increase company profits. Previous cell plans have been constructed manually but these methods do not produce the best network configuration. More reliance has since been placed on automated cell planning to produce effective solutions. The introduction of the Universal Mobile Telecommunication System (UMTS) emphasizes the need for high performance planning tools. Motivated by a discussion of the literature concerning cell planning, an existing model for Global System for Mobile Communication (GSM) is modified to take account of the requirements of UMTS networks. A suite of test cases is created using a purpose-built problem generator, including problems with a range of site and traffic distributions for rural, suburban and urban markets. Traditionally, cell planning has been seen purely as an optimisation problem, neglecting the pre-optimisation stage of network dimensioning. This thesis investigates the effect of network dimensioning as a precursor to optimisation, demonstrating the benefits of cell planning in three stages consisting of site estimation, site selection and optimisation. The first stage, site estimation, utilises previously published lower bounding techniques to provide a means of approximating the number of sites required to meet capacity targets in the uplink and downlink. Site selection compares random selection to three newly developed algorithms to make effective automatic selections of sites from a candidate set. The final optimisation phase presents a framework based on the tabu search meta-heuristic capable of optimising the dimensioned network designs with respect to the representative operational scenarios. Multiple traffic snapshot evaluations are considered in the optimisation objective function.
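
The final phase builds on the tabu search meta-heuristic, whose generic skeleton can be sketched as follows; the neighbourhood move, cost function and tenure are placeholders rather than the thesis's cell-planning formulation.

def tabu_search(initial, neighbours, cost, iters=200, tenure=10):
    # Classic tabu search: always move to the best non-tabu neighbour, even
    # uphill, keeping a short memory of recent solutions to escape local optima.
    current = best = initial
    tabu = []
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimise a one-dimensional quadratic over the integers.
print(tabu_search(17, lambda x: [x - 1, x + 1], lambda x: (x - 4) ** 2))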
11

Scott, Simon Michael. "Electronic Nose Optimisation". Thesis, Teesside University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518259.

Full text
12

Varoufakis, Y. "Optimisation and strikes". Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377086.

Full text
13

Routley, Paul Richard. "BiCMOS circuit optimisation". Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242271.

Full text
14

Arbex Valle, Cristiano. "Portfolio optimisation models". Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/10343.

Full text
Abstract:
In this thesis we consider three different problems in the domain of portfolio optimisation. The first problem we consider is that of selecting an Absolute Return Portfolio (ARP). ARPs are usually seen as financial portfolios that aim to produce a good return regardless of how the underlying market performs, but our literature review shows that there is little agreement on what constitutes an ARP. We present a clear definition via a three-stage mixed-integer zero-one program for the problem of selecting an ARP. The second problem considered is that of designing a Market Neutral Portfolio (MNP). MNPs are generally defined as financial portfolios that (ideally) exhibit performance independent from that of an underlying market, but, once again, the existing literature is very fragmented. We consider the problem of constructing a MNP as a mixed-integer non-linear program (MINLP) which minimises the absolute value of the correlation between portfolio return and underlying benchmark return. The third problem is related to Exchange-Traded Funds (ETFs). ETFs are funds traded on the open market which typically have their performance tied to a benchmark index. They are composed of a basket of assets; most attempt to reproduce the returns of an index, but a growing number try to achieve a multiple of the benchmark return, such as two times or the negative of the return. We present a detailed performance study of the current ETF market and we find, among other conclusions, constant underperformance among ETFs that aim to do more than simply track an index. We present a MINLP for the problem of selecting the basket of assets that compose an ETF, which, to the best of our knowledge, is the first in the literature. For all three models we present extensive computational results for portfolios derived from universes defined by S&P international equity indices with up to 1200 stocks. We use CPLEX to solve the ARP problem and the software package Minotaur for both our MINLPs for MNP and an ETF.
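
The MNP objective described above, minimising the absolute correlation between portfolio and benchmark returns, can be sketched with scipy once the integer trading restrictions of the full MINLP are dropped; the synthetic return series, bounds and budget constraint below are invented for illustration.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
bench = rng.normal(size=250)  # benchmark daily returns (synthetic)
assets = bench[:, None] * rng.random(8) + 0.02 * rng.normal(size=(250, 8))

def abs_corr(w):
    # |correlation| between the portfolio return series and the benchmark.
    return abs(np.corrcoef(assets @ w, bench)[0, 1])

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
res = minimize(abs_corr, np.full(8, 1 / 8), bounds=[(0, 1)] * 8, constraints=cons)
print("portfolio/benchmark correlation:", abs_corr(res.x))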
15

Grigoryev, Igor. "Mine evaluation optimisation". Thesis, Federation University Australia, 2019. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/170937.

Full text
Abstract:
The definition of a mineral resource during exploration is a fundamental part of lease evaluation, which establishes the fair market value of the entire asset being explored in the open market. Since exact prediction of grades between sampled points is not currently possible by conventional methods, predicted grades will nearly always disagree with actual grades to some degree. These errors affect the evaluation of resources, thus impacting on the characterisation of risks, financial projections and decisions about whether it is necessary to carry on with further phases or not. The knowledge about minerals below the surface, even when it is based upon extensive geophysical analysis and drilling, is often too fragmentary to indicate with assurance where to drill, how deep to drill and what can be expected. Thus, the exploration team knows only the density of the rock and the grade along the core. The purpose of this study is to improve the process of resource evaluation in the exploration stage by increasing prediction accuracy and making an alternative assessment of the spatial characteristics of gold mineralisation. There is significant industrial interest in finding alternatives which may speed up the drilling phase, identify anomalies and worthwhile targets, and help in establishing fair market value. Recent developments in nonconvex optimisation and high-dimensional statistics have led to the idea that some engineering problems, such as predicting gold variability at the exploration stage, can be solved with the application of clusterwise linear and penalised maximum likelihood regression techniques. This thesis attempts to model the distribution of the mineralisation in the underlying geology using clusterwise linear regression and convex Least Absolute Shrinkage and Selection Operator (LASSO) techniques. The two presented optimisation techniques compute predictive solutions within a domain using physical data provided directly from drillholes. The decision-support techniques attempt a useful compromise between the traditional and recently introduced methods in optimisation and regression analysis, developed to improve exploration targeting and to predict gold occurrences at previously unsampled locations.
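
The penalised-regression side of the approach can be illustrated with scikit-learn: predict grade at unsampled locations from drillhole attributes and let the L1 penalty select the informative features. The synthetic data and the alpha value are illustrative only.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))  # drillhole features (coordinates, density, ...)
y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)  # grades

# The L1 penalty shrinks irrelevant coefficients to exactly zero, selecting
# the few features that actually drive the grade variability.
model = Lasso(alpha=0.05).fit(X, y)
print("selected coefficients:", model.coef_.round(2))
print("predicted grade at a new location:", model.predict(X[:1])[0])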
16

Liu, Zhengliang. "Optimisation over the non-dominated set of a multi-objective optimisation problem". Thesis, Lancaster University, 2016. http://eprints.lancs.ac.uk/82563/.

Full text
Abstract:
In this thesis we are concerned with optimisation over the non-dominated set of a multi-objective optimisation problem. A multi-objective optimisation problem (MOP) involves multiple conflicting objective functions. The non-dominated set of this problem is of interest because it is composed of the “best” trade-offs, from which a decision maker chooses according to his preference. We assume that this selection process can be modelled by maximising a function over the non-dominated set. We present two new algorithms for the optimisation of a linear function over the non-dominated set of a multi-objective linear programme (MOLP). A primal method is developed based on a revised version of Benson’s outer approximation algorithm. A dual method derived from the dual variant of the outer approximation algorithm is proposed. Taking advantage of some special properties of the problem, the new methods are designed to achieve better computational efficiency. We compare the two new algorithms with several algorithms from the literature on a set of randomly generated instances. The results show that the new algorithms are considerably faster than the competitors. We adapt the two new methods for the determination of the nadir point of (MOLP). The nadir point is characterised by the componentwise worst values of the non-dominated points of (MOP). This point is a prerequisite for many multi-criteria decision making (MCDM) procedures. Computational experiments against another exact method for this purpose from the literature reveal that the new methods are faster than the competitor. The last section of the thesis is devoted to optimising a linear function over the non-dominated set of a convex multi-objective problem. A convex multi-objective problem (CMOP) often involves nonlinear objective functions or constraints. We extend the primal and the dual methods to solve this problem. We compare the two algorithms with several existing algorithms from the literature on a set of randomly generated instances. The results reveal that the new methods are much faster than the others.
17

Helbig, Marde. "Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation". Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/28161.

Full text
Abstract:
Most optimisation problems in everyday life are not static in nature, have multiple objectives, and have at least two objectives in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs) and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOO efficiently, functions are required that resemble real-world DMOOPs, called benchmark functions, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is a lack of standard benchmark functions and performance measures. To address this problem, an overview of the current literature is provided and shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. Guides steer the optimisation process of DVEPSO; therefore, various guide update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on the performance of DVEPSO. The investigated parameters include approaches to manage boundary constraint violations, approaches to share knowledge between the sub-swarms, and responses to changes in the environment that are applied to either the particles of the sub-swarms or the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms.
Thesis (PhD)--University of Pretoria, 2012.
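
The core of the vector evaluated scheme is one sub-swarm per objective, with each sub-swarm guided by the best particle of another sub-swarm. The sketch below shows that skeleton on a static two-objective toy problem; DVEPSO's guide-update variants, archive management and change responses are omitted, and all parameters are invented.

import random

def f1(x): return x * x             # objective of sub-swarm 0
def f2(x): return (x - 2.0) ** 2    # objective of sub-swarm 1
objectives = [f1, f2]

swarms = [[{"x": random.uniform(-5, 5), "v": 0.0, "bx": 0.0, "bf": float("inf")}
           for _ in range(20)] for _ in objectives]
gbest = [0.0, 0.0]

for _ in range(100):
    # Evaluate each sub-swarm on its own objective and track personal bests.
    for s, obj in enumerate(objectives):
        for p in swarms[s]:
            fit = obj(p["x"])
            if fit < p["bf"]:
                p["bf"], p["bx"] = fit, p["x"]
        gbest[s] = min(swarms[s], key=lambda q: q["bf"])["bx"]
    # Velocity update: the social guide comes from the *other* sub-swarm.
    for s in range(len(swarms)):
        guide = gbest[(s + 1) % len(swarms)]
        for p in swarms[s]:
            p["v"] = (0.7 * p["v"]
                      + 1.5 * random.random() * (p["bx"] - p["x"])
                      + 1.5 * random.random() * (guide - p["x"]))
            p["x"] += p["v"]

print("sub-swarm bests:", gbest)  # solutions trading off the two objectives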
18

Wikborg, Uno. "Online Meat Cutting Optimisation". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8887.

Full text
Abstract:

Nortura, Norway’s largest producer of meat, faces many challenges in its operation. One of these challenges is to decide which products to make out of each of the slaughtered animals. The meat from the animals can be made into different products, some more valuable than others. However, someone has to buy the products as well. It is therefore important to produce what the customers ask for. This thesis is about a computer system based on online optimisation which helps the meat cutters decide what to make. Two different meat cutting plants have been visited to specify how the system should work. This information has been used to develop a program which can give a recommendation for what to produce from carcasses during cutting. The system has been developed by considering both the attributes of the animals and the orders from the customers. The main focus of the thesis is how to deal with the fact that the attributes are only known for a small number of the animals, since they are measured right after slaughtering. A method has been developed to calculate what should be made from the different carcasses, and this method has been realised with both exact and heuristic algorithms.

19

Geng, Ke. "XML semantic query optimisation". Thesis, University of Auckland, 2011. http://hdl.handle.net/2292/6815.

Full text
Abstract:
XML Semantic Query Optimisation (XSQO) is a method that optimises execution of queries based on semantic constraints, which are extracted from XML documents. Currently most research into XSQO concentrates on optimisation based on structural constraints in the XML documents. Research, which optimises XML query execution based on semantic constraints, has been limited because of the flexibility of XML. In this thesis, we introduce a method, which optimises XML query execution based on the constraints on the content of XML documents. In our method, elements are analysed and classified based on the distribution of values of sub-elements. Information about the classification is extracted and represented in OWL, which is stored in the database together with the XML document. The user input XML query is evaluated and transformed to a new query, which will execute faster and return exactly the same results, based on the element classification information. There are three kinds of transformation that may be carried out in our method: Elimination, which blocks the non-result queries, Reduction, which simplifies the query conditions by removing redundant conditions, and Introduction, which reduces the search area by introducing a new query condition. Two engines are designed and built for the research. The data analysis engine is designed to analyse the XML documents and classify the specified elements. The query transformation engine evaluates the input XML queries and carries out the query transformation automatically based on the classification information. A case study has been carried out with the data analysis engine and we carried out a series of experiments with the query transformation engine. The results show that: a. XML documents can be analysed and elements can be classified using our method, and the classification results satisfy the requirement of XML query transformation. b. content based XML query transformation can improve XML query execution performance by about 20% to 30%. In this thesis, we also introduce a data generator, which is designed and built to support the research. With this generator, users can build semantic information into the XML dataset with specified structure, size and selectivity. A case study with the generator shows that the generator satisfies the requirements of content-based XSQO research.
20

Balesdent, Mathieu. "Optimisation multidisciplinaire de lanceurs". Phd thesis, Ecole centrale de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00659362.

Full text
Abstract:
Launch vehicle design is a complex multidisciplinary design optimisation (MDO) problem whose distinctive feature is a highly constrained trajectory optimisation, difficult to solve and strongly coupled with all the other disciplines involved in the design process (e.g. propulsion, aerodynamics, structure, etc.). This thesis focuses on methods for judiciously integrating trajectory optimisation within the design-variable optimisation process. A new method, called "Stage-Wise decomposition for Optimal Rocket Design" (SWORD), has been proposed. It decomposes the design process according to the different flight phases and turns the multi-stage launch vehicle optimisation problem into a problem of coordinating the optimisations of the individual stages, which is easier to solve. The SWORD method was compared with the classical MDO method (Multi Discipline Feasible) in the case of the global optimisation of a three-stage launch vehicle. The results show that SWORD improves the efficiency of the optimisation process, both in terms of the speed of searching the feasible solution space and the quality of the optimum found within a limited computation time. In order to improve the convergence speed of the method while requiring no a priori knowledge from the user concerning the initialisation and the search space, an optimisation strategy dedicated to the SWORD method has been developed.
21

Bouhaya, Lina. "Optimisation structurelle des gridshells". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00583409.

Full text
Abstract:
The term gridshell refers to a discrete shell obtained by elastically deforming a flat, continuous, bidirectional grid with no shear stiffness, which is then stiffened by a third direction of bars. Thus defined, a gridshell has interesting structural potential and can meet complex architectural requirements. Form finding for these structures has historically been carried out mainly by two methods: the inverted hanging net method and dynamic relaxation. Both methods yield a shape approximating the one proposed by the architect, derived from a flat grid and from partially or fully imposed boundary conditions. In this thesis, we are interested in generating a gridshell on a surface with imposed shape and boundary. A numerical tool based on the compass method has been developed; it makes it possible to mesh a Chebyshev net on a surface given its Cartesian equation. Another meshing tool, based on an explicit finite element computation, has also been implemented. The particularity of this technique is that it can take into account the mechanical properties of the structure and simulate the behaviour of the gridshell. Applying both methods to architecturally interesting shapes revealed the limits of what can be meshed with a Chebyshev net. The compass method was then coupled with genetic-type metaheuristic algorithms. The resulting algorithm optimises a gridshell by minimising the curvature in the bars, and hence the stresses introduced in the structure during the forming process. It has been implemented and tested on several surfaces.
22

Bukula, Nwabisa Asanda. "Optimisation of clearcoat viscosity". Thesis, Nelson Mandela Metropolitan University, 2016. http://hdl.handle.net/10948/4814.

Full text
Abstract:
Modern automobiles are painted with basecoat technology, which is either metallic, solid colour or pearlescent. This requires protection from chemicals, scratching, weathering and UV light by applying a protective top coat (clearcoat) over the basecoat. For the clearcoat to cure into a hard protective shell it undergoes an irreversible crosslinking process. This usually takes place over the first four to five hours, depending on the formulation and weather conditions. The speed of crosslinking can be enhanced by temperature. Pot life is important as it can affect the overall quality of the painted surface. If crosslinking occurs too quickly, before the clearcoat is applied onto the surface, the clearcoat cannot be used to produce a good quality finish. The "expired" mixture is thus discarded. If used, the quality of the finished product cannot be guaranteed to last, and the paintwork may have to be redone. This often means removing the underlying paint and primer along with the clearcoat film. Besides the time lost, the discarded clearcoat mixture often ends up in landfill, polluting ground water and the environment. It is thus important, from the point of view of both environmental preservation and waste management, that as much clearcoat as possible is used without being wasted. It was proven in an earlier study (BSc Hon Formulation Science Treatise, 2011) that adding eugenol to a clearcoat mixture after crosslinking had started could reduce its viscosity, which is an indicator of crosslinking progress. Crosslinking subsequently resumed at a lower rate than in traditional blends. If stored away from oxygen and high temperatures, this blend could maintain optimum viscosity indefinitely. In this follow-up study an optimum formulation was developed using D-optimal experimental design. It sought to extend the pot life, avoiding waste for spray painters while protecting the environment from pollution. The formulation that gave the desired viscosity after five hours of pot life was adopted. It was hypothesised that the optimally formulated clearcoat mixture would have a longer pot life than its traditional counterparts, and that it would perform just as well as the traditional clearcoat mixtures. To study the rate of crosslinking (disappearance of functional groups and appearance of the urethane bond), FTIR spectrometry was performed on a mixture produced from the optimised formula in comparison to a traditional mixture (the control). The rate of disappearance of functional groups was found to be lower in the eugenol mixture than in the control mixture. After six hours, eugenol was added into the control mixture, and this seemed to reduce the viscosity with the re-emergence of functional groups in the mixture. After 24 hours of crosslinking, an FTIR scan was done on the solid sample and this revealed that the eugenol mixture had crosslinked fully, with no detectable functional groups in the sample.
23

Tsang, King Hei. "Vaccine supply chain optimisation". Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/7545.

Full text
24

Tagliaferri, Francesca. "Dynamic yacht strategy optimisation". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16237.

Full text
Abstract:
Yacht races are won by good sailors racing fast boats. A good skipper takes decisions at key moments of the race based on the anticipated wind behaviour and on his position on the racing area and with respect to the competitors. His aim is generally to complete the race before all his opponents, or, when this is not possible, to perform better than some of them. In the past two decades some methods have been proposed to compute optimal strategies for a yacht race. Those strategies are aimed at minimizing the expected time needed to complete the race and are based on the assumption that the faster a yacht, the higher the number of races that it will win (and opponents that it will defeat). In a match race, however, only two yachts are competing. A skipper’s aim is therefore to complete the race before his opponent rather than completing the race in the shortest possible time. This means that being on average faster may not necessarily mean winning the majority of races. This thesis sets out to investigate the possibility of computing a sailing strategy for a match race that can defeat an opponent who is following a fixed strategy that minimises the expected time of completion of the race. The proposed method includes two novel aspects in the strategy computation: A short-term wind forecast, based on an Artificial Neural Network (ANN) model, is performed in real time during the race using the wind measurements collected on board. Depending on the relative position with respect to the opponent, decisions with different levels of risk aversion are computed. The risk attitude is modeled using Coherent Risk Measures. The proposed algorithm is implemented in a computer program and is tested by simulating match races between identical boats following progressively refined strategies. Results presented in this thesis show how the intuitive idea of taking more risk when losing and having a conservative attitude when winning is confirmed in the risk model used. The performance of ANN for short-term wind forecasting is tested both on wind speed and wind direction. It is shown that for time steps of the order of seconds and adequate computational power ANN perform better than linear models (persistence models, ARMA) and other nonlinear models (Support Vector Machines). The outcome of the simulated races confirms that maximising the probability of winning a match race does not necessarily correspond to minimising the expected time needed to complete the race.
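
The forecasting component can be illustrated with a small autoregressive neural network that predicts the next wind sample from the last few onboard measurements; scikit-learn's MLPRegressor stands in for the ANN of the thesis, and the lag count and synthetic series are invented.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(1200.0)
wind = 10 + 2 * np.sin(t / 50) + rng.normal(scale=0.3, size=t.size)  # knots

lags = 5  # predict the next sample from the previous five measurements
X = np.array([wind[i:i + lags] for i in range(wind.size - lags)])
y = wind[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-100], y[:-100])          # train on the measurements so far
print("next-step forecast:", model.predict(X[-1:])[0])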
25

Datta, Subhalakshmi. "UMTS radio network optimisation". Thesis, University of Surrey, 2010. http://epubs.surrey.ac.uk/843921/.

Full text
Abstract:
Radio network planning and optimisation include tasks such as propagation prediction, performance evaluation, and configuration optimisation. Unlike Time Division Multiple Access (TDMA) based systems, the network performance of Third Generation (3G) cellular systems cannot be based on signal predictions only. The maximum achievable path loss in a Wideband Code Division Multiple Access (WCDMA) cell is dependent on the cell load, which creates a complex trade-off between coverage and capacity. Given a user distribution, network performance calculation involves optimisation of transmit powers taking into consideration the location of each user. This mathematical problem is Non-deterministic polynomial time (NP) hard even for small and simplified instances and the solution process is time consuming. This research focuses on methods that allow fast and intuitive performance evaluation for WCDMA networks. In this work, a novel Semi-Analytical Model (SeAM) for user-capacity calculation, based on static snapshot input, is proposed. The model uses a centralised approach that allows the identification of users with the highest impact on link quality. These users can be removed from the system before going through the iterative procedure of solving the NP hard problem. Therefore, this combined power control and user removal algorithm converges to the optimum solution in a lower number of iteration steps compared to a conventional stepwise user removal scheme or a distributed system level simulation. This thesis also documents research findings in the area of network performance analysis using realistic instances, classical propagation models, and varying traffic scenarios. Performance indicators calculated on the basis of analytical methods are compared with results generated by an advanced WCDMA simulator in order to estimate the accuracy and validate assumptions related to analytical modelling of the WCDMA air-interface. Network planners are generally faced with the challenge of controlling network coverage and capacity at minimum infrastructure cost while maintaining user satisfaction. In this thesis, a configuration optimisation algorithm, based on a Simulated Annealing framework, is proposed for automatic planning of Node B locations and antenna configurations. The automatic planning software is implemented using a WCDMA simulator and SeAM, respectively, as core algorithm for network performance evaluation and cost function calculation.
26

Giagkiozis, Ioannis. "Nonconvex many-objective optimisation". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/3683/.

Full text
Abstract:
As many-objective optimisation problems become more prevalent, evolutionary algorithms that are based on Pareto dominance relations are slowly becoming less popular due to severe limitations that such an approach has for this class of problems. At the same time decomposition-based methods, which have been employed traditionally in mathematical programming, are consistently increasing in popularity. These developments have been led by recent research studies that show that decomposition-based algorithms have very good convergence properties compared to Pareto-based algorithms. Decomposition-based methods use a scalarising function to decompose a problem with multiple objectives into several single objective subproblems. The subproblems are defined with the help of weighting vectors. The location on the Pareto front to which each subproblem tends to converge strongly depends on the choice of weighting vectors and the scalarising function. Therefore, the selection of an appropriate set of weighting vectors to decompose the multi-objective problem determines the distribution of the final Pareto set approximation along the Pareto front. Currently a limiting factor in decomposition-based methods is that the distribution of Pareto optimal points cannot be directly controlled, at least not to a satisfactory degree. Generalised Decomposition is introduced in this thesis as a way to optimally solve this problem and to enable the analyst and the decision maker to define and obtain the desired distribution of Pareto optimal solutions. Furthermore, many algorithms generate a set of Pareto optimal solutions. An interesting question is whether such a set can be used to generate more solutions in specific locations of the Pareto front. Pareto Estimation - a method introduced in this thesis - answers this question quite positively. Using the Pareto Estimation method, the decision maker can request a set of solutions in a particular region on the Pareto front, and although these are not guaranteed to be generated in the exact location, it is shown that the spatial accuracy of the produced solutions is very high. Also the cost of generating these solutions is several orders of magnitude lower compared with the alternative of restarting the optimisation.
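
Decomposition of the kind discussed here turns a multi-objective problem into single-objective subproblems through a scalarising function parameterised by weighting vectors. The sketch below uses the standard weighted Chebyshev scalarisation on an invented bi-objective problem; it illustrates the decomposition idea, not the thesis's Generalised Decomposition.

import numpy as np
from scipy.optimize import minimize_scalar

def objectives(x):
    return np.array([x ** 2, (x - 2.0) ** 2])  # two conflicting objectives

z_star = np.array([0.0, 0.0])  # ideal point

def chebyshev(x, w):
    # Each weighting vector w defines one subproblem; its minimiser is a
    # Pareto-optimal point of the original multi-objective problem.
    return np.max(w * (objectives(x) - z_star))

front = []
for w1 in np.linspace(0.05, 0.95, 10):  # the spread of weights controls the
    w = np.array([w1, 1.0 - w1])        # spread of the obtained Pareto points
    res = minimize_scalar(lambda x: chebyshev(x, w), bounds=(-1, 3),
                          method="bounded")
    front.append(objectives(res.x))
print(np.round(front, 3))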
27

Uzor, Chigozirim. "Compact dynamic optimisation algorithm". Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/13056.

Full text
Abstract:
In recent years, the field of evolutionary dynamic optimisation has seen a significant increase in scientific developments and contributions, as a result of its relevance to solving academic and real-world problems. Several techniques such as hyper-mutation, hyper-learning, hyper-selection, change detection and many more have been developed specifically for solving dynamic optimisation problems. However, the complex structure of algorithms employing these techniques makes them unsuitable for real-world, real-time dynamic optimisation problems on embedded systems with limited memory. The work presented in this thesis focuses on a compact approach as an alternative to population-based optimisation algorithms, suitable for solving real-time dynamic optimisation problems. Specifically, a novel compact dynamic optimisation algorithm suitable for embedded systems with limited memory is presented. Three novel dynamic approaches that augment and enhance the evolving properties of the compact genetic algorithm in dynamic environments are introduced: 1) a change detection scheme that measures the degree of dynamic change; 2) mutation schemes whereby the mutation rate is directly linked to the detected degree of change; and 3) a change trend scheme that monitors the change pattern exhibited by the system. The novel compact dynamic optimisation algorithm was applied to two differing dynamic optimisation problems. This work evaluates the algorithm in the context of tuning a controller for a physical target system in a dynamic environment and of solving a dynamic optimisation problem using an artificial dynamic environment generator. The novel compact dynamic optimisation algorithm was compared to some existing dynamic optimisation techniques. Through a series of experiments, it was shown that maintaining diversity at a population level is more efficient than diversity at an individual level. Among the five variants of the novel compact dynamic optimisation algorithm, the third variant showed the best performance in terms of response to dynamic changes and solution quality. Furthermore, it was demonstrated that information transfer based on dynamic change patterns can effectively minimise the exploration/exploitation dilemma in a dynamic environment.
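
The compact genetic algorithm at the heart of this approach replaces the population with a single probability vector, which is what makes the memory footprint small enough for embedded targets. A minimal binary cGA on the onemax toy problem is sketched below; the parameters are illustrative.

import random

def cga(n_bits=20, pop_size=50, generations=2000):
    # The whole "population" is one probability vector p: p[i] is the
    # probability that bit i equals 1 in a sampled individual.
    p = [0.5] * n_bits
    fitness = lambda ind: sum(ind)  # onemax: maximise the number of ones
    for _ in range(generations):
        a = [int(random.random() < pi) for pi in p]
        b = [int(random.random() < pi) for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):  # shift p towards the winner of the duel
            if winner[i] != loser[i]:
                p[i] += (1.0 / pop_size) if winner[i] else -(1.0 / pop_size)
                p[i] = min(1.0, max(0.0, p[i]))
    return p

print([round(pi, 2) for pi in cga()])  # probabilities converge towards 1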
28

Bozsak, Franz. "Optimisation de stents actifs". Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00858100.

Full text
Abstract:
The use of drug-eluting stents (DES) has revolutionised the treatment of atherosclerosis. The controlled release of anti-proliferative drugs into the arterial wall has greatly reduced the rate of in-stent restenosis. However, the risk of late stent thrombosis remains a major concern for DES, partly linked to the delayed healing of the arterial wall damaged during implantation. This thesis presents a method for optimising DES design so as to inhibit restenosis without impairing healing. To quantify the performance of the different designs, a numerical model describing blood flow and drug transport in stented arteries has been developed. It takes into account the multi-layer structure of the arterial wall and the interactions of the drug with the cells. An optimisation algorithm is coupled with the model in order to identify optimal DES designs. Optimising the release time as well as the initial drug concentration in the DES coating has a significant effect on performance. When the drug used is paclitaxel, the optimal solutions release the drug at concentrations markedly lower than those of current DES, either over a few hours or over a period of one year. For sirolimus, slow release is necessary. The optimal shapes of the stent struts are always elongated, but streamlined only when the release is fast. These results partly explain the performance of various recent DES and reveal considerable potential for improving DES design relative to current commercial devices.
29

Wendemuth, Andreas. "Optimisation in neural networks". Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386749.

Full text
30

Manthilake, Inoka. "Evolutionary building layout optimisation". Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8337.

Full text
Abstract:
Space layout planning (SLP) is the organisation of functional/living spaces (spatial units, SUs) and corridors/access paths of a building satisfying requirements (e.g. accessibility, adjacency etc.) to achieve design goals (e.g. minimising unutilised space and travelling cost). Out of many ways of arranging SUs, a human designer may consider only a handful of alternatives due to resource limitations (e.g. time and effort). To facilitate this task, decision support for SLP design can be obtained using computer technology. Despite being highly combinatorial, many attempts have been made to automate SLP. However, in the majority of these, the SUs are arranged in a fixed building footprint/boundary, which may limit exploration of the entire solution space. Thus, it is aimed to develop a space layout optimisation system that allows SUs to position themselves in a building site to satisfy design goals. The objectives of the research are to: understand architectural SLP and optimisation; assess the need for automation of SLP optimisation; explore methods to formulate the SLP optimisation problem; develop a prototype system to optimise SLP based on building design guidelines; and evaluate performance for its strengths and weaknesses using case studies. As early stages of building design are found to be most effective in reducing the environmental impact and costs, it is also aimed to make provisions for integrating these aspects in SLP. To address the first three objectives, a literature review was conducted. The main finding of this was the current need for an optimisation tool for SLP. It also revealed that genetic algorithms (GAs) are widely used and show promise in optimisation. Then, a prototype space layout optimisation system (Sl-Opt) was developed using a real-valued GA and was programmed in Java. Constrained optimisation was employed where adjacency and accessibility needs were modelled as constraints, and the objective was to minimise the spread area of the layout. Following this, using an office layout with 8 SUs, Sl-Opt was evaluated for its performance. Results of the designed experiment and subsequent statistical tests showed that the selected parameters of GA operators influence optimisation collectively. Finally, using the best parameter set, strengths and weaknesses of Sl-Opt were evaluated using two case studies: a hospital layout problem with 31 SUs and a problem with 10 non-rectangular SUs. Findings revealed that, using the selected GA parameters, Sl-Opt can successfully solve small-scale problems of fewer than about 10 SUs. For larger problems, the parameters need to be altered. Case studies also revealed that the system is capable of solving problems with non-rectangular SUs with varied orientations. Sl-Opt appears to have potential as a building layout decision support tool, and in addition, integration of other aspects such as energy efficiency and cost is possible.
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Astapenko, D. "Automated system design optimisation". Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6863.

Testo completo
Abstract (sommario):
The focus of this thesis is to develop a generic approach for solving reliability design optimisation problems that is applicable to a diverse range of real engineering systems. The basic problem in optimal reliability design is to explore the means of improving system reliability within the bounds of available resources. Improving reliability reduces the likelihood of system failure, whose consequences can vary from minor inconvenience and cost to significant economic loss and personal injury. Any improvements made to the system are, however, subject to the availability of resources, which are very often limited. The objective of the design optimisation problem analysed in this thesis is to minimise system unavailability (or unreliability, if an unrepairable system is analysed) through the manipulation and assessment of all available design alterations, subject to constraints on resources and/or system performance requirements. This thesis describes a genetic algorithm-based technique developed to solve the optimisation problem. Since an explicit mathematical form cannot be formulated to evaluate the objective function, system unavailability (unreliability) is assessed using the fault tree method. Central to the optimisation algorithm are newly developed fault tree modification patterns (FTMPs). They are employed to construct a single fault tree representing all possible designs investigated, built from the initial system design together with the design choices; this tree is then altered to represent the individual designs in question during the optimisation process. Failure probabilities for specified design cases are quantified using Binary Decision Diagrams (BDDs). A computer program has been developed to automate the application of the optimisation approach to standard engineering safety systems. Its practicality is demonstrated through the consideration of two systems of increasing complexity: first a High Integrity Protection System (HIPS), followed by a Fire Water Deluge System (FWDS). The technique is then further developed and applied to problems of multi-phased mission systems; two systems are considered, an unmanned aerial vehicle (UAV) and a military vessel. The final part of the thesis continues the development process by adapting the method to design optimisation problems for multiple multi-phased mission systems, demonstrated on an advanced UAV system involving multiple multi-phased flight missions. The applications discussed show that the technique progressively developed in this thesis enables design optimisation problems to be solved for systems with different levels of complexity. A key contribution is the development of a novel generic optimisation technique, embedding the newly developed FTMPs, which is capable of optimising the reliability design of potentially any engineering system. Another key and novel contribution is the capability to analyse and provide optimal design solutions for multiple multi-phased mission systems. Keywords: optimisation, system design, multi-phased mission system, reliability, genetic algorithm, fault tree, binary decision diagram
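The quantification step can be pictured with a small sketch. The thesis evaluates designs by converting the fault tree to a BDD; for a toy tree with independent basic events, the exact top-event probability can also be obtained by direct recursion over the gates, as below. The tree structure and failure probabilities are invented for illustration.

```python
# Exact top-event probability of a small fault tree with independent basic
# events, by recursion over AND/OR gates. Real designs in the thesis are
# analysed via BDDs; this direct recursion suffices for a toy tree.

def prob(node, p):
    kind = node[0]
    if kind == "basic":
        return p[node[1]]
    child_probs = [prob(c, p) for c in node[1]]
    if kind == "and":                      # all children must fail
        out = 1.0
        for q in child_probs:
            out *= q
        return out
    if kind == "or":                       # at least one child fails
        out = 1.0
        for q in child_probs:
            out *= (1.0 - q)
        return 1.0 - out
    raise ValueError(kind)

# TOP = (valve_a AND valve_b) OR sensor   (invented example system)
tree = ("or", [("and", [("basic", "valve_a"), ("basic", "valve_b")]),
               ("basic", "sensor")])
p = {"valve_a": 0.01, "valve_b": 0.02, "sensor": 0.005}
print("system unavailability:", prob(tree, p))   # ~0.005199
```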
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Wainwright, P. "Optimisation of chromatographic procedures". Thesis, Swansea University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639321.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Brennan, Siobhan. "Fuel cell optimisation studies". Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267783.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Mohseninia, Mohsen. "Concurrent finite element optimisation". Thesis, University of Hertfordshire, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358479.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Holdorf, Lopez Rafael. "Optimisation en présence d’incertitudes". Thesis, Rouen, INSA, 2010. http://www.theses.fr/2010ISAM0009.

Testo completo
Abstract (sommario):
Optimisation is a very important tool in several domains. However, among its applications, it is hard to find examples of systems to be optimised that do not carry some level of uncertainty on their parameters. The central theme of this thesis is therefore the treatment of different aspects of optimisation under uncertainty. We begin with a brief review of the literature on this topic, which reveals a gap: the lack of methods able to characterise the probabilistic properties of the optimum point of functions that depend on random parameters. The first main contribution of this thesis is thus the development of two methods to fill this gap: one based on Monte Carlo simulation (MCS), taken as the reference result, and one based on finite-dimensional projection using the polynomial chaos expansion (PCE). The PCE-based method was validated by comparing its results with those of the MCS method. The numerical analysis shows that the PCE method is able to approximate the probability density function (PDF) of the optimal point in all the problems solved; it was also shown to approximate high-order statistical moments of the optimal point, such as the kurtosis and the skewness. The second main contribution concerns the treatment of probabilistic constraints using reliability-based design optimisation (RBDO). A new RBDO method based on safety factors is developed. The numerical examples show that the main advantage of this method is its computational cost, which is very close to that of conventional deterministic optimisation, making it possible to couple the new method with an arbitrary global optimisation algorithm.
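The MCS side of this idea fits in a few lines: draw the random parameter, solve the deterministic problem for each draw, and study the empirical distribution of the optimiser. A minimal sketch, assuming an invented quadratic-plus-quartic objective (the thesis additionally builds a PCE surrogate of this distribution):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Monte Carlo characterisation of the optimum of a function with a random
# parameter xi: x*(xi) = argmin_x f(x, xi). The objective and the
# distribution of xi are invented stand-ins.
rng = np.random.default_rng(0)

def f(x, xi):
    return (x - xi) ** 2 + 0.1 * x ** 4

xis = rng.normal(loc=1.0, scale=0.2, size=2000)
x_star = np.array([minimize_scalar(f, args=(xi,)).x for xi in xis])

print("mean of optimal point:", x_star.mean())
print("std  of optimal point:", x_star.std())
# Higher moments (skewness, kurtosis) of x* can be estimated the same way,
# and a histogram of x_star approximates its PDF.
```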
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Robert, Raymond. "Optimisation d'un ventilateur liquidien". Mémoire, Université de Sherbrooke, 2003. http://savoirs.usherbrooke.ca/handle/11143/1274.

Testo completo
Abstract (sommario):
This master's thesis is a contribution to the research project initiated within the Department of Mechanical Engineering in partnership with the Centre Hospitalier Universitaire de Sherbrooke (CHUS). This partnership gave rise to the multidisciplinary Inolivent team, bringing together research engineers, clinicians specialising in respiratory distress, and technicians. The goal of the project is to optimise the liquid ventilator for young children, continuing the master's work of Meriem Mazouzi. The first phase of the work therefore consisted of completing the experimental set-up by adding the missing systems. Once the design was finished, the control work began in order to make these systems operational. The steps preparatory to total liquid ventilation were then defined with the medical staff and finally implemented in an industrial programmable logic controller (PLC) driving the ventilator. To control the systems, models were built from mathematical equations already known in the literature. The equations were then written in Simulink, the numerical simulation environment that is an integral part of Matlab. From these models it was possible to simulate the dynamics of the systems making up the Inolivent team's total liquid ventilator and to determine optimal controller settings. The settings were validated through laboratory tests at the Université de Sherbrooke, confirming the robustness and stability of the controllers prior to the trial phases on animal models (full-term lambs). The team developed three liquid ventilator prototypes, each of which was used at least once in animal experiments to perform total liquid ventilation (TLV). During these experiments, numerous problems were encountered, prompting reflection and re-examination from both mechanical and medical standpoints; this process drove the evolution of several mechanical systems and medical methodologies. This thesis therefore presents the engineering work carried out to obtain a functional, reliable, easy-to-use and robust total liquid ventilator, enabling the Inolivent team to obtain relevant and representative results in the animal experiments conducted at the animal facility of the Faculty of Medicine of the Université de Sherbrooke.
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Patel, Deena. "Optimisation of neonatal ventilation". Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/optimisation-of-neonatal-ventilation(020793ce-af66-48de-b969-bc0d702a673f).html.

Testo completo
Abstract (sommario):
Background: Infants born prematurely or at term may suffer morbidity from ventilator-related complications. New ventilation techniques have been developed to reduce that morbidity, but they have yet to be fully evaluated.
Aim: To optimise the delivery of new techniques using physiological outcome measures.
Methods: A series of studies were undertaken, with the following objectives:
• In prematurely born infants with acute respiratory distress, to determine the optimal level of volume-targeted ventilation.
• In term and prematurely born infants, to assess the effect on the work of breathing of adding pressure support (PSV) to synchronised intermittent mandatory ventilation (SIMV) during weaning, and then to compare the efficacy of PSV with assist control ventilation (ACV) in a randomised trial.
• To perform in vitro and in vivo assessments of proportional assist ventilation (PAV).
The physiological outcome measures were the transdiaphragmatic pressure time product (PTPdi), respiratory muscle strength, thoracoabdominal asynchrony, the tension time index of the diaphragm and the assessment of asynchronous events.
Results: A volume target of 4 ml/kg, in comparison to 6 ml/kg or no volume targeting, resulted in a higher PTPdi (p < 0.001). In infants weaning from the ventilator, the PTPdi was 20% lower (p < 0.001) during SIMV with PSV than during SIMV alone. No significant difference in the duration of weaning was demonstrated between PSV and ACV. The in vitro PAV study highlighted abnormalities of the airway pressure waveform and higher than expected airway pressures during both elastic and resistive unloading.
Conclusions: Low levels of volume targeting, even within the 'physiological' range, significantly increased the work of breathing. A triggered mode supporting all of the infant's breaths was superior to one in which only a limited number of breaths were supported. When similar inflation times were used, triggered modes supporting all breaths were equally efficacious. Unloading levels affect the efficacy of PAV; these may be determined using the ventilator-calculated respiratory mechanics.
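The pressure-time product used as the main outcome is, in essence, the time integral of transdiaphragmatic pressure over inspiration, accumulated per minute. A minimal numeric sketch with a synthetic waveform (sampling rate, effort shape and breathing rate are illustrative assumptions, not study data):

```python
import numpy as np

# PTPdi sketch: integrate transdiaphragmatic pressure over inspiratory time,
# then scale by breaths per minute. The waveform is synthetic; real PTPdi
# uses oesophageal/gastric pressure recordings with baseline correction.
fs = 100.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)               # one 1-s inspiration
pdi = 12.0 * np.sin(np.pi * t)              # cmH2O, synthetic effort

ptp_per_breath = float(np.sum((pdi[1:] + pdi[:-1]) / 2) / fs)  # trapezoid rule
breaths_per_min = 40                        # assumed neonatal rate
print("PTPdi:", round(ptp_per_breath * breaths_per_min, 1), "cmH2O.s/min")
```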
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Neelakantan, Pratap. "Optimisation of CML therapy". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/28084.

Testo completo
Abstract (sommario):
Tyrosine kinase inhibitors (TKIs) have revolutionised CML therapy, and the goals of management have shifted from finding newer therapies to optimising existing treatment approaches. We have tried to optimise CML therapy by identifying poor responders early through molecular monitoring, improving adherence using self-reported adherence measures, and managing intolerance by actively changing TKIs to overcome side effects. A BCR-ABL PCR of < 10% at 3 months and < 1% at 6 months has become an accepted standard after the publication by Marin et al. We combined the two measurements and showed that the 3-month milestone identifies poor responders and is sufficient to consider changing therapy, and that an additional measurement at 6 months adds no further value. Most existing methods of determining adherence to medications are too costly to replicate on a day-to-day basis or too labour intensive. We measured adherence by four different questionnaire-based methods (the visual adherence scale, Lu's scale, the Haynes method and the DAMS scale) and correlated it with clinical responses. Adherence by all methods correlated with clinical responses, and the Haynes method, which quantifies adherence from the number of doses missed over the previous 7 days, was the best indicator of adherence. We further examined how daily routine, communication with the physician, access to the internet and patients' views on taking the medication interact with adherence to therapy; adherence was influenced by all of them. The majority of patients on TKI therapy appeared anxious and nearly half depressed; patients with a better quality of life had better adherence. We propose a model based on the four questions most significant on multivariate analysis as a possible surrogate for formal adherence measures. Intolerance affects adherence and hence outcomes. We addressed intolerance by switching TKI therapy in patients who had attained CCyR but had chronic low-grade side effects, and showed that the side effects improved and all patients had further improvement in the molecular milestones, with deepening responses.
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Luong, Vu Ngoc Duy. "Optimisation for image processing". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/24904.

Testo completo
Abstract (sommario):
The main purpose of optimisation in image processing is to compensate for missing or corrupted image data, or to find good correspondences between input images. Image data is essentially infinite-dimensional and needs to be discretised at a certain level of resolution, and most image processing methods find a suboptimal solution given the characteristics of the problem. While the general optimisation literature is vast, there does not seem to be an accepted universal method for all image problems. In this thesis, we consider three interrelated optimisation approaches that exploit the problem structures of various relaxations of three common image processing problems:
1. The first approach, to the image registration problem, is based on a nonlinear programming model. Image registration is an ill-posed problem and suffers from many undesired local optima; to remove these unwanted solutions, certain regularisers or constraints are needed. Here, prior knowledge of rigid structures in the images is included in the problem using linear and bilinear constraints. The aim is to match two images while maintaining the rigid structure of certain parts of them. A sequential quadratic programming algorithm, employing dimensional reduction, is used to solve the resulting discretised constrained optimisation problem. We show that pre-processing the constraints can reduce the problem dimensionality. Experimental results demonstrate better performance of the proposed algorithm compared to current methods.
2. The second approach is based on discrete Markov Random Fields (MRFs). MRFs have been used successfully in machine learning, artificial intelligence and image processing, including for image registration. In the discrete MRF model, the domain of the image problem is fixed (relaxed) to a certain range, so the optimal solution to the relaxed problem can be found in a predefined domain. The original discrete MRF problem is NP-hard, and relaxations are needed to obtain a suboptimal solution in polynomial time. One popular approach is the linear programming (LP) relaxation. However, the LP relaxation of the MRF (LP-MRF) is excessively high-dimensional and contains sophisticated constraints, so even one iteration of a standard LP solver (e.g. an interior-point algorithm) may take too long to terminate. The dual decomposition technique has been used to formulate a convex, nondifferentiable dual of the LP-MRF that has geometrical advantages, leading to first-order methods that take the MRF structure into account. The methods considered here for solving the dual LP-MRF are the projected subgradient method and mirror descent using nonlinear weighted distance functions. An analysis of the convergence properties of the method is provided, along with improved convergence-rate estimates. Experiments on synthetic data and an image segmentation problem show promising results.
3. The third approach employs a hierarchy of models of the problem for computing search directions. The first two approaches are specialised methods for image problems at a certain level of discretisation; as input images are infinite-dimensional, all computational methods require their discretisation at some level. High-resolution images carry more information, but they lead to very large-scale and ill-posed optimisation problems; by contrast, low-level discretisation suffers from loss of information but benefits from low computational cost. In addition, a coarser representation of a fine image problem can be treated as a relaxation of that problem, i.e. the coarse problem is less ill-conditioned, so propagating the solution of a good coarse approximation to the fine problem can improve the fine level. With the aim of utilising low-level information within the high-level process, we propose a multilevel optimisation method for the convex composite optimisation problem, which consists of minimising the sum of a smooth convex function and a simple non-smooth convex function. The method iterates between fine and coarse levels of discretisation, in the sense that the search direction is computed using information from either the gradient or a solution of the coarse model. We show that the proposed algorithm is a contraction on the optimal solution and demonstrate excellent performance in experiments on image restoration problems.
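A minimal sketch of the projected subgradient iteration mentioned in the second approach, on an invented stand-in objective: the dual LP-MRF is a constrained, nondifferentiable convex problem, and a max-of-affine function over a box reproduces that shape (the problem data, step rule and box are all illustrative assumptions):

```python
import numpy as np

# Projected subgradient method for a nondifferentiable convex objective
# f(x) = max_i (a_i . x + b_i) over a box [lo, hi]^n. Problem data are
# invented; the dual LP-MRF in the text has this same max-of-affine shape.
rng = np.random.default_rng(1)
n, m = 5, 20
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
lo, hi = -1.0, 1.0

x = np.zeros(n)
best = np.inf
for k in range(1, 501):
    vals = A @ x + b
    g = A[int(np.argmax(vals))]           # active piece gives a subgradient
    x = np.clip(x - (1.0 / np.sqrt(k)) * g, lo, hi)  # diminishing step + projection
    best = min(best, float(vals.max()))

print("best objective value:", best)
```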
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Pattison, Rachel Lesley. "Safety system design optimisation". Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/22019.

Testo completo
Abstract (sommario):
This thesis investigates the efficiency of a design optimisation scheme appropriate for systems which require a high likelihood of functioning on demand. Traditional approaches to the design of safety-critical systems follow the preliminary design, analysis, appraisal and redesign stages until what is regarded as an acceptable design is achieved. For safety systems whose failure could result in loss of life, it is imperative that the best use is made of the available resources and that a system which is optimal, not just adequate, is produced. The object of the design optimisation problem is to minimise system unavailability through manipulation of the design variables, such that the limitations placed on them by constraints are not violated. Commonly, a mathematical optimisation problem has an explicit objective function defining how the characteristic to be minimised is related to the variables. For the safety system problem, an explicit objective function cannot be formulated, and system performance is instead assessed using the fault tree method. Through the use of house events, a single fault tree is constructed to represent the failure causes of every potential design, avoiding the time-consuming task of constructing a separate fault tree for each design investigated during the optimisation procedure. Once the fault tree has been constructed for the design in question, it is converted to a BDD for analysis. A genetic algorithm is first employed to perform the system optimisation; the practicality of this approach is demonstrated initially through application to a High-Integrity Protection System (HIPS) and subsequently to a more complex Firewater Deluge System (FDS). An alternative optimisation scheme achieves the final design specification by solving a sequence of optimisation problems, each defined by assuming some form of the objective function and specifying a sub-region of the design space over which this function will be representative of the system unavailability. The thesis concludes with attention to various optimisation techniques possessing features able to address difficulties in the optimisation of safety-critical systems; specifically, consideration is given to the use of a statistically designed experiment and a logical search approach.
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Shetty, Sandeep Krishnanand. "Optimisation of neonatal ventilation". Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/optimisation-of-neonatal-ventilation(4bf50e9a-9ef5-41f9-baff-db581cf231d2).html.

Testo completo
Abstract (sommario):
Background: Survival of neonates requiring respiratory support has improved over the last two decades, but many unfortunately suffer morbidity from ventilator-related complications. Aim: To undertake a series of studies using physiological measurements as outcomes in infants with evolving or established bronchopulmonary dysplasia (BPD) to test the following hypotheses, and to carry out a national survey. Hypotheses: Proportional assist ventilation (PAV), compared to assist control ventilation (ACV), would improve oxygenation as assessed by the oxygenation index (OI). Neurally adjusted ventilatory assist (NAVA), compared to ACV, would improve oxygenation. Use of heated, humidified, high-flow nasal cannula (HHFNC) would not have increased, given the results of recent randomised trials. Continuous positive airway pressure (CPAP) would reduce the work of breathing (WOB) and thoraco-abdominal asynchrony (TAA) and improve oxygen saturation (SaO2) compared to HHFNC. Methods: Four studies were undertaken. The OI was calculated from measurement of blood gases and the level of respiratory support. A survey was undertaken of lead practitioners in all UK neonatal units. The WOB was assessed by measurement of the pressure-time product of the diaphragm (PTPdi), and TAA using respiratory inductance plethysmography (RIP).
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Joyce, Thomas. "Optimisation and Bayesian optimality". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/19564.

Testo completo
Abstract (sommario):
This doctoral thesis presents the results of work on optimisation algorithms. We first give a detailed exploration of the problems involved in comparing optimisation algorithms. In particular, we provide extensions and refinements of no free lunch results, exploring algorithms with arbitrary stopping conditions, optimisation under restricted metrics, parallel computing and free lunches, and head-to-head minimax behaviour; we also characterise no free lunch results in terms of order statistics. We then ask what really constitutes understanding of an optimisation algorithm. We argue that one central part of understanding an optimiser is knowing its Bayesian prior and cost function. We then pursue a general Bayesian framing of optimisation, and prove that this Bayesian perspective is applicable to all optimisers, and that even seemingly non-Bayesian optimisers can be understood in this way. Specifically, we prove that arbitrary optimisation algorithms can be represented as a prior and a cost function. We examine the relationship between the Kolmogorov complexity of the optimiser and the Kolmogorov complexity of its corresponding prior. We also extend our results from deterministic optimisers to stochastic and forgetful optimisers, and we show that selecting a prior uniformly at random is not equivalent to selecting an optimisation behaviour uniformly at random. Lastly, we consider how best to gain a Bayesian understanding of real optimisation algorithms. We use the developed Bayesian framework to explore the effects of some common approaches to constructing meta-heuristic optimisation algorithms, such as on-line parameter adaptation, and conclude by exploring an approach to uncovering the probabilistic beliefs of optimisers with a “shattering” method.
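A brute-force illustration of the no free lunch flavour of these results: over all functions from a small finite domain to a small codomain, any two fixed non-repeating search orders see the same multiset of value sequences, so their average performance is identical. The sizes below are chosen tiny so the enumeration is exact; this is a sketch of the classical statement, not a result from the thesis.

```python
from itertools import product

# No-free-lunch check by exhaustive enumeration: two fixed, non-repeating
# search orders over X are averaged over ALL functions f: X -> Y. Their
# mean best-found values agree exactly.
X = [0, 1, 2]                 # search space (indices)
Y = [0, 1]                    # possible objective values

order_a = [0, 1, 2]           # "algorithm" A: left to right
order_b = [2, 0, 1]           # "algorithm" B: a different fixed order
budget = 2                    # evaluations allowed

def mean_best(order):
    total, count = 0, 0
    for f in product(Y, repeat=len(X)):   # every possible function
        total += min(f[x] for x in order[:budget])
        count += 1
    return total / count

print(mean_best(order_a), mean_best(order_b))   # identical averages
```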
Gli stili APA, Harvard, Vancouver, ISO e altri
43

KIM, BYEONG-SAM. "Optimisation des structures inelastiques". Palaiseau, Ecole polytechnique, 1996. http://www.theses.fr/1996EPXX0006.

Testo completo
Abstract (sommario):
Industry must constantly overcome numerous obstacles to improve the quality of its structures while reducing production costs and reaching a numerical solution. Significant difficulties are encountered in the modelling of materials and structures; further fundamental difficulties appear in the analysis of structures; and still more considerable difficulties lie in the optimal choice of design parameters. A new approach, intelligent optimised structural design, was recently proposed at the LMS of the École polytechnique to remove most of these problems. This work was undertaken to show the possibilities of that new approach. It builds on a review of the current state of the available optimisation tools and their interaction in the structures frequently encountered in civil engineering, i.e. structures essentially formed of bars and beams and of plates and shells. Although many industrial computation codes exist, we preferred to build our own code (for structures formed solely of bars or beams and plates or shells) which, although limited, allowed us to introduce it more easily into the various solutions that we tried and evaluated for the optimisation of inelastic structures.
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Boulmier, Simon. "Optimisation globale avec LocalSolver". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM037.

Testo completo
Abstract (sommario):
LocalSolver is a mathematical programming solver. Originally designed to solve the large-scale combinatorial optimization problems found in industry, it mainly relies on local search heuristics. This pragmatic solution approach, coupled with expressive nonlinear and set-based modeling constructs, has allowed LocalSolver to establish itself as a successful commercial solver. The purpose of this thesis is to implement a complementary dual approach for the computation of lower bounds within LocalSolver. The main stake is to qualify the solutions found by the solver and potentially prove their optimality, allowing an early stop of the search. Lower bounds also have other applications, such as the detection of inconsistent problems, which is useful in the development phase, where modeling and data errors are frequent. We face three major challenges. First, the problems we address are generic and can be combinatorial, nonlinear or even nonsmooth. Second, integration into an industrial software product requires reliable, high-quality code that scales in time and memory. Third, any reformulation needs must be managed in-house, to let LocalSolver's users model their problems in the most natural way possible. The dual module added to LocalSolver starts by reformulating the given optimization problem into a mixed-integer nonlinear program (MINLP). This program is stored in a standard form that facilitates the implementation of various lower-bounding techniques, such as the generation of convex relaxations, bound-tightening techniques and presolve actions. These building blocks are then integrated into a partitioning scheme, the branch-and-reduce algorithm, which interacts with the primal modules through concurrent computing techniques. While this approach is traditional, several choices and implementation features differ from the state of the art. The operators we support and the reformulation technique we use allow us to compute lower bounds on more problems than most global optimization solvers. These solvers also mainly use linear relaxations, whereas one of our goals is to show that nonlinear relaxations can be competitive. For this purpose, we implement a custom nonlinear solver dedicated to computing lower bounds on our convex relaxations. Finally, we establish a duality result under bound constraints that allows us to improve the performance of this custom nonlinear solver and to include in it a robust inconsistency-detection method; it is also exploited to certify the validity of the lower bounds computed by LocalSolver.
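A toy branch-and-reduce loop in one dimension illustrates the bounding/branching/pruning mechanics: here the lower bounds come from a known Lipschitz constant rather than the convex relaxations LocalSolver uses, and the function and constant are invented, so this is the spirit of the algorithm only.

```python
import heapq
import math

# Toy branch-and-bound for global minimisation of f on [lo, hi].
# Lower bound on [a, b]: f(mid) - L*(b-a)/2, valid when L bounds |f'|.
f = lambda x: math.sin(3 * x) + 0.5 * x
L = 3.5                                   # |f'(x)| <= 3 + 0.5 <= 3.5

def branch_and_bound(lo, hi, tol=1e-4):
    ub = min(f(lo), f(hi))                # incumbent (upper bound)
    heap = [(f((lo + hi) / 2) - L * (hi - lo) / 2, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > ub - tol:                 # prune: cannot beat incumbent
            continue
        m = (a + b) / 2
        ub = min(ub, f(m))                # tighten incumbent
        for s, e in ((a, m), (m, b)):     # branch into two halves
            c = (s + e) / 2
            heapq.heappush(heap, (f(c) - L * (e - s) / 2, s, e))
    return ub

print("global minimum (certified within tol):", branch_and_bound(-2.0, 2.0))
```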
Gli stili APA, Harvard, Vancouver, ISO e altri
45

AMINE, ZOHRA. "Optimisation des machines electriques". Paris 11, 1992. http://www.theses.fr/1992PA112076.

Testo completo
Abstract (sommario):
After a review of the formulation of the electrical machine optimisation problem and of the various methods used in this field, the synthesis and optimisation method is presented. The thesis includes applications to machines posing different problems and calling on electromagnetic, thermal and mechanical models. The results are presented and discussed.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Desmorat, Boris. "Optimisation de structures composites". Cachan, Ecole normale supérieure, 2002. http://www.theses.fr/2002DENS0040.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Radhouane, Ridha. "Optimisation de modems VDSL". Valenciennes, 2000. https://ged.uphf.fr/nuxeo/site/esupversions/63219c61-e8b1-424c-a9aa-4c3502179c78.

Testo completo
Abstract (sommario):
This thesis was carried out in the context of DMT-VDSL digital transmission. Three topics were addressed: the fast Fourier transform (FFT), the reduction of VDSL interference in the HAM bands, and convolutional interleaving. We proposed an original implementation, the continuous-flow mixed-mode FFT (CFMM-FFT), which optimises the FFT processing memory by reducing it from three times the input size to only twice. The CFMM-FFT is parameterisable, carries no particular complexity overhead, and will be exploited by Texas Instruments in its next-generation DMT-VDSL modems (namely, FDD-DMT-VDSL). We also studied the suppression of DMT-VDSL leakage energy in the HAM amateur radio bands, where we identified a phenomenon specific to DMT and to multi-carrier modulations in general: the physical carriers of the modulation undergo rotations (physical carrier rotation, RPP) proportional to their indices and to the length of the cyclic prefix used. We exploited the RPP to optimise the dummy-tones method used to attenuate VDSL leakage in the HAM bands: we defined dummy tones with complex coefficients that guarantee attenuations very close to the -20 dB recommended by ETSI and ANSI in current TDD-DMT-VDSL modems, and below -20 dB in next-generation (FDD-DMT-VDSL) modems. The third topic addressed is the optimisation of hardware implementations of convolutional interleaving. We proposed a simple new method for an economical implementation, valid for a particular class of the parameters I (input block size) and d (interleaving depth). Used jointly with the DAVIC method, the proposed implementation doubles the range of the interleaving parameters I and d and offers more flexibility to systems exploiting interleaving.
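For orientation, a convolutional interleaver/deinterleaver pair of the classic Forney delay-line type, parameterised by I branches and depth step d, can be sketched as below. This is the generic textbook structure, not the economical hardware mapping the thesis proposes, and the parameter values are arbitrary.

```python
from collections import deque

# Forney-style convolutional (de)interleaver: branch i delays symbols by
# i*d (interleaver) or (I-1-i)*d (deinterleaver) branch visits.
class ConvInterleaver:
    def __init__(self, I, d, delays, fill=0):
        self.I, self.k = I, 0
        self.branches = [deque([fill] * (di * d)) for di in delays]

    def push(self, sym):
        br = self.branches[self.k]
        self.k = (self.k + 1) % self.I    # commutator steps to next branch
        if not br:                        # zero-delay branch passes through
            return sym
        br.append(sym)
        return br.popleft()

I, d = 4, 2
tx = ConvInterleaver(I, d, delays=range(I))              # delays 0, d, 2d, 3d
rx = ConvInterleaver(I, d, delays=range(I - 1, -1, -1))  # complementary delays

data = list(range(40))
received = [rx.push(tx.push(s)) for s in data]
lag = d * I * (I - 1)                     # total end-to-end delay in symbols
assert received[lag:] == data[:len(data) - lag]
print("round-trip OK, end-to-end delay =", lag, "symbols")
```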
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Saeed, Nagham. "Intelligent MANET optimisation system". Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5674.

Testo completo
Abstract (sommario):
In the literature, various Mobile Ad hoc NETwork (MANET) routing protocols have been proposed. Each performs best under specific context conditions, for example high mobility or less volatile topologies. In existing MANETs, degradation in routing protocol performance is consistently associated with changes in the network context, and to date no MANET routing protocol produces optimal performance under all possible conditions. The core aim of this thesis is to solve the routing problem in mobile ad hoc networks by introducing an optimisation system that is in charge of selecting the running routing protocol at all times, thereby addressing the degradation mentioned above. This optimisation system is a novel approach that copes with network performance degradation by switching to another routing protocol. The system proposed in this thesis adaptively selects the best routing protocol using an Artificial Intelligence mechanism according to the network context. MANET modelling helps in understanding network performance across different contexts and supports the optimisation system; one of the main contributions of this thesis is therefore the utilisation and comparison of various modelling techniques to create representative MANET performance models. Moreover, the proposed system uses an optimisation method to select the optimal routing protocol for the network context; to build it, different optimisation techniques were utilised and compared to identify the best one for the MANET intelligent system, which is another important contribution of this thesis. The parameters selected to describe the network context were the network size and average mobility. The proposed system then functions by varying the routing mechanism over time to keep network performance at the best level. The selected protocol was chosen because it produces a combination of higher throughput, lower delay, fewer retransmission attempts, less data drop and lower load. Validation test results indicate that the identified protocol achieves both better network performance than the other routing protocols and a minimum cost function of 4.4%. The Ad hoc On Demand Distance Vector (AODV) protocol comes second with a cost minimisation function of 27.5%, the Optimised Link State Routing (OLSR) algorithm third with 29.8%, and the Dynamic Source Routing (DSR) algorithm last with 38.3%.
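The selection step can be sketched as evaluating a per-protocol cost model on the current context (network size, mean mobility) and switching to the argmin. The models and weights below are invented placeholders, not the thesis's fitted MANET models.

```python
# Context-driven protocol selection sketch: each protocol has a performance
# model mapping (network size, mean mobility) to a scalar cost; the system
# runs whichever protocol currently minimises it. Coefficients are invented.
def cost_aodv(n, mob):  return 0.02 * n + 0.50 * mob
def cost_olsr(n, mob):  return 0.01 * n + 0.90 * mob   # proactive: mobility hurts
def cost_dsr(n, mob):   return 0.03 * n + 0.40 * mob

MODELS = {"AODV": cost_aodv, "OLSR": cost_olsr, "DSR": cost_dsr}

def select_protocol(n_nodes, mobility):
    return min(MODELS, key=lambda name: MODELS[name](n_nodes, mobility))

for ctx in [(20, 0.1), (20, 2.0), (80, 0.5)]:
    print(ctx, "->", select_protocol(*ctx))
```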
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Hagströmer, Björn. "Liquidity and portfolio optimisation". Thesis, Aston University, 2009. http://publications.aston.ac.uk/15679/.

Testo completo
Abstract (sommario):
This thesis presents research within empirical financial economics with a focus on liquidity and portfolio optimisation in the stock market. The discussion of liquidity centres on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors; a framework for treating the two topics in combination is also provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed to apply TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity co-variances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, as compared to static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and a moving-window PCA as the systematic liquidity derivation technique; systematic factors of this setting also explain cross-sectional liquidity variation well. Portfolio optimisation in the full-scale optimisation (FSO) framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by applying differential evolution.
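The moving-window PCA idea can be sketched with numpy: within each window, the first principal component of the standardised stock-liquidity panel serves as the systematic factor, and re-estimating per window lets loadings vary over time. The panel below is simulated for illustration; this is not the thesis's data or exact procedure.

```python
import numpy as np

# Moving-window PCA for a systematic liquidity factor on a simulated panel:
# each stock's liquidity loads on one common factor plus noise.
rng = np.random.default_rng(2)
T, N, W = 500, 10, 60                   # days, stocks, window length
common = rng.normal(size=T)
liq = 0.8 * common[:, None] + rng.normal(scale=0.6, size=(T, N))

factors = np.full(T, np.nan)
for t in range(W, T):
    X = liq[t - W + 1:t + 1]            # window ending at day t
    X = (X - X.mean(0)) / X.std(0)      # standardise each stock in-window
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    w = vecs[:, -1]                     # first principal component loadings
    if w.sum() < 0:                     # resolve PCA sign ambiguity
        w = -w
    factors[t] = X[-1] @ w              # factor realisation on day t

valid = ~np.isnan(factors)
print("corr(PC1 factor, true common factor):",
      round(float(np.corrcoef(factors[valid], common[valid])[0, 1]), 2))
```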
Gli stili APA, Harvard, Vancouver, ISO e altri
50

See-Toh, Yoong Chiang. "Paints supply chain optimisation". Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/7504.

Testo completo
Abstract (sommario):
In production planning for products with strongly seasonal demand, it is uneconomical to configure the supply chain for throughput equal to the demand peaks. Instead, a holistic approach to supply chain optimisation is adopted in which forward demand forecasts drive the production planning process. In this thesis, the medium-term supply chain planning components of forecasting, production planning and evaluation are addressed through studies on a paints production facility. With a large number of specialised products, family-level forecasting is adopted for its simplicity and practicality in applying forecasting techniques, coupled with its benefits for introducing new products into markets. A time-series component is incorporated into traditional clustering techniques for segmenting products into families, and the dominant cluster profiles identified serve as the seasonal component for the subsequent generation of demand profiles. In multi-purpose batch plants, production planning involves the twin decisions of batch sizing and lot sizing, often performed in series; here the campaign is optimised by embedding the batch-sizing operation within a lot-sizing model. In the Mixed Integer Linear Programming model developed, the degrees of freedom are the monthly batch sizes of each product, the integer number of batches of each product produced each month, the amount of monthly overtime working and outsourcing required, and the time-varying inventory positions across the chain. Values for these are selected to balance the trade-offs between batch costs and inventory costs as well as the overtime and outsourcing costs. The final section develops stochastic, dynamic supply chain models to predict the effect of different inventory policies, taking into account forecast accuracy as derived from the clustering. Using Monte Carlo simulations, the various supply and production decisions are assessed against process manufacturing performance indicators. These planning components are then reconfigured to derive an optimal paints supply chain.
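A miniature of this batch/lot-sizing trade-off as a MILP: integer batches per month, an inventory balance constraint and an overtime spill variable. The demands, capacities and costs are invented, and PuLP with its bundled CBC solver is assumed as the modelling layer; any MILP modeller would serve equally well.

```python
import pulp

# Miniature lot-sizing MILP in the spirit of the planning model: choose an
# integer number of batches per month to meet a seasonal forecast, balancing
# batch setup cost, inventory holding cost and overtime. Data are invented.
demand = [30, 50, 90, 140, 80, 40]        # units/month (seasonal peak)
BATCH, CAP = 20, 5                        # units per batch, regular batches/month
C_BATCH, C_HOLD, C_OT = 100, 2, 160       # per batch / per unit-month / per OT batch

m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
T = range(len(demand))
b   = [pulp.LpVariable(f"batches_{t}", 0, CAP, cat="Integer") for t in T]
ot  = [pulp.LpVariable(f"overtime_{t}", 0, 2, cat="Integer") for t in T]
inv = [pulp.LpVariable(f"inv_{t}", lowBound=0) for t in T]

m += pulp.lpSum(C_BATCH * b[t] + C_OT * ot[t] + C_HOLD * inv[t] for t in T)
for t in T:
    prev = inv[t - 1] if t > 0 else 0     # inventory balance each month
    m += prev + BATCH * (b[t] + ot[t]) - demand[t] == inv[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("batches :", [int(v.value()) for v in b])
print("overtime:", [int(v.value()) for v in ot])
print("total cost:", pulp.value(m.objective))
```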
Gli stili APA, Harvard, Vancouver, ISO e altri