Academic literature on the topic 'Global optimisation solution'


Journal articles on the topic "Global optimisation solution"

1

Khurana, M., and H. Winarto. "Development and validation of an efficient direct numerical optimisation approach for aerofoil shape design." Aeronautical Journal 114, no. 1160 (October 2010): 611–28. http://dx.doi.org/10.1017/s0001924000004097.

Abstract:
An intelligent shape optimisation architecture is developed, validated and applied to the design of high-altitude long-endurance (HALE) aerofoils. The direct numerical optimisation (DNO) approach, integrating a geometrical shape parameterisation model coupled to a validated flow solver and a population-based search algorithm, is applied in the design process. The merit of the DNO methodology is measured by computational time efficiency and the feasibility of the optimal solution. Gradient-based optimisers are not suitable for multi-modal solution topologies; thus, a novel particle swarm optimiser with adaptive mutation (AM-PSO) is developed. The effect of applying PARSEC, and a modified variant of the original function, as the shape parameterisation model on the global optimum is verified. Optimisation efficiency is addressed by mapping the solution topology for HALE aerofoil designs and by computing the sensitivity of the aerofoil shape variables on the objective function. Variables with minimal influence are identified and eliminated from shape optimisation simulations. Variable elimination has a negligible effect on the aerodynamics of the global optimum, with a significant reduction in design iterations to convergence. A novel data-mining technique is further applied to verify the accuracy of the AM-PSO solutions. The post-processing analysis of the swarm optimisation solutions indicates that a hybrid optimisation methodology, integrating global and local gradient-based search methods, yields a true optimum. The findings are consistent for single- and multi-point designs.
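The paper's AM-PSO adapts its mutation scheme in a way specific to the authors' work; as a loose, self-contained illustration of the underlying idea only, the sketch below is a generic particle swarm with a fixed-rate mutation step, applied to the multimodal Rastrigin benchmark. All parameter values and names are illustrative assumptions, not the paper's settings.

```python
import math
import random

def rastrigin(x):
    # Classic multimodal benchmark; global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def pso(f, dim=2, n_particles=30, iters=200, bounds=(-5.12, 5.12),
        w=0.7, c1=1.5, c2=1.5, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                # Mutation step: occasionally re-randomise one coordinate
                # to preserve swarm diversity (fixed rate here, not adaptive).
                if rng.random() < mutation_rate:
                    pos[i][d] = rng.uniform(lo, hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Calling `pso(rastrigin)` returns the best position and value found; the mutation step is what distinguishes this from a plain PSO.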
2

Kianifar, Mohammed Reza, and Felician Campean. "Global Optimisation of Car Front-End Geometry to Minimise Pedestrian Head Injury Levels." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2873–82. http://dx.doi.org/10.1017/dsi.2019.294.

Abstract:
The paper presents a multidisciplinary design optimisation strategy for the car front-end profile to minimise head injury criteria across pedestrian groups. A hybrid modelling strategy was used to simulate car-pedestrian impact events, combining parametric modelling of the front-car geometry with pedestrian models for the kinematics of crash impact. A space-filling response surface modelling strategy was deployed to study the head injury response, with Optimal Latin Hypercube (OLH) Design of Experiments sampling and the Kriging technique to fit response models. The study argues that optimising the front-end car geometry for each individual pedestrian model using evolutionary optimisation algorithms is not an effective global optimisation strategy, as the solutions are not acceptable for other pedestrian groups. The Collaborative Optimisation (CO) multidisciplinary design optimisation architecture is introduced instead as a global optimisation strategy, and is shown to enable simultaneous minimisation of head injury levels for all the pedestrian groups, delivering a global optimum solution that meets the safety requirements across the pedestrian groups.
3

Shankland, K., T. Griffin, A. Markvardsen, W. David, and J. van de Streek. "Rapid structure solution using global optimisation and distributed computing." Acta Crystallographica Section A Foundations of Crystallography 61, a1 (August 23, 2005): c37. http://dx.doi.org/10.1107/s0108767305098417.

4

Tunay, Mustafa, and Rahib Abiyev. "Improved Hypercube Optimisation Search Algorithm for Optimisation of High Dimensional Functions." Mathematical Problems in Engineering 2022 (April 22, 2022): 1–13. http://dx.doi.org/10.1155/2022/6872162.

Abstract:
This paper proposes a stochastic search algorithm called improved hypercube optimisation search (HOS+) to find better solutions for optimisation problems. The algorithm is an improvement of the hypercube optimisation algorithm, which comprises initialisation, displacement-shrink and searching-area modules. The proposed algorithm has a new random parameters (RP) module that uses two control parameters in order to prevent premature convergence and slow finishing, and to improve search accuracy considerably. Many optimisation problems can cause a search to become stuck in an interior local optimum; the HOS+ algorithm, with its random module, can overcome this and find the global optimal solution. A set of experiments was carried out to test the performance of the algorithm. First, the performance of the proposed algorithm was tested using low- and high-dimensional benchmark functions. The simulation results indicated good convergence and much better performance within fewer iterations. The HOS+ algorithm was then compared with other metaheuristic algorithms using the same benchmark functions on different dimensions. The comparative results indicated the superiority of the HOS+ algorithm in obtaining the best optimal value and in accelerating convergence.
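The random-parameters module itself is specific to HOS+; as a hedged, generic illustration of the idea it embodies (re-randomising the search point to escape premature convergence), here is a stochastic hill climber that performs a random jump whenever progress stalls. The test function, parameters and names are illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def multimodal(x):
    # 1-D test function with many local minima; global minimum 0 at x = 0.
    return 0.1 * x * x + 1.0 - math.cos(5.0 * x)

def search_with_random_jumps(f, bounds=(-4.0, 4.0), iters=20000,
                             patience=40, seed=4):
    rng = random.Random(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f = x, fx
    stall = 0
    for _ in range(iters):
        # Local move: small Gaussian perturbation, clipped to the bounds.
        y = min(max(x + rng.gauss(0.0, 0.1), lo), hi)
        fy = f(y)
        if fy < fx:
            x, fx, stall = y, fy, 0
        else:
            stall += 1
        if stall > patience:
            # Random jump: re-randomise the point after repeated failures,
            # so the search can leave a local optimum.
            x = rng.uniform(lo, hi)
            fx = f(x)
            stall = 0
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Without the jump, the climber typically stops in whichever local basin it starts in; with it, the best-so-far record eventually lands in the global basin.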
5

Oppong, Stephen Opoku, Benjamin Ghansah, Evans Baidoo, Wilson Osafo Apeanti, and Daniel Danso Essel. "Experimental Study of Swarm Migration Algorithms on Stochastic and Global Optimisation Problem." International Journal of Distributed Artificial Intelligence 14, no. 1 (January 2022): 1–26. http://dx.doi.org/10.4018/ijdai.296389.

Abstract:
Complex computational problems occur in our daily lives and need to be analysed effectively in order to make meaningful and informed decisions. This study performs an empirical analysis of the performance of six swarm-intelligence optimisation algorithms on nine well-known stochastic and global optimisation problems, with the aim of identifying a technique that returns an optimal output on the selected benchmarks. Extensive experiments show that the Multi-Swarm and Pigeon-Inspired optimisation algorithms outperform Particle Swarm, Firefly and Evolutionary optimisation in both convergence speed and global solution quality. The algorithms adopted in this paper give an indication of which algorithmic solution presents optimal results for a problem in terms of quality of performance, precision and efficiency.
6

Purchina, Olga, Anna Poluyan, and Dmitry Fugarov. "The algorithm development based on the immune search for solving unclear problems to detect the optical flow with minimal cost." E3S Web of Conferences 258 (2021): 06052. http://dx.doi.org/10.1051/e3sconf/202125806052.

Abstract:
The main aim of the research is the development of effective methods and algorithms, based on hybrid principles of immune-system functioning and evolutionary search, to determine a globally optimal solution to optimisation problems. Artificial immune algorithms are characterised as diverse, extremely reliable and implicitly parallel. The integration of modified evolutionary algorithms and immune algorithms is proposed for the solution of the above problem. There is no exact method for efficiently solving unclear optimisation problems in polynomial time; however, by determining close-to-optimal solutions within reasonable time, the hybrid immune algorithm (HIA) can offer multiple solutions that provide a compromise between several goals. Few studies have focused on the optimisation of more than one goal, and even fewer have explicitly considered the diversity of solutions, which plays a fundamental role in the good performance of any evolutionary calculation method.
7

Amine, Khalil. "Multiobjective Simulated Annealing: Principles and Algorithm Variants." Advances in Operations Research 2019 (May 23, 2019): 1–13. http://dx.doi.org/10.1155/2019/8134674.

Abstract:
Simulated annealing is a stochastic local search method, initially introduced for global combinatorial mono-objective optimisation problems, allowing gradual convergence to a near-optimal solution. An extended version for multiobjective optimisation constructs near-Pareto-optimal solutions by means of an archive that collects nondominated solutions while exploring the feasible domain. Although simulated annealing provides a balance between exploration and exploitation, multiobjective optimisation problems require a special design to achieve this balance, due to many factors including the number of objective functions. Accordingly, many variants of multiobjective simulated annealing have been introduced in the literature. This paper reviews the state of the art of the simulated annealing algorithm, with a focus on the multiobjective optimisation field.
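The archive mechanism described here can be made concrete in a few lines. The sketch below is a minimal multiobjective simulated annealing on a classic bi-objective test problem, using randomly weighted scalarisation for acceptance and a nondominated archive; it is a generic illustration under assumed parameters, not any specific variant from the review.

```python
import math
import random

def objectives(x):
    # Bi-objective test problem (Schaffer-style); the Pareto set is x in [0, 2].
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one.
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def mo_simulated_annealing(step=0.1, iters=5000, t0=1.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)
    fx = objectives(x)
    archive = [(x, fx)]
    t = t0
    for _ in range(iters):
        y = x + rng.gauss(0.0, step)
        fy = objectives(y)
        # Random-weight scalarisation spreads accepted points along the front.
        w = rng.random()
        ex = w * fx[0] + (1 - w) * fx[1]
        ey = w * fy[0] + (1 - w) * fy[1]
        if ey < ex or rng.random() < math.exp(-(ey - ex) / t):
            x, fx = y, fy
            # Archive update: keep the move only if nothing dominates it,
            # and drop any archived point it dominates.
            if not any(dominates(g, fy) for _, g in archive):
                archive = [(z, g) for z, g in archive if not dominates(fy, g)]
                archive.append((x, fx))
        t *= alpha
    return archive
```

The returned archive approximates the Pareto front: its members are mutually nondominated by construction.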
8

Azmi, Azralmukmin, Samila Mat Zali, Mohd Noor Abdullah, Mohammad Faridun Naim Tajuddin, and Siti Rafidah Abdul Rahim. "The performance of COR optimisation using different constraint handling strategies to solve ELD." Indonesian Journal of Electrical Engineering and Computer Science 17, no. 2 (February 1, 2020): 680. http://dx.doi.org/10.11591/ijeecs.v17.i2.pp680-688.

Abstract:
This research compares the performance of the Competitive Over Resources (COR) optimisation method using different types of constraint-handling strategies to solve the economic load dispatch (ELD) problem. Previously, most research focused on proposing various optimisation techniques using the Penalty Factor Strategy (PFS) to search for a better global optimum. The issue with the penalty factor is that it is difficult to tune the constant value that influences the algorithm's ability to find the solution. The other technique is the Feasible Solution Strategy (FSS), which relocates infeasible particles to feasible solutions and so avoids their being trapped in constraint violation. This paper investigates the performance of PFS and FSS within the COR optimisation method for solving ELD. Both strategies were tested on two standard test systems to compare their performance in terms of global solution, robustness and convergence. The simulations show that FSS performs better than PFS.
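The two constraint-handling strategies can be illustrated on a toy constrained problem: minimise x² + y² subject to x + y ≥ 1, whose optimum is 0.5 at (0.5, 0.5). The COR method itself is not reproduced here; a plain random search stands in for it, and all parameter values are illustrative assumptions.

```python
import random

def objective(p):
    x, y = p
    return x * x + y * y

def violation(p):
    x, y = p
    return max(0.0, 1.0 - (x + y))  # constraint: x + y >= 1

def penalised(p, factor):
    # Penalty Factor Strategy (PFS): add a weighted violation term.
    return objective(p) + factor * violation(p) ** 2

def repaired(p):
    # Feasible Solution Strategy (FSS): project infeasible points
    # onto the constraint boundary x + y = 1.
    x, y = p
    v = 1.0 - (x + y)
    if v > 0.0:
        x, y = x + v / 2.0, y + v / 2.0
    return (x, y)

def random_search(score, repair=None, iters=4000, seed=3):
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iters):
        p = (rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
        if repair is not None:
            p = repair(p)
        s = score(p)
        if s < best_score:
            best, best_score = p, s
    return best, best_score

pfs_point, pfs_score = random_search(lambda p: penalised(p, 100.0))
fss_point, fss_score = random_search(objective, repair=repaired)
```

FSS guarantees feasibility of every candidate by construction, while PFS only discourages violation and depends on tuning `factor`, which mirrors the comparison made in the paper.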
9

Ali, Ahmed F., and Mohamed A. Tawhid. "Direct Gravitational Search Algorithm for Global Optimisation Problems." East Asian Journal on Applied Mathematics 6, no. 3 (July 20, 2016): 290–313. http://dx.doi.org/10.4208/eajam.030915.210416a.

Abstract:
A gravitational search algorithm (GSA) is a meta-heuristic modelled on the Newtonian law of gravity and mass interaction. Here we propose a new hybrid algorithm called the Direct Gravitational Search Algorithm (DGSA), which combines a GSA that can perform wide exploration and deep exploitation with the Nelder-Mead method, a promising direct method capable of an intensified search. The main drawback of a meta-heuristic algorithm is slow convergence; in our DGSA, the standard GSA is run for a number of iterations before the best solution obtained is passed to the Nelder-Mead method, which refines it and avoids running iterations that provide negligible further improvement. We test the DGSA on 7 benchmark integer functions and 10 benchmark minimax functions, comparing its performance against 9 other algorithms; the numerical results show that the optimal or near-optimal solution is obtained faster.
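The hybrid pattern, a broad stochastic global phase handing its best point to a local refiner, can be sketched compactly. In this illustration uniform random sampling stands in for the GSA and a derivative-free step-halving descent stands in for Nelder-Mead; the double-well test function and all parameters are illustrative assumptions, not the paper's setup.

```python
import random

def double_well(x):
    # Two basins: local minimum near x = +1, global minimum near x = -1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def hybrid_minimise(f, bounds=(-2.0, 2.0), n_samples=50, seed=2):
    rng = random.Random(seed)
    lo, hi = bounds
    # Global phase (stands in for the population-based search):
    # coarse random sampling picks the most promising basin.
    x = min((rng.uniform(lo, hi) for _ in range(n_samples)), key=f)
    # Local phase (stands in for Nelder-Mead): step-halving descent
    # refines the incumbent without any gradient information.
    step = 0.25
    fx = f(x)
    while step > 1e-7:
        improved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx
```

Run alone, the local phase would simply descend into whichever basin it starts in; the cheap global phase is what steers it toward the global minimum near x = -1.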
10

Makki, Mohammed, Milad Showkatbakhsh, Aiman Tabony, and Michael Weinstock. "Evolutionary algorithms for generating urban morphology: Variations and multiple objectives." International Journal of Architectural Computing 17, no. 1 (May 29, 2018): 5–35. http://dx.doi.org/10.1177/1478077118777236.

Abstract:
Morphological variation of urban tissues that evolve through the optimisation of multiple conflicting objectives benefits significantly from robust metaheuristic search processes, which provide search and optimisation mechanisms for design problems that have no clear single optimal solution and whose solution search space is too large for a 'brute-force' manual approach. As such, and within the context of the experiments presented in this article, rapidly changing environmental, climatic and demographic global conditions necessitate the use of stochastic search processes for generating design solutions that optimise multiple conflicting objectives by means of controlled and directed morphological variation within the urban fabric.

Dissertations / Theses on the topic "Global optimisation solution"

1

Zidani, Hafid. "Représentation de solution en optimisation continue, multi-objectif et applications." PhD thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00939980.

Abstract:
The main objective of this thesis is the development of new global algorithms for solving mono- and multi-objective optimisation problems, based on representation formulas whose principal task is to generate initial points lying in a region close to the global minimum. In this context, a new approach called RFNM is proposed and tested on several nonlinear, nondifferentiable and multimodal functions. An extension to the infinite-dimensional case is also established, with a procedure for seeking the global minimum. In addition, several mechanical design problems of a stochastic nature are considered and solved using this approach, together with an improvement of the NNC multi-objective method. Finally, a contribution to multi-objective optimisation through a new approach is proposed; it generates a sufficient number of points to represent the Pareto-optimal solution.
2

Lazare, Arnaud. "Global optimization of polynomial programs with mixed-integer variables." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY011.

Abstract:
In this thesis, we are interested in the study of polynomial programs, that is, optimization problems in which the objective function and/or the constraints are expressed by multivariate polynomials. These problems have many practical applications and are currently actively studied. Different methods can be used to find either a global or a heuristic solution, using, for instance, positive semidefinite relaxations as in the "moment/sum of squares" method; but these problems remain very difficult, and only small instances can be addressed. In the quadratic case, an effective exact solution approach was initially proposed in the QCR method. It is based on a quadratic convex reformulation, which is optimal in terms of the continuous relaxation bound. One of the motivations of this thesis is to generalize this approach to the case of polynomial programs. In most of this manuscript, we study optimization problems with binary variables. We propose two families of convex reformulations for these problems: "direct" reformulations and quadratic ones. For direct reformulations, we first focus on linearizations. We introduce the concept of q-linearization, that is, a linearization using q additional variables, and we compare the bounds obtained by continuous relaxation for different values of q. Then, we apply convex reformulation to the polynomial problem, adding terms to the objective function but without adding additional variables or constraints. The second family of convex reformulations aims at extending quadratic convex reformulation to the polynomial case. We propose several new alternative reformulations that we compare to existing methods on instances from the literature. In particular, we present the algorithm PQCR for solving unconstrained binary polynomial problems; PQCR is able to solve several previously unsolved instances. In addition to the numerical experiments, we also propose a theoretical study comparing the different quadratic reformulations of the literature and then applying convex reformulation to them. Finally, we consider more general problems, and we propose a method to compute convex relaxations for continuous problems.
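For intuition about linearisation with additional variables, the classical linearisation of a binary monomial (a standard textbook device, not the thesis's q-linearisation or PQCR) replaces y = x1·x2·x3 with linear constraints; a brute-force check over all binary assignments confirms the equivalence.

```python
from itertools import product

def linearisation_feasible(x1, x2, x3, y):
    # Standard linearisation of y = x1 * x2 * x3 over binary variables:
    # y is forced below each factor, and above their sum minus (n - 1).
    return y <= x1 and y <= x2 and y <= x3 and y >= x1 + x2 + x3 - 2

# For every binary assignment, the only feasible y equals the product.
for x1, x2, x3 in product((0, 1), repeat=3):
    feasible_y = [y for y in (0, 1) if linearisation_feasible(x1, x2, x3, y)]
    assert feasible_y == [x1 * x2 * x3]
```

Each monomial of degree n costs one extra variable and n + 1 linear constraints; schemes such as q-linearization trade the number of added variables against the tightness of the continuous relaxation.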
3

Bergounioux, Marie. "Analyse de sensibilité d'un problème paramétré en optimisation : étude globale et locale des variations d'une solution." Lille 1, 1985. http://www.theses.fr/1985LIL10126.

4

Bocquillon, Benoît. "Contributions à l'autocalibrage des caméras : modélisations et solutions garanties par l'analyse d'intervalle." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/600/.

Abstract:
This work deals with computer vision and more precisely camera self-calibration. Self-calibration is a delicate step involved in numerous applications such as three-dimensional reconstruction or metrology. By self-calibration, we mean estimation of the camera model parameters from a sequence of images and without a priori knowledge of the scene. Self-calibration methods have been widely used in recent years since they allow calibration without a calibration target and can handle focal-length variations. In this context, we have studied plane-based self-calibration and 3D self-calibration. Our main contributions concern both the geometric modelling of these problems and their mathematical resolution. First, regarding geometric modelling: in the plane-based case, we reveal an over-parameterization in the formalism generally used, proposed by Triggs in 1998, and propose a minimal parameterization that reduces the number of unknowns and clarifies the problem. In the 3D case, we provide an exhaustive study of the critical motion sequences, i.e. camera motions for which self-calibration is ambiguous, in the specific case where the internal parameters are constant and known except for the focal length. Although Sturm and Kahl have extensively studied the critical motions for constant or variable focal length, this case had not yet been addressed. Secondly, regarding resolution, self-calibration usually leads to an algebraic equation system that is solved by minimizing a cost function. Local minimization methods are generally used; they need a good initial estimate and offer no guarantee on the solution found, since many local minima are present due to the nonlinearity of the problem.
5

Sellami, Mohamed. "Optimisation et aide au choix de solutions globales fondations-superstructure en construction métallique." Chambéry, 1995. http://www.theses.fr/1995CHAMS001.

Abstract:
In this thesis we propose the FOCOME software system for the design of foundations and superstructures in steel construction, together with a methodology for acquiring new knowledge about the optimisation of global foundation-superstructure solutions. The FOCOME system relies on a conceptual model of the information concerning three technical domains: the soil, the foundations and the superstructure. The shared database and the modules for the technical evaluation of the structure and the foundations are implemented in the object-oriented language C++. The system is equipped with a method for optimising the overall material cost of the construction (weight of steel and volume of concrete). This method automatically generates all feasible configurations of a structure by varying certain design variables (support types, joint types, etc.) and establishes a diagnosis for the optimal sizing of these configurations. To acquire new knowledge about the search for globally optimal foundation-superstructure solutions, the FOCOME system can be used as a simulator: simulations of multi-bay portal frames were carried out by varying the parameters characterising the soil and the superstructure, and the results were exploited with a suitable method allowing expert rules to be deduced. This method is based on the principles of multi-criteria analysis; the criteria retained for this study are the quantity of concrete in the foundation blocks and the quantity of steel in the metal superstructure.
6

Bugarin, Florian. "Vision 3D multi-images : contribution à l'obtention de solutions globales par optimisation polynomiale et théorie des moments." PhD thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00768723.

Abstract:
The overall objective of this thesis is to apply a polynomial optimization method, based on the theory of moments, to certain computer vision problems. These problems are generally nonconvex and classically solved using local optimization methods. Such techniques generally do not converge to the global minimum and require an initial estimate close to the exact solution. Global optimization methods avoid these drawbacks, and polynomial optimization based on the theory of moments also has the advantage of taking constraints into account. In this thesis we extend this method to problems of minimizing a sum of many rational functions. Moreover, under certain assumptions of "weak coupling" or "sparsity" of the problem variables, we show that it is possible to handle a large number of variables while maintaining reasonable computation times. Finally, we apply the proposed methods to the following computer vision problems: minimization of the projective distortions induced by the image rectification process, estimation of the fundamental matrix, and multi-view 3D reconstruction with and without radial distortions.
7

Schepler, Xavier. "Solutions globales d'optimisation robuste pour la gestion dynamique de terminaux à conteneurs." Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0005/document.

Abstract:
This thesis deals with the case of a maritime port in which container terminals cooperate to provide better global service. In order to coordinate operations between the terminals, a model and several solving methods are proposed. The objective is to minimize the turnaround times of mother and feeder vessels, barges and trains. A solution to the model provides an assignment of container-transport vehicles, including trucks, to the terminals, as well as an allocation of resources and time intervals to handle them and their containers. To obtain solutions to the model, a mixed-integer programming formulation is provided, as well as several heuristics based on mathematical programming. A rolling-horizon framework is introduced for dynamic management under uncertainty. Numerical experiments conducted on thousands of varied realistic instances indicate the viability of our approach and demonstrate that allowing cooperation between terminals significantly increases the performance of the system.
8

Bugarin, Florian. "Vision 3D multi-images : contribution à l’obtention de solutions globales par optimisation polynomiale et théorie des moments." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0068/document.

Abstract:
The overall objective of this thesis is to apply a polynomial optimization method, based on moments theory, to some vision problems. These problems are often nonconvex and are classically solved using local optimization methods. Without additional hypotheses, these techniques do not converge to the global minimum and need an initial estimate close to the exact solution. Global optimization methods overcome this drawback, and polynomial optimization based on moments theory can also take particular constraints into account. In this thesis, we extend this method to problems of minimizing a sum of many rational functions. In addition, under particular assumptions of "sparsity", we show that it is possible to deal with a large number of variables while maintaining reasonable computation times. Finally, we apply these methods to particular computer vision problems: minimization of projective distortions due to the image rectification process, fundamental matrix estimation, and multi-view 3D reconstruction with and without radial distortions.
APA, Harvard, Vancouver, ISO, and other styles
9

Bulger, David W. "Stochastic global optimisation algorithms." Thesis, 1996. https://figshare.com/articles/thesis/Stochastic_global_optimisation_algorithms/21377646.

Full text
Abstract:

This thesis addresses aspects of stochastic algorithms for the solution of global optimisation problems. The bulk of the research investigates algorithm models of the adaptive search variety. The performance of stochastic and deterministic algorithms is also compared.

Chapter 2 defines pure adaptive search, the prototypical improving region search scheme from the literature. Analyses from the literature of the search duration of pure adaptive search in two specialised situations are recounted and interpreted. In each case pure adaptive search is shown to require an expected search time which varies only linearly with the dimension of the feasible region.

In Chapter 3 a generalisation of pure adaptive search is introduced under the name of hesitant adaptive search. This original algorithm retains the sample point generation mechanism of pure adaptive search, but allows for hesitation, in which an algorithm iteration passes without an improving sample being located. In this way hesitant adaptive search is intended to model more closely practically implementable algorithms. The analysis of the convergence of hesitant adaptive search is more thorough than the analyses already appearing in the literature, as it describes how hesitant adaptive search behaves when given more general objective functions than in previous studies. By specialising to the case of pure adaptive search we obtain a unification of the results appearing in those papers.

Chapter 4 is the most applied of the chapters in this thesis. Here hesitant adaptive search is specialised to describe the convergence behaviour of localisation search schemes. A localisation search scheme produces a bracket of the current improving region at each iteration. The results of Chapter 3 are applied to find necessary and sufficient conditions on the 'tightness' of the brackets to ensure that the dependence of the expected search duration on the dimension of the feasible region is linear, quadratic, cubic, and so forth.

Chapter 5 describes another original generalisation of pure adaptive search, known as fenestral adaptive search. This algorithm generates sample points from a region determined not merely by the previous sample, but by the previous w samples, where w is some prespecified positive integer. The expected search duration of fenestral adaptive search is greater than that of pure adaptive search, but still varies only linearly with the dimension of the feasible region. The sequence of objective function values obtained constitutes an interesting stochastic process, and Chapter 5 is devoted to understanding this process.

Chapter 6 presents a theoretical comparison of the search durations of deterministic and stochastic global optimisation algorithms, together with some discussion of the implications. It is shown that to any stochastic algorithm, there corresponds a deterministic algorithm which requires no more iterations on average, but we discuss why stochastic algorithms may still be more efficient than their deterministic counterparts in practice.

APA, Harvard, Vancouver, ISO, and other styles
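As an illustrative aside, the pure adaptive search scheme described in this thesis abstract can be sketched in a few lines: every accepted point is, in effect, a uniform draw from the current improving region, and an iteration whose sample fails to improve corresponds to the "hesitation" of hesitant adaptive search. The sketch below is not taken from the thesis; it uses simple rejection sampling (tractable only for toy problems) and an invented test function.

```python
import random

def pure_adaptive_search(f, bounds, iterations=200, seed=0):
    """Toy pure adaptive search via rejection sampling: each accepted
    point is effectively uniform over the improving region
    {x : f(x) < best}; rejected iterations model 'hesitation'."""
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    best_x = sample()
    best = f(best_x)
    for _ in range(iterations):
        x = sample()            # uniform over the whole feasible region
        if f(x) < best:         # keep only improving samples
            best_x, best = x, f(x)
    return best_x, best

# Example: minimise the sphere function on [-5, 5]^2
sphere = lambda x: sum(v * v for v in x)
x, fx = pure_adaptive_search(sphere, [(-5, 5)] * 2)
```

Rejection sampling makes the cost per accepted point grow as the improving region shrinks, which is exactly why the linear-in-dimension results above concern the number of *improving* samples, not total work.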

Book chapters on the topic "Global optimisation solution"

1

Shankland, Kenneth. "Structure Solution: Global Optimisation Methods." In NATO Science for Peace and Security Series B: Physics and Biophysics, 117–24. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-5580-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Penev, Kalin. "Precision in High Dimensional Optimisation of Global Tasks with Unknown Solutions." In Large-Scale Scientific Computing, 524–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41032-2_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amine, Khalil. "Insights Into Simulated Annealing." In Advances in Computational Intelligence and Robotics, 121–39. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2857-9.ch007.

Full text
Abstract:
Simulated annealing is a probabilistic local search method for global combinatorial optimisation problems that allows gradual convergence to a near-optimal solution. It consists of a sequence of moves from a current solution to a better one according to certain transition rules, while occasionally accepting uphill solutions in order to guarantee diversity in the exploration of the domain and to avoid becoming trapped in local optima. The process is managed by a static or dynamic cooling schedule that controls the number of iterations. This meta-heuristic offers several advantages, including the ability to escape local optima and the use of only a small amount of short-term memory. A wide range of applications and variants has emerged as a consequence of its adaptability to many combinatorial as well as continuous optimisation problems, and of its guaranteed asymptotic convergence to the global optimum.
APA, Harvard, Vancouver, ISO, and other styles
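The acceptance rule and cooling schedule described in this abstract can be sketched as follows. This is a generic minimal implementation, not code from the chapter; the objective and neighbourhood move are invented for illustration, and a static geometric cooling schedule stands in for the more general schedules discussed.

```python
import math
import random

def simulated_annealing(f, x0, neighbour, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Minimal simulated annealing: always accept downhill moves, accept
    uphill moves with probability exp(-delta / T), cool geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        delta = f(y) - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fx + delta          # occasionally an uphill move
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                         # static geometric cooling schedule
    return best, fbest

# Example: a one-dimensional multimodal objective
f = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, fx = simulated_annealing(f, 4.0, step)
```

Tracking the best-so-far solution separately from the current one is a common practical safeguard, since late uphill acceptances can otherwise discard the best point visited.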
4

Gonzalez-Sanchez, Emilio J., Gottlieb Basch, Julio Roman-Vazquez, Elizabeth Moreno-Blanco, Miguel Angel Repullo-Ruiberriz de Torres, Theodor Friedrich, and Amir Kassam. "Conservation Agriculture in the agri-environmental European context." In Burleigh Dodds Series in Agricultural Science, 149–84. Burleigh Dodds Science Publishing, 2022. http://dx.doi.org/10.19103/as.2021.0088.05.

Full text
Abstract:
Conservation Agriculture is a truly sustainable agricultural system, capable of providing solutions for most, if not all, of the agri-environmental challenges in Europe. Indeed, most of the challenges addressed in the Common Agricultural Policy could be met through Conservation Agriculture, not only the agri-environmental ones but also those concerning the prosperity of farmers and rural communities. The optimisation of inputs and yields similar to those of conventional tillage make Conservation Agriculture a profitable system compared with tillage-based agriculture. Although this sustainable agricultural system was conceived to protect agricultural soils from degradation, the numerous collateral benefits that emanate from soil conservation, i.e. climate change mitigation and adaptation, have made Conservation Agriculture one of the most studied global agrosciences, adopted by an increasing number of farmers worldwide, including in Europe.
APA, Harvard, Vancouver, ISO, and other styles
5

Faulin, Javier, Fernando Lera-López, and Angel A. Juan. "Optimizing Routes with Safety and Environmental Criteria in Transportation Management in Spain." In Management Innovations for Intelligent Supply Chains, 144–65. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2461-0.ch008.

Full text
Abstract:
The objective of logistics management is to optimise the whole value chain of the distribution of goods and merchandise. One of the main aspects of such an analysis is the optimisation of vehicle routes for delivering final products to customers. There are many algorithms for the related vehicle routing problem, whose objective function usually involves distance, cost, number of vehicles, or profit. This study also takes safety and environmental costs into account. Thus, the authors develop variants of traditional heuristic algorithms in which they include the traditional costs along with safety and environmental cost estimates for real scenarios in Spain. This methodology is called ASEC (Algorithms with Safety and Environmental Criteria). These considerations raise the value of the global objective function but permit a more realistic cost estimate that includes not only the internal costs involved in the problem but also the related externalities. Finally, real cases are discussed, and solutions are offered using the new ASEC methodology.
APA, Harvard, Vancouver, ISO, and other styles
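The core idea of the ASEC approach, folding externality estimates into the routing objective, can be illustrated with a toy heuristic. The code below is not the authors' algorithm: the nearest-neighbour construction, the weight values, and the tiny instance are all invented for illustration, but they show how safety and environmental cost terms change which arc looks "cheapest".

```python
def combined_cost(arc, w_safety=0.3, w_env=0.2):
    """Arc = (distance, safety_cost, env_cost); the weights here are
    hypothetical, standing in for real externality estimates."""
    distance, safety, env = arc
    return distance + w_safety * safety + w_env * env

def nearest_neighbour_route(depot, customers, arcs):
    """Greedy route construction on arcs[(i, j)] -> (dist, safety, env),
    always following the cheapest *combined* cost."""
    route, current = [depot], depot
    unvisited = set(customers)
    total = 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: combined_cost(arcs[(current, j)]))
        total += combined_cost(arcs[(current, nxt)])
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += combined_cost(arcs[(current, depot)])   # return to depot
    route.append(depot)
    return route, total

# Tiny symmetric instance: depot 0 and customers 1, 2.
arcs = {
    (0, 1): (10, 2, 1), (1, 0): (10, 2, 1),
    (0, 2): (12, 1, 1), (2, 0): (12, 1, 1),
    (1, 2): (5, 4, 2),  (2, 1): (5, 4, 2),
}
route, total = nearest_neighbour_route(0, [1, 2], arcs)
```

With all externality weights set to zero the same code reduces to plain distance-based nearest neighbour, which is one way to quantify how much the "greener" objective changes the solution.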

Conference papers on the topic "Global optimisation solution"

1

Bodin, Per, Kristian Lindqvist, David Seelbinder, Artemi Makarow, José Garrido, Alessandro Visintini, Marilena Di Carlo, Andrew Hyslop, and Valentin Preda. "Attitude Guidance Using On-Board Optimisation." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-113.

Full text
Abstract:
The on-board ability to autonomously plan and execute constrained attitude manoeuvres is expected to play an important role in many future space missions. The work presented in this paper summarizes the results from a recently completed ESA study in which such functionality was examined. The study included the application of on-board embedded optimisation techniques to solve constrained attitude guidance problems. Different heritage methods not based on on-board optimisation were also developed and applied for comparison. The study demonstrated the capabilities in a number of test cases associated with two benchmark problems based on the Comet Interceptor and Theseus mission studies. The performance was examined within Monte Carlo simulations as well as within execution on a ZedBoard hardware platform. The use of numerical optimisation has accelerated enormously in recent years. Convex optimisation is appealing mainly because a local optimum is also the global optimum and because very efficient general-purpose and highly dedicated solvers exist. In addition, the dependency on initial guesses is completely lifted. Convex optimisation methods are flexible enough to permit the modelling of a huge number of problems of practical interest. A particularly successful problem category, second-order cone programming (SOCP), is a generalization of quadratic programming that allows conic constraints to be embedded in the formulation. This type of formulation has been widely used in many fields, and with particular success in GNC, especially for guidance applications such as powered descent and landing, and atmospheric re-entry. Most notably, this technology has been successfully employed on several rockets and vehicles, including the Falcon 9 and the experimental DLR vehicle EAGLE.
Resulting from a trade-off performed in the study, the selected baseline strategy was to combine second-order cone programming (SOCP) with successive convexification into sequential convex programming (SCVX), inspired by the work of Mao and Bonalli. The SOCP problem class perfectly fits the constraints required to model the benchmark scenarios, and the DLR experience in using similar methods for vertical takeoff and landing (VTVL) vehicles demonstrates that it is a reliable, fast-converging method. A pseudospectral discretization method was selected based on prior experience. The heritage methods applied for the Comet Interceptor benchmark case were based on simplified slew strategies parameterized by a reduced parameter set. The solution is obtained numerically, taking into account the numerical evolution of the nominal angular and rate profiles. An eigenaxis slew algorithm was chosen for Theseus as a result of a trade-off with alternative heritage guidance methods, including artificial potential functions and path planning algorithms; these methods were rejected due to their lack of convergence guarantees and their computational complexity. For the Comet Interceptor benchmark cases, the results from the Monte Carlo simulations demonstrate that, for the more nominal cases, the SCVX and heritage solutions are comparable in terms of minimising the time the target object is outside the field of view of the payload instruments. For the contingency cases, SCVX is clearly better than the heritage solution. For Theseus, the slew times resulting from the SCVX solutions are in general shorter than those obtained from the heritage solution. In addition to the Monte Carlo simulations performed for the benchmark cases, the optimisation algorithms were executed on an ARM-Cortex-based development board (ZedBoard), which is supported by the MATLAB Embedded Coder for C code applications.
As the SOCP solver is written in C++, it was not possible to rely on the native support. The following procedure was used to facilitate runtime tests: the Embedded Coder was used to generate code of the SCVX algorithm for the ARM-Cortex architecture, including the loading and reading process of the transcription data; then the GCC cross-compiler was used to build a standalone executable, which was uploaded to the ZedBoard. Several conclusions can be drawn from the work performed in the study, with the following main areas of particular interest. The performance observed from the SCVX-based guidance is, in most of the test cases, better than that resulting from the heritage solutions. The execution times observed from the tests on flight-like hardware are in the range of 30 to 40 s, which does not allow for fast recomputation of the guidance profiles in connection with critical reconfiguration of hardware or in other cases where the guidance profile is needed quickly; it is, however, expected that there is some room for improvement in execution time. An estimate of development effort shows that the required application-specific, recurring effort is similar for the optimisation-based and the heritage solutions. However, the optimisation-based solution also requires a significant initial, non-recurring effort to develop the necessary numerical optimisation software, estimated at about five times the effort for a single mission application, not counting the development of the core convex solver. The observations and conclusions summarized above indicate that an SCVX-based attitude guidance solution is not a “magic” universal tool that will seamlessly solve any problem. Significant effort is needed to arrive at a well-posed, well-tuned problem that allows the optimisation-based framework to provide a solution.
However, with such a problem at hand, the framework provides a versatile solution that better utilizes the on-board resources and delivers better performance than the heritage approach.
APA, Harvard, Vancouver, ISO, and other styles
2

Abam, Joshua T., Yongchang Pu, and Zhiqiang Hu. "Cost-Effective Optimal Solutions for Steel Catenary Riser Using Artificial Neural Network." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-79044.

Full text
Abstract:
Abstract Over recent decades, the free-hanging steel catenary riser (SCR) has been regarded as a cost-effective and straightforward riser solution in deepwater and ultra-deepwater by national oil companies (NOCs), international oil companies (IOCs), and independent players. However, the free-hanging SCR comes with its own challenges, which, if not correctly evaluated during the design stages, can result in system failures. The two main challenges facing free-hanging SCR usage are increased self-weight and high fatigue damage, especially at the touchdown and hang-off points. Fatigue damage is the failure resulting from cyclic stress accumulation over a specific duration. Technology has improved the design method for these systems over the years; two such innovative contributions are optimisation algorithms and artificial neural network (ANN) methods. The aim of this research is to develop an effective optimisation algorithm to search for the globally optimal weight of the riser configuration while maintaining its structural integrity. The method involves the combined use of an ANN and a genetic algorithm (GA) to determine the optimum SCR solution. The GA is used because of its capability in handling a variety of complex nonlinear optimisation problems to ascertain the global optimum solution and its capacity to self-moderate the number of iterations, while the ANN is deployed for its accurate prediction of nonlinear responses. The deployed techniques have shown promise owing to the time savings and lower computational cost compared with integrating a GA and time-domain FE models. The method is illustrated using a prospective 10-inch internal-diameter SCR installed in a 2000 m deepwater offshore field off the oil-rich Niger Delta region, Nigeria. The results show a 19.10 per cent reduction in riser weight and a time reduction of 90.83 per cent.
APA, Harvard, Vancouver, ISO, and other styles
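The GA-plus-surrogate idea in the abstract above can be sketched generically. The code below is not the authors' implementation: it is a minimal real-coded GA, and a simple analytic function stands in for the trained ANN surrogate that, in the paper, replaces costly finite-element response evaluations; population sizes, rates, and the objective are all invented for illustration.

```python
import random

def genetic_minimise(f, bounds, pop_size=30, generations=60, seed=1):
    """Small real-coded GA: elitism, tournament selection, blend
    crossover, Gaussian mutation. `f` stands in for a cheap surrogate
    of an expensive simulation response."""
    rng = random.Random(seed)
    dim = len(bounds)
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f)
        children = list(scored[:2])                 # elitism: keep the two best
        while len(children) < pop_size:
            # two tournament selections of size 3
            a, b = (min(rng.sample(scored, 3), key=f) for _ in range(2))
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]   # blend crossover
            if rng.random() < 0.3:                                # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1 * (bounds[i][1] - bounds[i][0]))
            children.append(clip(child))
        pop = children
    return min(pop, key=f)

# Stand-in "surrogate": a weight-like quadratic with an offset optimum
surrogate = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
best = genetic_minimise(surrogate, [(0, 1), (0, 1)])
```

Because the surrogate is cheap to evaluate, the GA can afford thousands of evaluations, which is precisely the time saving the paper reports over coupling the GA directly to time-domain FE models.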
3

Abam, Joshua T., Yongchang Pu, and Zhiqiang Hu. "Weight Optimisation of Steel Catenary Riser Using Genetic Algorithm and Finite Element Analysis." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-79084.

Full text
Abstract:
Abstract Steel catenary risers (SCRs) are a good candidate for deepwater exploration due to their relatively low cost, lower demand for subsea intervention, and compliance with the motions of floating structures. However, SCR deployment in deepwater faces tremendous challenges, such as increased top tension and concern over fatigue performance. This paper aims to search for the globally optimal solution of a riser system through the application of a genetic algorithm (GA) and finite element analysis. The method will, in turn, automate the design process, with the implementation of an optimisation routine as a critical resource for initiating and finalising the acceptability of a riser system. The GA is deployed as the optimiser because of its capability in handling a variety of complex nonlinear optimisation problems to ascertain the global optimum solution and its capacity to self-moderate the number of iterations. The total structural weight is the objective function, while the bursting, buckling (collapse), buckling propagation, yielding, and fatigue limit state (FLS) criteria are nonlinear constraints. To accurately assess the strength requirements, a time-domain approach through OrcaFlex is used to predict the dynamic responses and fatigue life/damage of the SCR. An interface between the GA in MATLAB and OrcaFlex has been programmed to exchange data. The wall thickness, length, and declination angle at the hang-off point are chosen as design variables due to their strong influence on the configuration and dynamic responses of an SCR. The method is illustrated using a prospective 10-inch SCR installed in a 2000 m deepwater offshore field off the coast of the oil-rich Niger Delta region, Nigeria. The results show a significant reduction in riser weight.
APA, Harvard, Vancouver, ISO, and other styles
4

Sethson, Magnus. "Extremal Optimisation Approach to Component Placement in Blood Analysis Equipment." In SICFP’21 The 17:th Scandinavian International Conference on Fluid Power. Linköping University Electronic Press, 2021. http://dx.doi.org/10.3384/ecp182p332.

Full text
Abstract:
This report presents an initial study on the generative mechatronic design of equipment for blood analysis, in which samples and chemicals are forwarded through thin, single-millimetre vessels. The system of vessels in the equipment transfers the fluids to different stations where chemical reactions and studies are performed. One of the stations is an optical inspection that requires controllable lighting conditions using an array of LEDs of different types. The focus is on the generative design of the placement and configuration of the LEDs, which has been taken as a case study for the Extremal Optimisation (EO) approach to mechatronic design. This method forms an opposing strategy to methods such as genetic algorithms and simulated annealing, because it discriminates against the individual parts or components of the configuration that underperform in a particular aspect, instead of the more classical strategy of favouring good configurations based on global measures. The presented study also relates to the class of many-objective optimisation methods (MaOP) and originates from the concept of self-organised criticality (SOC); the characteristic avalanche barrier crossings in the parameter search space are inherited from such systems. The test case used for the evaluation places occupying circles, representing LEDs, onto a quarter-ring domain representing the circuit board; the fluid vessels are represented by lit-up small domains that are also approximated by circular discs. Some conclusions are drawn on the method's capability to form a valid solution, and a framework for describing a set of local flaw-improvement rules, called D2FI, is introduced.
APA, Harvard, Vancouver, ISO, and other styles
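The component-level selection that distinguishes extremal optimisation from GA and SA can be sketched on a toy problem. This is not the paper's LED-placement code: the targets, bounds, and tau value are invented, but the sketch shows the characteristic tau-EO move of picking a poorly ranked component by a power law over fitness ranks and replacing it unconditionally, rather than scoring whole configurations.

```python
import random

def extremal_optimisation(targets, lo=0.0, hi=1.0, tau=1.4, steps=3000, seed=0):
    """Toy tau-EO: each component has its own local fitness (negative
    squared error to a target); at every step a weak component is chosen
    by rank ~ k**(-tau) and redrawn at random."""
    rng = random.Random(seed)
    n = len(targets)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    weights = [(k + 1) ** (-tau) for k in range(n)]   # rank 1 = worst component
    best, best_err = x[:], sum((v - t) ** 2 for v, t in zip(x, targets))
    for _ in range(steps):
        # local fitness is -(error)^2, so sorting ascending puts the worst first
        ranked = sorted(range(n), key=lambda i: -(x[i] - targets[i]) ** 2)
        i = rng.choices(ranked, weights=weights)[0]
        x[i] = rng.uniform(lo, hi)                    # unconditional replacement
        err = sum((v - t) ** 2 for v, t in zip(x, targets))
        if err < best_err:                            # track best configuration seen
            best, best_err = x[:], err
    return best, best_err

best, err = extremal_optimisation([0.2, 0.5, 0.8])
```

Because well-performing components are rarely selected, good partial structure persists, which is the self-organised-criticality flavour the abstract refers to; the unconditional replacement is what produces the avalanche-like excursions in the search.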
5

Vollmer, Michael, Camille Pedretti, Alexander Ni, and Manfred Wirsum. "Advanced Bottoming Cycle Optimisation for Large Alstom CCPP." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-27578.

Full text
Abstract:
This paper presents the fundamentals of an evolutionary, thermo-economic plant design methodology, which enables an improved and customer-focused optimization of the bottoming cycle of a large Combined Cycle Power Plant. The new methodology focuses on the conceptual design of the CCPP applicable to the product development and the pre-acquisition phase. After the definition of the overall plant configuration, such as the number of gas turbines used, the type of main cooling system and the related fixed investment cost, the CCPP is optimized towards any criterion available in the process model (e.g. lowest COE, maximum NPV/IRR, highest net efficiency). In view of the fact that the optimization is performed on a global plant level with a simultaneous hot- and cold-end optimization, the results clearly show the dependency of the HRSG steam parameters and the related steam turbine configuration on the definition of the cold end (Air Cooled Condenser instead of Direct Cooling). Furthermore, competing methods for feedwater preheating (HRSG recirculation, condensate preheating or pegging steam), different HRSG heat exchanger arrangements as well as applicable portfolio components are automatically evaluated and finally selected. The developed process model is based on a fixed superstructure and copes with the full complexity of today’s bottoming cycle configurations as well as with any constraints and design rules existing in practice. It includes a variety of component modules that are prescribed with their performance characteristics, design limitations and individual cost. More than 100 parameters are used to directly calculate the overall plant performance and related investment cost. Further definitions of payment schedule, construction time, operation regime and consumable cost result in a full economic life cycle calculation of the CCPP.
For the overall optimization the process model is coupled to an evolutionary optimizer, with around 60 design parameters used within predefined bounds. Within a single optimization run, more than 100,000 bottoming cycle configurations are calculated in order to find the targeted optimum, and thanks to today’s massively parallel computing resources, the solution can be found overnight. Due to the direct formulation of the process model, the best cycle configuration is a result provided by the optimizer and can be based on a single-, dual- or triple-pressure system using a non-reheat, reheat or double-reheat configuration. This methodology also makes it possible to analyze the existing limitations and characteristics of the key components in the process model, and helps initiate new developments in order to constantly increase the value for power plant customers.
APA, Harvard, Vancouver, ISO, and other styles
6

Lourenço, Pedro, Hugo Costa, João Branco, Pierre-Loïc Garoche, Arash Sadeghzadeh, Jonathan Frey, Gianluca Frison, Anthea Comellini, Massimo Barbero, and Valentin Preda. "Verification & validation of optimisation-based control systems: methods and outcomes of VV4RTOS." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-155.

Full text
Abstract:
VV4RTOS is an activity supported by the European Space Agency aimed at the development and validation of a framework for the verification and validation of spacecraft guidance, navigation, and control (GNC) systems based on embedded optimisation, tailored to handle different layers of abstraction, from guidance and control (G&C) requirements down to hardware level. This is grounded in the parallel design and development of real-time optimisation-based G&C software, making it possible to concurrently identify, develop, consolidate, and validate a set of engineering practices and analysis & verification tools that ensure safe code execution of the designed G&C software, used as test cases while aiming to streamline general industrial V&V processes. This paper presents: 1) a review of the challenges and the state of the art of formal verification methods applicable to optimization-based software; 2) the implementation for an embedded application, and the analysis from a V&V standpoint, of a conic optimization solver; 3) the technical approach devised towards an enhanced V&V process; and 4) experimental results up to processor-in-the-loop tests and conclusions.
In general, this activity aims to contribute to the widespread usage of convex optimisation-based techniques across the space industry by 1) augmenting the traditional GNC software Design & Development Verification & Validation (DDVV) methodologies to explicitly address iterative embedded optimisation algorithms that are paramount for the success of new and extremely relevant space applications (from powered landing to active debris removal, from actuator allocation to attitude guidance & control) guaranteeing safe, reliable, repeatable, and accurate execution of the SW; and 2) consolidating the necessary tools for the fast prototyping and qualification of G&C software, grounded on strong theoretical foundations for the solution of convex optimisation problems generated by posing, discretization, convexification, and transcription of nonlinear nonconvex optimal control problems to online-solvable optimisation problems. Sound guidelines are provided for the high-to-low level translation of mission requirements and objectives aiming at their interfacing with verifiable embedded solvers tailored for the underlying hardware and exploiting the structure present in the common optimisation/optimal control problems. To fulfil this mandate, two avenues of research and development were followed: the development of a benchmarking framework with optimisation-based G&C and the improvement of the V&V process – two radical advances with respect to traditional GNC DDVV. On the first topic, the new optimisation-based hierarchy was exploited, from high-level requirements and objectives that can be mathematically posed as optimal control problems, themselves organised in different levels of abstraction, complexity, and time-criticality depending on how close to the actuator level they are. 
The main line of this work is then focused on the core component of optimisation-based G&C – the optimisation solver – starting with a formal analysis of its mathematical properties that allowed meaningful requirements for V&V to be identified, and, concurrently, with a thorough, step-by-step design and implementation for embedding in a space target board. This application-agnostic analysis and development was associated with the DDVV of specific use cases of optimisation-based G&C for common space applications of growing complexity, exploring different challenges in the form of convex problem complexity (up to second-order cone programs), problem size (model predictive control and trajectory optimization), and nonlinearity (both translation and attitude control problems). The novel V&V approach relies on the combination and exploitation of the two main approaches: classical testing of the global on-board software, and local and compositional, formal, math-driven verification. While the former sees systems as black boxes, feeding them with comprehensive inputs and analysing the outputs statistically, the latter delves deep into the sub-components of the software, effectively seeing them as white boxes whenever mathematically possible. In between the two approaches lies the optimal path to a thorough, dependable, mathematically sound verification and validation process: local, potentially application-agnostic validation of the building blocks with respect to mathematical specifications, leading up to application-specific testing of global complex systems, this time informed by the results of local validation and testing. The deep analysis of the mathematical properties of the optimisation algorithm allows requirements of increasing complexity to be derived (e.g., from “the code implements the proper computations” to higher-level mathematical properties such as optimality, convergence, and feasibility).
These are related to quantities of interest that can be verified using e-ACSL specifications and Frama-C in a C-code implementation of the solver, and also observed in online monitors in Simulink or in post-processing during the model/software-in-the-loop testing. Finally, the activity applies the devised V&V process to the benchmark designs, from model-in-the-loop Monte Carlo testing, followed by autocoding and software-in-the-loop equivalence testing in parallel with the Frama-C runtime analysis, and concluded by processor-in-the-loop testing in a Hyperion on-board computer based around a Xilinx Zynq 7000 SoC.
APA, Harvard, Vancouver, ISO, and other styles
7

Carydias, Peter, and Preben Nielsen. "Towards 2030: Transforming Asset Emissions in Upstream Production Operations." In ADIPEC. SPE, 2023. http://dx.doi.org/10.2118/216006-ms.

Full text
Abstract:
Abstract Production operators urgently need to devise credible decarbonisation strategies for their assets. This is driven by regulatory shifts and commercial goals underscored by several global initiatives, including the SEC's mandate for emission disclosures, the GCC's headline greenhouse gas emission reduction targets, the IEA's net zero emissions framework, and the EU's 2030 plan. There are three main levers for integrated gas operators: optimising existing operations to improve efficiency; designing sustainable developments and modifications; and rebalancing portfolios towards assets with lower carbon intensity. Large capital projects, such as carbon capture and electrification, are typically proposed as responses to the second. However, assets already in operation often offer quicker returns, with scope to reduce ~10-20% of emissions, and need viable roadmaps. Through our work in the integrated oil & gas industry over the past two years, we found that the majority of CO2e operational emissions reduction opportunities can be addressed by discrete improvement programmes on rotating reliability, process optimisation and methane leak reduction. In this paper, we discuss how asset operators can drive their operations, energy transition and digital functions to realise significant CO2e benefits within months by: developing a value-driven decarbonisation strategy for operations, identifying benefits in production, cost, and emissions, front-loaded with high-impact, rapid 20-30% reduction initiatives that can be realised in the next 1-2 years; implementing effective and auditable real-time monitoring of plant emissions directly from existing control systems, sensors, and imaging techniques; and driving a programme of tech-enabled optimisation levers and bringing this into new designs for CCUS, hydrogen and electrification projects for maximum benefit.
We are having success collaborating with our clients' energy transition and digital functions to solve this problem together. In this paper, we introduce: a managed approach to deriving areas of focus (SCORE: Substitute, Capture, Offset, Reduce and Evaluate); a framework of methodologies, tools, and workshops to identify value drivers, map client pain points, identify a long list of opportunities, and then focus on the top few to build into opportunity cases; and value driver trees and associated solutions to drive down emissions in these areas of focus. We present three case studies that showcase the power of a value-driven emissions reduction strategy, implementing effective monitoring and deriving a programme of tech-enabled optimisation levers: a templatised oil & gas turbine efficiency uplift solution to tackle the gas-driven generation lever for a major upstream producer; deep reinforcement learning (AI) to optimise the power consumption of compressors while meeting nominations and emissions constraints for a major gas transmission operator; and a templatised venting emissions solution for a major onshore unconventional gas producer.
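The real-time emissions monitoring described in this abstract starts from quantities a plant already meters, such as fuel-gas flow and vent rates. A minimal illustration of rolling such rates up into a CO2e figure is sketched below; the emission factor and the methane GWP value are generic assumptions for the sketch, not values from the paper.

```python
# Illustrative CO2e roll-up from metered fuel-gas and vent rates.
# Both factors below are generic assumptions, not values from the paper.
CO2_PER_KG_FUEL_GAS = 2.75   # kg CO2 per kg fuel burned (~pure methane, 44/16)
GWP_METHANE = 28.0           # kg CO2e per kg CH4 (illustrative 100-year GWP)

def hourly_co2e(fuel_gas_kg_per_h, vented_ch4_kg_per_h):
    """CO2e rate [kg/h] from fuel combustion plus methane venting."""
    combustion = fuel_gas_kg_per_h * CO2_PER_KG_FUEL_GAS
    venting = vented_ch4_kg_per_h * GWP_METHANE
    return combustion + venting

# e.g. a gas turbine burning 2,000 kg/h with 5 kg/h of vented methane
rate = hourly_co2e(2000.0, 5.0)   # kg CO2e per hour
```

Computing such a rate continuously from control-system tags, rather than from annual mass balances, is what makes the monitoring auditable and actionable at operational timescales.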
APA, Harvard, Vancouver, ISO, and other styles
8

Imamovic, N., and D. J. Ewins. "Automatic Selection of Parameters for Updating Procedures." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/vib-4153.

Full text
Abstract:
This paper describes a new model updating approach based on the standard sensitivity method combined with a new method for automatic selection of updating parameters. It is no longer necessary to select specific updating parameters and to proceed with a global optimisation using these selected parameters. The new method for automatic selection allows the analyst to select any number of updating parameters. The method is based on an assessment of the contribution of each updating parameter to the rank of the sensitivity matrix, and the final selection of updating parameters is performed iteratively by removing the updating parameters which contribute least, for as long as the rank of the sensitivity matrix lies outside a specified value. This selection procedure is performed at every iteration during updating, which ensures that the sensitivity matrix is well conditioned throughout the whole updating process. The technique can be applied to both the modal- and response-based updating approaches. The application of the method is demonstrated on a number of real engineering structures rather than using simulated experimental data.
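The selection procedure described in this abstract (iteratively removing the updating parameters that contribute least until the sensitivity matrix is well conditioned) can be sketched as follows. The condition-number threshold and the least-squares residual scoring rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def select_updating_parameters(S, cond_limit=1e6):
    """Iteratively drop the columns (updating parameters) of the
    sensitivity matrix S that contribute least, until S is well
    conditioned. Returns the indices of the retained parameters.
    The scoring rule (norm of the column component not explained by
    the other columns) is an illustrative choice."""
    keep = list(range(S.shape[1]))
    while len(keep) > 1:
        Sk = S[:, keep]
        sv = np.linalg.svd(Sk, compute_uv=False)
        if sv[-1] > 0 and sv[0] / sv[-1] < cond_limit:
            break  # matrix is well conditioned; stop removing
        # Score each parameter by how much of its column cannot be
        # reproduced by the remaining columns (least-squares residual).
        scores = []
        for j in range(len(keep)):
            others = np.delete(Sk, j, axis=1)
            coef, *_ = np.linalg.lstsq(others, Sk[:, j], rcond=None)
            scores.append(np.linalg.norm(Sk[:, j] - others @ coef))
        keep.pop(int(np.argmin(scores)))  # remove the weakest contributor
    return keep
```

Running the check at every updating iteration, as the paper proposes, keeps the parameter set adapted to the current linearisation rather than fixed at the start.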
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Tao, Dan Lee, Kasra Farahani, and Philip Cooper. "Advanced Numerical Simulation to Meet Design Challenges of XHPHT Metallurgically Clad PIP Platform Riser." In ASME 2011 30th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2011. http://dx.doi.org/10.1115/omae2011-49679.

Full text
Abstract:
Pipe-in-pipe (PIP) systems are proposed for platform risers subjected to extra high pressure high temperature (XHPHT) shut-in conditions, to meet the flow assurance and stringent strength and thermal criteria, and to mitigate design issues associated with wet insulation application. To further satisfy the corrosive fluid environments, the inner pipe of the PIP system is metallurgically clad with a Corrosion Resistant Alloy (CRA). These complex design challenges require advanced numerical simulation to correctly capture the complex PIP behaviour and clad-pipe effects in order to avoid overly conservative design, and to provide a robust and optimised solution. The equivalence of CRA clad pipe was investigated and analytically deduced, especially the thermal expansion behaviour under the XHPHT environments. An advanced numerical simulation based on Finite Element Analysis (FEA) was subsequently carried out. A systematic family of FE models was developed to meet the design complexity, namely: a global PIP platform riser model to capture the global behaviour, a local PIP centraliser model to address contact behaviour, a local bulkhead design model for PIP bulkhead design and optimisation, and a local girth weld model to address mismatches (high-low misalignment, thickness and material strength). In addition, a modal analysis was conducted based on a PIP model to ensure that the analysis accounts for centralisers, pre-stress and deformation effects. The eigenvalue computation is then used for free-span analysis. Due to the lack of limit-state design codes for pipe bends, and the fact that the allowable stress criteria can be overly conservative, a bend collapse capacity deduced from FEA was applied in accordance with DNV local buckling criteria. The analysis procedures developed are outlined and a XHPHT PIP platform riser design is presented. This paper aims to provide a robust solution to aid design by the application of advanced numerical simulation.
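The free-span screening step mentioned in this abstract rests on comparing computed natural frequencies of pipe spans against excitation frequencies. A minimal sketch using the textbook first-mode frequency of a simply supported span is shown below; the pipe properties are illustrative assumptions, not data from the paper, and the full FE model accounts for effects (centralisers, pre-stress, deformation) that this closed form does not.

```python
import math

def free_span_frequency(E, I, m, L, n=1):
    """Natural frequency [Hz] of mode n of a simply supported span:
    f_n = (n^2 * pi / (2 L^2)) * sqrt(E I / m).
    E: Young's modulus [Pa], I: second moment of area [m^4],
    m: effective mass per unit length [kg/m], L: span length [m]."""
    return (n**2 * math.pi / (2.0 * L**2)) * math.sqrt(E * I / m)

def pipe_I(D, t):
    """Second moment of area of a circular pipe cross-section [m^4]."""
    d = D - 2.0 * t
    return math.pi * (D**4 - d**4) / 64.0

# Illustrative properties (not from the paper): 273 mm OD, 20 mm wall steel pipe
E = 207e9                    # Young's modulus of steel [Pa]
I = pipe_I(0.273, 0.020)
m = 160.0                    # assumed effective mass incl. contents and added mass [kg/m]
f1 = free_span_frequency(E, I, m, L=30.0)
```

In practice the FE-derived eigenvalues replace this closed form, but the comparison logic (span frequency versus vortex-shedding or wave frequencies) is the same.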
APA, Harvard, Vancouver, ISO, and other styles
10

ter Brake, Erik, Andy Clegg, and Frederic Perdrix. "Development of Turbine Access System." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10301.

Full text
Abstract:
The offshore wind industry is growing rapidly and the number of personnel transfers between turbines and work boats is increasing. As this is considered an operation that carries a significant risk, a number of improvements have been developed in the industry. In order to provide a cost-effective solution optimised for the safety and convenience of operations, a Turbine Access System (TAS) was developed. The functionality of TAS is based on the principle that the motions of the vessel are actively compensated in heave, roll and pitch in order to provide a stable platform for personnel transfer. This paper presents the technical design of TAS, including the development of the global design, the hydraulic system and the control systems. This covers deriving the general arrangement of the unit, obtaining the required inverse motions of the gangway, the means of hydraulic cylinder activation and developing the control system. The control system comprises cascade control with feed-forward and nonlinear compensation in order to minimise trajectory tracking errors. The methodology to achieve high degrees of accuracy and smooth operations is discussed alongside the operational logic, monitoring, fault actions and safety features, which are accessed through two touch-panel computers. The first TAS came into operation during the summer of 2012, mounted on a 24m catamaran. The sea trials, offshore commissioning and control system optimisation provided extensive amounts of data on the performance of TAS in waves. The results of the performance in waves have been investigated and documented in order to validate the claims of the functionality of TAS. Further optimisation of the motion compensation principle is discussed, where the main focus lies on the increase of the operability range beyond the current standards.
By changing the bow fenders of the work boat to rollers that provide hydraulic energy dissipation, the additional imposed pitch damping is expected to reduce the boat motions and increase the TAS operability. The paper discusses the methodology and logic behind the active pitch damping devices and their effect on the operation of TAS, and closes with the main technical challenges and solutions to achieve a safe, user-friendly and comfortable transfer system for the offshore industry.
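The control structure described in this abstract (cascade control with feed-forward to minimise trajectory tracking errors) can be sketched for a single compensated axis as follows. The PI form, the gains, and the double-integrator plant are illustrative assumptions, not the TAS implementation.

```python
class PI:
    """Simple proportional-integral controller (illustrative)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

class CascadeWithFeedforward:
    """Outer position loop commands the inner velocity loop; the
    reference velocity is fed forward so the feedback loops only
    have to correct errors, not generate the whole trajectory."""
    def __init__(self, dt=0.01):
        self.outer = PI(kp=4.0, ki=0.5, dt=dt)   # illustrative gains
        self.inner = PI(kp=8.0, ki=1.0, dt=dt)

    def update(self, pos_ref, vel_ref, pos_meas, vel_meas):
        vel_cmd = vel_ref + self.outer.update(pos_ref - pos_meas)  # feed-forward
        return self.inner.update(vel_cmd - vel_meas)               # actuator demand

# Track a ramp trajectory on a double-integrator plant (illustrative)
dt, pos, vel = 0.01, 0.0, 0.0
ctrl = CascadeWithFeedforward(dt)
for k in range(2000):
    t = k * dt
    accel = ctrl.update(pos_ref=0.2 * t, vel_ref=0.2, pos_meas=pos, vel_meas=vel)
    vel += accel * dt
    pos += vel * dt
```

Because the reference velocity enters directly, the tracking error the loops must remove stays small even for fast trajectories, which is the stated motivation for this structure in the abstract.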
APA, Harvard, Vancouver, ISO, and other styles