Doctoral dissertations on the topic "Numerical optimisation"

Consult the top 50 doctoral dissertations on the topic "Numerical optimisation".


1

Routley, Paul Richard. "BiCMOS circuit optimisation". Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242271.

Full text source
APA, Harvard, Vancouver, ISO and other styles
2

Penev, Kalin. "Adaptive search heuristics applied to numerical optimisation". Thesis, Southampton Solent University, 2004. http://ssudl.solent.ac.uk/598/.

Full text source
Abstract:
The objective of the research project involves investigation of evolutionary computational methods, in particular analysis of population-based search heuristics, and abstraction of core cognition, which may lead to the design of a novel adaptive search algorithm capable of high performance and reliability. The thesis proposes a novel adaptive heuristic method called Free Search (FS). Free Search can be classified as a population-based evolutionary computational method. It gradually changes a set of solutions until certain criteria are satisfied. The algorithm operates with real-valued numbers. It is designed for continuous or partially discontinuous search spaces. Free Search harmonises several advanced ideas, which lead to high overall performance. The study includes exploration of selected population-based evolutionary methods, namely: the real-valued coded Genetic Algorithm BLX-a, Particle Swarm Optimisation (PSO), Ant Colony Optimisation (ACO) and Differential Evolution (DE). Features, relationships and events that are common and substantial for search purposes are abstracted from the algorithms analysed. The abstracted events are generalised in a theoretical model of population-based heuristic search. The model significantly supports the identification of essential advantages and disadvantages of population-based search algorithms and leads to the establishment of a novel concept different from other evolutionary algorithms. Free Search, together with GA BLX-a, PSO and DE, is applied to heterogeneous, numerical, non-linear, non-discrete optimisation problems. The results are presented and discussed. A comparative analysis demonstrates better overall performance of FS than the other explored methods. The capability of FS to cope with all tests illustrates a new quality - adaptation to the problem without concrete or specialised configuration of the search parameters. Free Search is additionally tested on a hard, non-linear, constrained optimisation problem - the so-called bump problem. FS outperforms other methods applied to that problem. Results achieved by Free Search, currently out of reach of other search algorithms, are presented. Free Search opens a new area for research in the domain of adaptive intelligent systems. It can also contribute, more generally, to Computer Science in the modelling of uncertain individual behaviour.
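The methods compared in this abstract all share the same population-based skeleton. The following is a minimal illustrative sketch of such a loop (an assumption for illustration only, not the author's Free Search algorithm; the Gaussian perturbation and greedy replacement rule are generic stand-ins for each method's own variation and selection operators):

```python
import numpy as np

def population_search(objective, bounds, pop_size=30, generations=200, step=0.1, rng=None):
    """Generic population-based search sketch: sample, evaluate, keep the better point.
    Free Search, GAs, PSO, ACO and DE each define their own variation and selection
    rules on top of this basic structure."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T            # bounds given as [(lo, hi), ...]
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))   # initial population
    fitness = np.apply_along_axis(objective, 1, pop)

    for _ in range(generations):
        # propose a perturbed candidate for every individual
        trial = np.clip(pop + rng.normal(scale=step * (hi - lo), size=pop.shape), lo, hi)
        trial_fit = np.apply_along_axis(objective, 1, trial)
        improved = trial_fit < fitness                    # greedy replacement
        pop[improved], fitness[improved] = trial[improved], trial_fit[improved]

    best = np.argmin(fitness)
    return pop[best], fitness[best]

# usage: minimise the sphere function in 5 dimensions
if __name__ == "__main__":
    x, f = population_search(lambda x: float(np.sum(x**2)), bounds=[(-5.0, 5.0)] * 5)
    print(x, f)
```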
APA, Harvard, Vancouver, ISO and other styles
3

Tian, Na. "Novel optimisation methods for numerical inverse problems". Thesis, University of Greenwich, 2011. http://gala.gre.ac.uk/9099/.

Full text source
Abstract:
Inverse problems involve the determination of one or more unknown quantities usually appearing in the mathematical formulation of a physical problem. These unknown quantities may be boundary heat flux, various source terms, thermal and material properties, boundary shape and size. Solving inverse problems requires additional information through in-situ data measurements of the field variables of the physical problems. These problems are also ill-posed because the solution itself is sensitive to random errors in the measured input data. Regularisation techniques are usually used in order to deal with the instability of the solution. In the past decades, many methods based on the nonlinear least squares model, both deterministic (CGM) and stochastic (GA, PSO), have been investigated for numerical inverse problems. The goal of this thesis is to examine and explore new techniques for numerical inverse problems. The background theory of the population-based heuristic algorithm known as quantum-behaved particle swarm optimisation (QPSO) is revisited and examined. To enhance the global search ability of QPSO for complex multi-modal problems, several modifications to QPSO are proposed. These include a perturbation operation, Gaussian mutation and a ring topology model. Several parameter selection methods for these algorithms are proposed. Benchmark functions were used to test the performance of the modified algorithms. To address the high computational cost of complex engineering optimisation problems, two parallel models of the QPSO (master-slave, static subpopulation) were developed for different distributed systems. A hybrid method, which makes use of deterministic (CGM) and stochastic (QPSO) methods, is proposed to improve the estimated solution and the performance of the algorithms for solving the inverse problems. Finally, the proposed methods are used to solve typical problems that appear in many research papers. The numerical results demonstrate the feasibility and efficiency of QPSO and the global search ability and stability of the modified versions of QPSO. Two novel methods of providing an initial guess to the CGM with approximated data from QPSO are also proposed for use in the hybrid method and were applied to estimate heat fluxes and boundary shapes. The simultaneous estimation of temperature-dependent thermal conductivity and heat capacity was addressed by using QPSO with Gaussian mutation. This combination provides a stable algorithm even with noisy measurements. A comparison of the performance of different methods for solving inverse problems is also presented in this thesis.
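For reference, the core of standard QPSO (before the modifications listed above, such as Gaussian mutation or the ring topology) is the mean-best position update. A minimal sketch of one iteration, written as a hedged illustration rather than the thesis code, is:

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta, rng):
    """One position update of standard QPSO (illustrative sketch).
    x      : (N, D) current particle positions
    pbest  : (N, D) personal best positions
    gbest  : (D,)   global best position
    beta   : contraction-expansion coefficient (commonly annealed from ~1.0 to ~0.5)
    """
    N, D = x.shape
    phi = rng.uniform(size=(N, D))
    p = phi * pbest + (1.0 - phi) * gbest            # local attractor per particle
    mbest = pbest.mean(axis=0)                       # mean of the personal bests
    u = 1.0 - rng.uniform(size=(N, D))               # sample in (0, 1] to keep log finite
    sign = np.where(rng.uniform(size=(N, D)) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```

The modified variants described in the abstract would add their perturbation, mutation or neighbourhood-topology rules around this update.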
APA, Harvard, Vancouver, ISO and other styles
4

Yang, Yong. "Efficient parallel genetic algorithms applied to numerical optimisation". Thesis, Southampton Solent University, 2008. http://ssudl.solent.ac.uk/631/.

Full text source
Abstract:
This research is concerned with the optimisation of multi-modal numerical problems using genetic algorithms (GAs). GAs use randomised operators operating over a population of candidate solutions to generate new points in the search space. As the scale and complexity of target applications increase, run time becomes a major inhibitor. Parallel genetic algorithms (PGAs) have therefore become an important area of research. Coarse-grained implementations are one of the most popular models and many researchers are concerned primarily with this area. The island model was the only class of parallel genetic algorithm on the coarse-grained processing platform. There are indiscriminate overlaps between sub-populations in the island model even if there is no communication between sub-populations. In order to determine whether the removal of the overlaps between sub-populations is beneficial, the partition model, based on domain decomposition, was developed and shown to offer superior performance on a number of two-dimensional test problems. However, the partition model has a certain scalability problem. The main contribution of this thesis is to propose and develop an alternative approach, which replicates the beneficial behaviour of the partition model using more scalable techniques. It operates in a similar manner to the island model, but periodically performs a clustering analysis on each sub-population. The clustering analysis is used to identify regions of the search space in which more than one sub-population is sampling. The overlapping clusters are then merged and redistributed amongst sub-populations so that only one sub-population has samples in that region of the search space. It is shown that these overlaps between sub-populations are identified and removed by the clustering analysis without a priori domain knowledge. The performance of the new approach employing the clustering analysis is then compared with the island model and the partition model on a number of non-trivial problems. These experiments show that the new approach is robust, efficient and effective, and removes the scalability problem that prevents the wide application of the partition model.
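A minimal island-model skeleton of the kind this abstract builds on is sketched below; the selection and mutation operators are generic stand-ins, and the clustering analysis that merges overlapping sub-populations is left as a placeholder because the thesis's actual merge-and-redistribute procedure is not reproduced here:

```python
import numpy as np

def island_model_ga(objective, bounds, n_islands=4, island_size=20,
                    generations=100, cluster_every=10, rng=None):
    """Island-model GA skeleton (illustrative). Each island evolves independently;
    every `cluster_every` generations a clustering analysis across islands would
    identify overlapping search regions and redistribute individuals."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T
    islands = [rng.uniform(lo, hi, size=(island_size, lo.size)) for _ in range(n_islands)]

    for gen in range(generations):
        for k, pop in enumerate(islands):
            fit = np.apply_along_axis(objective, 1, pop)
            parents = pop[np.argsort(fit)[: island_size // 2]]       # truncation selection
            children = parents + rng.normal(scale=0.05 * (hi - lo),
                                            size=parents.shape)      # Gaussian mutation
            islands[k] = np.clip(np.vstack([parents, children]), lo, hi)
        if (gen + 1) % cluster_every == 0:
            # placeholder: cluster all individuals, detect islands sampling the same
            # region, then merge those clusters and reassign them to a single island
            pass

    all_pop = np.vstack(islands)
    all_fit = np.apply_along_axis(objective, 1, all_pop)
    return all_pop[np.argmin(all_fit)], float(np.min(all_fit))
```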
APA, Harvard, Vancouver, ISO and other styles
5

Joubert, N. J. D. "Numerical design optimisation for the Karoo Array Telescope". Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2727.

Full text source
Abstract:
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2009.
Although mass minimisation is an important application within structural optimisation, other applications include: (1) concept generation, (2) concept evaluation, (3) design for structural feasibility and (4) data matching. These applications, except data matching, are discussed and illustrated on a prototype design of the Karoo Array Telescope (KAT) antenna. The KAT passed through the design process and a full-scale prototype was built, but was found to be too expensive. A detailed finite element model of the finalised design was considered as a test bed for reducing costs. Size-, shape- and topology optimisation are applied to three KAT components, while considering wind, temperature and gravity loads. Structural and non-structural constraints are introduced. Coupling of the structural optimisation code with an external analysis program to include non-structural responses and the parallelisation of the sensitivity calculations are presented. It is shown that if a finite element model is available, it is generally possible to apply structural optimisation to improve an existing design. A reduction of 2673 kg of steel was accomplished for the existing KAT components. The total cost saving for the project will be significant, considering that a large number of antennas will be manufactured.
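The size and shape optimisation problems referred to here typically take the standard constrained mass-minimisation form; as a hedged illustration only (the actual KAT design variables, load cases and limits are not reproduced):

```latex
\begin{aligned}
\min_{\mathbf{x}}\ \ & m(\mathbf{x}) \\
\text{s.t.}\ \ & \sigma_j(\mathbf{x}) \le \bar{\sigma}, \quad j = 1,\dots,n_\sigma
  \quad \text{(stresses under wind, temperature and gravity loads)}\\
& u_k(\mathbf{x}) \le \bar{u}_k, \quad k = 1,\dots,n_u
  \quad \text{(displacement and other non-structural responses)}\\
& \mathbf{x}^{\mathrm{L}} \le \mathbf{x} \le \mathbf{x}^{\mathrm{U}}
  \quad \text{(size and shape variable bounds).}
\end{aligned}
```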
APA, Harvard, Vancouver, ISO and other styles
6

Rahbary, Asr M. A. "Computer assisted machine tool part-program optimisation". Thesis, Coventry University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.279418.

Full text source
APA, Harvard, Vancouver, ISO and other styles
7

Fraser-Andrews, G. "Numerical techniques for singular optimal trajectories". Thesis, University of Hertfordshire, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372080.

Full text source
Abstract:
The objectives of the subject-matter of this thesis were to appraise some methods of solving non-singular optimal control problems by their degree of success in tackling four chosen problems and then to try the most promising methods on chosen singular problems. In Part I of this thesis, the chosen problems are attempted by quasilinearisation, two versions of shooting, Miele's method, differential dynamic programming and two versions of parameterisation. Conclusions on the various methods are given. NOC shooting, developed by the Numerical Optimisation Centre of The Hatfield Polytechnic, and constrained optimisation were found to be very useful for non-singular problems. In Part II, the properties and calculation of possible singular controls are investigated, and then the two chosen methods are used. It was found that NOC shooting was again very useful, provided the solution structure is known, and that constrained parameterisation was invaluable for determining the solution structure and when shooting is impossible. Contributions to knowledge are as follows. In Part I, the relative merits of various methods are displayed, additions are made to the theory of parameterisation, shooting and quasilinearisation, the best known solutions of the chosen problems are produced and choices of optimisation parameters for one chosen problem, the satellite problem, are compared. The satellite problem has dependent state variables and the Maximum Principle is extended in Appendix III to cover this case. In Part II, a thorough survey of the properties of singular controls is given, the calculation of possible singular controls is clarified and extended, the utility of the two chosen methods is displayed, the best known solutions of the Goddard problem are obtained with improved understanding of transitions in solution structures, a problem with control dependent on the costate variables is studied, and singular solution structures are found.
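As background (standard optimal control theory, not specific to the thesis's particular problems), a control is singular on an interval where the switching function of the Hamiltonian vanishes identically, so the Maximum Principle gives no direct information about the control there:

```latex
H(x,\lambda,u,t) = \lambda^{\mathsf{T}} f(x,u,t) + L(x,u,t), \qquad
\frac{\partial H}{\partial u} \equiv 0 \ \text{on } [t_1, t_2], \qquad
(-1)^{q}\,\frac{\partial}{\partial u}\!\left[\frac{\mathrm{d}^{2q}}{\mathrm{d}t^{2q}}\,
\frac{\partial H}{\partial u}\right] \ge 0 .
```

The singular control is recovered by differentiating the switching function with respect to time until the control appears explicitly (after an even number 2q of differentiations); the final inequality is the generalised Legendre-Clebsch necessary condition.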
APA, Harvard, Vancouver, ISO and other styles
8

Jones, R. "Numerical optimisation techniques applied to problems in continuum mechanics". Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378760.

Full text source
APA, Harvard, Vancouver, ISO and other styles
9

Wang, Tao. "Numerical simulation and optimisation for shot peen forming processes". Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620031.

Full text source
APA, Harvard, Vancouver, ISO and other styles
10

Abuladze, Vissarion. "Numerical analysis and shape optimisation of concrete gravity dams". Thesis, London South Bank University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336375.

Full text source
Abstract:
The Finite Element and Boundary Element Methods are both well established numerical techniques for analysing a wide range of engineering problems. In the present thesis these numerical techniques are used for obtaining a more realistic picture of various characteristics of concrete gravity dams. The present work addresses the behaviour of gravity dams under static loading, and the developed analysis procedure/computer package can cater for a wide range of dam characteristics including: the three-dimensional behaviour of a gravity dam-foundation-abutments system; the non-linear behaviour of a dam and foundation materials; the sequential construction of a dam and impounding of the reservoir loading on the structure; the effect on stresses of interfaces and joints existing between a dam and its foundation, and in the body of a dam itself; the action of pore water pressure within the foundation, at the dam-foundation interface, and in the body of a gravity dam; etc. Using the purpose-written computer package which can cater (in an efficient and accurate way) for the influence of all such factors, mathematical programming methods are then used to produce a powerful tool for the shape optimisation of gravity dams leading to safe, functional and economical solutions to the problem. In the course of developing the computer program, much care has been exercised as regards the appropriate selection of the finite element types, mesh configurations and mesh densities, in order to reflect (in an efficient fashion) the variation of stress gradients in the body of a gravity dam. In order to reduce the high costs associated with a full three-dimensional analysis, a rather efficient method is developed which enables one to carry out equivalent two-dimensional computer runs which will effectively simulate the actual three-dimensional behaviour of gravity dams in, for example, narrow valleys. The proposed approach reduces the dimensionality of an actual problem by one, thus eliminating the main disadvantage of the finite element method in terms of high solution costs for three-dimensional problems. As a result, the proposed method makes the solution procedure highly cost effective. By coupling the finite element-boundary element (FEBE) techniques, which can cater for the material non-linearities in the appropriate regions of the foundation, an attempt is made to by-pass the individual disadvantages of both these numerical techniques. It has then been possible to exploit the advantages of reducing the dimensionality of the foundation region by one using the boundary element technique, and, hence, come up with significant savings in terms of computer running times. Anisotropic tangent constitutive models for plain concrete under a general state of biaxial static monotonic loading, for both plane-stress and plane-strain states of stress, are proposed which are simple in nature and use data readily available from uniaxial tests. These models have been implemented into the computer program which is then used to investigate the influence of the step-by-step construction of the dam and the sequential impoundment of the reservoir loading on the state of stresses. The non-linear program is also used to analyse various characteristics of the Bratsk concrete gravity dam (in Russia). The correlations between the numerical results and extensive field measurements on this dam have been found to be encouraging.
Isoparametric quadratic interface finite elements for analysing the dam-foundation interaction problem have also been developed. These elements have zero thickness and are based on an extension of the linear interface elements reported by others. The numerical problems of ill-conditioning (usually associated with zero thickness elements) are critically investigated using test examples, and have been found to be due to inadequate finite element mesh design. Non-linear elastic tangent constitutive models for simulating the shear stress-relative displacement behaviour of interfaces have also been developed, and are used to analyse the effects of including interface elements at the dam-foundation region of contact. It is shown that the inclusion of interface elements in the numerical analyses of the dam-foundation system leads to rather significant changes in the magnitudes of the critical tensile stresses acting at the heel of the dam, which have previously been evaluated (by others) using a rigid dam-foundation interconnection scheme. Effects of pore water pressure, acting as a body force throughout the foundation, the dam-foundation interface and the body of a gravity dam, are also critically studied, with the pore pressure values predicted by seepage analysis. Using an extensive set of numerical studies, a number of previously unresolved issues as regards the influence of pore pressures on the state of stresses are clarified. The effect of drainage on the state of stresses within the body of a dam is investigated, and an insight is also given into the effect of the uplift acting at the lift lines between successive layers of Roller Compacted Concrete (RCC) dams. A shape optimisation procedure for gravity dams based on the penalty function method and a sequential unconstrained minimisation technique is also developed. A number of shape optimisations of idealised gravity dams are carried out in order to compare the numerical results with previously available analytical solutions. The present work also caters for the effects of foundation elasticity and uplift on the optimal shape of a gravity dam. A numerical example is provided covering the shape optimisation of a hollow gravity dam. Finally, the shape optimisation of an actual dam (i.e. the Tvishi gravity dam in Georgia) using the presently proposed procedures is carried out, with the final results compared with those available from the project design team. Wherever possible, numerical outputs have been checked against available small- or full-scale test data or previously reported closed-form solutions. Throughout this thesis very encouraging correlations between the present predictions and such experimental and theoretical data have been obtained.
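The shape optimisation procedure mentioned above combines a penalty function method with a sequential unconstrained minimisation technique (SUMT). A minimal sketch of that general scheme is shown below, with an exterior quadratic penalty and a generic unconstrained minimiser; the objective, constraints and inner solver are illustrative assumptions, not the thesis's dam model:

```python
import numpy as np
from scipy.optimize import minimize

def sumt(objective, constraints, x0, r0=1.0, growth=10.0, outer_iters=6):
    """Exterior quadratic-penalty SUMT sketch.
    constraints: list of functions g_i with the convention g_i(x) <= 0 feasible.
    Each outer iteration minimises  f(x) + r * sum(max(0, g_i(x))^2)  and then
    increases the penalty parameter r."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer_iters):
        def penalised(z):
            viol = np.array([max(0.0, g(z)) for g in constraints])
            return objective(z) + r * np.sum(viol**2)
        x = minimize(penalised, x, method="Nelder-Mead").x   # unconstrained inner solve
        r *= growth
    return x

# usage: minimise x0^2 + x1^2 subject to x0 + x1 >= 1  (written as 1 - x0 - x1 <= 0)
x_opt = sumt(lambda x: x[0]**2 + x[1]**2,
             [lambda x: 1.0 - x[0] - x[1]],
             x0=[2.0, 0.0])
print(x_opt)   # approaches [0.5, 0.5] as the penalty parameter grows
```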
APA, Harvard, Vancouver, ISO and other styles
11

Agbede, O. A. "Numerical simulation and optimisation studies of groundwater in northern Nigeria". Thesis, City University London, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355577.

Full text source
APA, Harvard, Vancouver, ISO and other styles
12

Cowie, Andrew Richard. "Numerical optimisation of building thermal and energy performance in hospitals". Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/16460/.

Full text source
Abstract:
This thesis details the development and testing of a metamodel-based building optimisation methodology dubbed thermal building optimisation tool (T-BOT), designed as an information gathering framework and decision support tool rather than a design automator. Initial samples of building simulations are used to train moving least squares regression (MLSR) meta-models of the design space. A genetic algorithm (GA) is then used to optimise with the dual objectives of minimising time-averaged thermal discomfort and energy use. The optimum trade-off is presented as a Pareto front. Adaptive coupling functionality of the building simulation program ESP-r is used to augment the dynamic thermal model (DTM) with computational fluid dynamics (CFD), allowing local evaluation of thermal comfort within rooms. Furthermore, the disconnect between simulation and optimisation induced by the metamodeling is exploited to lend flexibility to the data gathered in the initial samples. Optimisations can hence be performed for any combination of location, time period, thermal comfort criteria and design variables, from a single set of sample simulations; this was termed a “one sample many optimisations” or OSMO approach. This can present substantial time savings over a comparable direct search optimisation technique. To the author’s knowledge the OSMO approach and adaptive coupling of DTM and CFD are unique among building thermal optimisation (BTO) models. Development and testing was focussed on hospital environments, though the method is potentially applicable to other environments. The program was tested by application to two models, one a theoretical test case and one a case study based on a real hospital building. It was found that variation in spatial location, time period and thermal comfort criteria can result in different optimum conditions, though seasonal variation had a large effect on this. Also the sample size and selection of design variables and their ranges were found to be critical to meta-model fidelity.
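The moving least squares regression (MLSR) metamodels mentioned above fit a weighted local regression at every prediction point. A compact, hedged sketch of an MLS prediction with a linear basis and Gaussian weights (common choices assumed here, not necessarily those used in T-BOT) is:

```python
import numpy as np

def mls_predict(x_query, X, y, radius=1.0):
    """Moving least squares prediction with a linear basis and Gaussian weights.
    X : (n, d) sample inputs,  y : (n,) sample responses,  x_query : (d,) point.
    A weighted linear least-squares fit is recomputed locally for every query."""
    X, y, xq = np.asarray(X, float), np.asarray(y, float), np.asarray(x_query, float)
    d2 = np.sum((X - xq)**2, axis=1)
    w = np.exp(-d2 / radius**2)                       # Gaussian weights, largest near xq
    P = np.hstack([np.ones((X.shape[0], 1)), X])      # linear basis [1, x1, ..., xd]
    sw = np.sqrt(w)[:, None]
    coeff, *_ = np.linalg.lstsq(sw * P, sw[:, 0] * y, rcond=None)
    return float(np.concatenate(([1.0], xq)) @ coeff)

# usage: fit noisy samples of y = sin(x) and predict at x = 1.2 (1-D example)
rng = np.random.default_rng(0)
Xs = rng.uniform(0, np.pi, size=(40, 1))
ys = np.sin(Xs[:, 0]) + 0.01 * rng.normal(size=40)
print(mls_predict([1.2], Xs, ys, radius=0.5))         # close to sin(1.2) ~ 0.932
```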
APA, Harvard, Vancouver, ISO and other styles
13

Sim, Lik Fang. "Numerical and experimental optimisation of a high performance heat exchanger". Thesis, Sheffield Hallam University, 2007. http://shura.shu.ac.uk/20362/.

Full text source
Abstract:
The aim of this research is to numerically and experimentally scrutinise the thermal performance of a typical heat exchanger fitted in a domestic condensing boiler. The optimisation process considered the pins' geometry (circular pins and elliptical pins), the pins' spacing, the pitch distance, the pressure drop across the heat chamber and the occurrence of thermal hot spots. The first part of the study focused on the effect of altering the circular pin spacing and pin pitch distance of the heat exchanger. Computational Fluid Dynamics (CFD) is used to scrutinise the thermal performance and the air flow properties of each model by changing these two parameters. In total, 13 circular pin models were investigated. Numerical modelling was used to analyse the performance of each model in a three-dimensional computational domain. For comparison, all models shared similar boundary conditions and maintained the same pin height of 35 mm and pin diameter of 8 mm. The results showed that, at a given flow rate, the total heat transfer rate is more sensitive to a change in the pin spacing than to a change in the pin pitch. The results also showed that an optimum spacing of circular pins can increase the heat transfer rate by up to 10%. The second part of the study focused on investigating the thermal performance of elliptical pins. Four elliptical pin setups were created to study the thermal performance and the air flow properties. In comparison with circular pins, the simulation results showed that the optimum use of eccentricity of elliptical pins could increase the total energy transfer by up to 23% and reduce the pressure drop by 55%. To validate the acquired CFD results, a Thermal Wind Tunnel (TWT) was designed, built and commissioned. The experimental results showed that the numerical simulation under-predicted the circular pin models' core temperatures, but over-predicted the elliptical models' core temperatures. This effect is due to the default values of the standard k-ε transport equations model used in the numerical study. Both numerical and experimental results showed that the elliptical models performed better than their circular-pin counterparts. The study also showed that heat exchanger optimisation can be carried out within a fixed physical geometry with the effective use of CFD.
APA, Harvard, Vancouver, ISO and other styles
14

Melliani, Mohamed. "Analyse numérique d'algorithmes proximaux généralisés en optimisation convexe". Rouen, 1997. http://www.theses.fr/1997ROUES030.

Full text source
Abstract:
The thesis studies a generalisation of the proximal point algorithm in convex optimisation, from both a theoretical and a numerical point of view. The equivalent of this generalisation for the Tikhonov algorithm is also proposed. Working first within the framework of variational convergence, the generalised proximal method is initially combined with penalty methods. Then, when applied to the dual problem, it yields new multiplier methods, different from those introduced by Eckstein and Teboulle. These multiplier methods encompass, in particular, the method of Hestenes and Powell, that of Rockafellar, and some of the methods of Kort and Bertsekas.
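For reference, the classical proximal point iteration that is being generalised can be written as below; generalised proximal methods of this family typically replace the quadratic kernel with a more general distance-like function D (the exact kernel studied in the thesis is not specified here):

```latex
x^{k+1} \in \operatorname*{arg\,min}_{x}\left\{ f(x) + \frac{1}{\lambda_k}\, D\!\left(x, x^{k}\right) \right\},
\qquad D(x,y) = \tfrac{1}{2}\lVert x - y\rVert^{2} \ \text{recovers the classical proximal point algorithm.}
```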
APA, Harvard, Vancouver, ISO and other styles
15

Lee, Wei R. "Computational studies of some static and dynamic optimisation problems". Curtin University of Technology, School of Mathematics and Statistics, 1999. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10284.

Full text source
Abstract:
In this thesis we shall investigate the numerical solutions to several important practical static and dynamic optimization problems in engineering and physics. The thesis is organized as follows. In Chapter 1 a general literature review is presented, including motivation and development of the problems, and existing results. Furthermore, some existing computational methods for optimal control problems are also discussed. In Chapter 2 the design of a semiconductor device is posed as an optimization problem: given an ideal voltage-current (V-I) characteristic, find one or more physical and geometrical parameters so that the V-I characteristic of the device matches the ideal one optimally with respect to a prescribed performance criterion. The voltage-current characteristic of a semiconductor device is governed by a set of nonlinear partial differential equations (PDE), and thus a black-box approach is taken for the numerical solution to the PDEs. Various existing numerical methods are proposed for the solution of the nonlinear optimization problem. The Jacobian of the cost function is ill-conditioned and a scaling technique is thus proposed to stabilize the resulting linear system. Numerical experiments, performed to show the usefulness of this approach, demonstrate that the approach always gives optimal or near-optimal solutions to the test problems in both two and three dimensions. In Chapter 3 we propose an efficient approach to numerical integration in one and two dimensions, where a grid set with a fixed number of vertices is to be chosen so that the error between the numerical integral and the exact integral is minimized. For the one-dimensional problem, two schemes are developed for sufficiently smooth functions, based on the mid-point rectangular quadrature rule and the trapezoidal rule respectively, and another method is also developed for integrands which are not sufficiently smooth. For two-dimensional problems, two schemes are first developed for sufficiently smooth functions. One is based on the barycenter rule on a rectangular partition, while the other is on a triangular partition. A scheme for insufficiently smooth functions is also developed. For illustration, several examples are solved using the proposed schemes, and the numerical results show the effectiveness of the approach. Chapter 4 deals with optimal recharge and driving plans for a battery-powered electric vehicle. A major problem facing battery-powered electric vehicles is in their batteries: weight and charge capacity. Thus a battery-powered electric vehicle only has a short driving range. To travel for a longer distance, the batteries are required to be recharged frequently. In this chapter we construct a model for a battery-powered electric vehicle, in which a driving strategy is to be obtained so that the total traveling time between two locations is minimized. The problem is formulated as an unconventional optimization problem. However, by using the control parameterization enhancing transformation (CPET) (see [100]) it is shown that this unconventional optimization problem is equivalent to a conventional optimal parameter selection problem. Numerical examples are solved using the proposed method. In Chapter 5 we consider the numerical solution to a class of optimal control problems involving variable time points in their cost functions. The CPET is first used to convert the optimal control problem with variable time points into an equivalent optimal control problem with fixed multiple characteristic times (MCT). Using the control parameterization technique, the time horizon is partitioned into several subintervals. Let the partition points also be taken as decision variables. The control functions are approximated by piecewise constant or piecewise linear functions in accordance with these variable partition points. We thus obtain a finite dimensional optimization problem. The CPET transform is again used to convert the approximate optimal control problems with variable partition points into equivalent standard optimal control problems with MCT, where the control functions are piecewise constant or piecewise linear functions with pre-fixed partition points. The transformed problems are essentially optimal parameter selection problems with MCT. The gradient formulae are obtained for the objective function as well as the constraint functions with respect to the relevant decision variables. Numerical examples are solved using the proposed method. A numerical approach is proposed in Chapter 6 for constructing an approximate optimal feedback control law for a class of nonlinear optimal control problems. In this approach, the state space is partitioned into subdivisions, and the controllers are approximated by a linear combination of third-order B-spline basis functions. Furthermore, the partition points are also taken as decision variables in this formulation. To show the effectiveness of the proposed approach, a two-dimensional and a three-dimensional example are solved by the approach. The numerical results demonstrate that the method is superior to the existing methods with fixed partition points.
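The control parameterisation step described in Chapters 5 and 6 approximates the control by simple basis functions over a partition of the time horizon; schematically, for the piecewise-constant case (the CPET transformation details are omitted and the notation below is illustrative):

```latex
u(t) \;\approx\; \sum_{j=1}^{p} \sigma_{j}\,\chi_{[\tau_{j-1},\,\tau_{j})}(t),
\qquad 0 = \tau_{0} \le \tau_{1} \le \dots \le \tau_{p} = T,
```

where both the control levels σ_j and the partition points τ_j are treated as decision variables, so that the optimal control problem becomes a finite-dimensional optimisation problem whose objective and constraint gradients can be computed.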
APA, Harvard, Vancouver, ISO and other styles
16

Kraitong, Kwanchai. "Numerical modelling and design optimisation of Stirling engines for power production". Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/8100/.

Full text source
Abstract:
This research is in the area of Thermal Energy Conversion, more specifically, in the conversion of solar thermal energy. This form of renewable energy can be utilised for the production of power by using thermo-mechanical conversion systems - Stirling engines. The advantage of such systems is their capability to work on low and high temperature differences created by the concentrated solar radiation. To design and build efficient, high-performance engines in a feasible period of time it is necessary to develop advanced mathematical models based on thermodynamic analysis which accurately describe the heat and mass transfer processes taking place inside the machines. The aim of this work was to develop such models and to evaluate their accuracy by calibrating them against published and available experimental data and against more advanced three-dimensional Computational Fluid Dynamics models. The refined mathematical models were then coupled to Genetic Algorithm optimisation codes to find a rational set of engine design parameters which would ensure the high performance of the machines. The validation of the developed Stirling engine models demonstrated that there was a good agreement between the numerical results and the published experimental data. The new set of design parameters of the engine obtained from the optimisation procedure provides further enhancement of the engine performance. The mathematical modelling and design approaches developed in this study with the use of optimisation procedures can be successfully applied in practice for the creation of more efficient and advanced Stirling engines for power production.
APA, Harvard, Vancouver, ISO and other styles
17

Sharkey, Patrick S. "Optimisation of charge-air coolers for vehicular applications using numerical techniques". Thesis, University of Brighton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309180.

Full text source
APA, Harvard, Vancouver, ISO and other styles
18

Pohl, Julien. "Turbine stator well heat transfer and design optimisation using numerical methods". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/15939/.

Full text source
Abstract:
Engine components are commonly exposed to air temperatures exceeding the thermal material limit in order to increase the overall engine performance and to maximise the engine specific fuel consumption. To prevent the overheating of the materials and thus the reduction of the component life, an internal flow system must be designed to cool the critical engine parts and to protect them. As the coolant flow is bled from the compressor and not used for the combustion, an important goal is to minimise the amount of coolant in order to optimise the overall engine performance. Predicting the metal temperatures is of paramount importance as they are a major factor in determining the component stresses and lives. In addition, as modern engines operate in ever harsher conditions due to efficiency requirements, the ability to predict thermo-mechanical displacements becomes very relevant: on the one hand, to prevent damage of components due to excessive rubbing, on the other hand, to understand how much air is flowing internally within the secondary air system for cooling and sealing purposes, not only in the design condition but throughout the engine life-span. In order to achieve this, aero-engine manufacturers aim to use more and more accurate numerical techniques requiring multi-physics models, including thermo-mechanical finite elements and CFD models, which can be coupled in order to investigate small variations in temperatures and displacements. This thesis shows a practical application and extension of a numerical methodology for predicting conjugate heat transfer. Extensive use is made of FEA (solids) and CFD (fluid) modelling techniques to understand the thermo-mechanical behaviour of a turbine stator well cavity, due to the interaction of the cooling air supply with the main annulus. Previous work based on the same rig showed difficulties in matching predictions to thermocouple measurements near the rim seal gap. In this investigation, further use is made of existing measurements of hot running seal clearances in the rig. The structural deflections are applied to the existing model to evaluate the impact on flow interactions and heat transfer. Furthermore, for one test case unsteady CFD simulations are conducted in order to take into account the flow unsteadiness in the heat transfer predictions near the rim. In addition to a baseline test case without net ingestion, a case simulating engine deterioration with net ingestion is validated against the available test data, also taking into account cold and hot running seal clearances. Furthermore, an additional geometry with a stationary deflector plate is modelled and validated for the same flow cases. Experiments as well as numerical simulations have shown that due to the deflector plate the cooling flow is fed more directly into the disc boundary layer, allowing more effective use of less cooling air, leading to improved engine efficiency. Therefore, the deflector plate geometry is embedded in a CFD-based automated optimisation loop to further reduce the amount of cooling air. The optimisation strategy concentrates on a flexible design parameterisation of the cavity geometry with the deflector plate and its implementation in an automatic 3D meshing system, with a view to finally executing an automated design optimisation. Special consideration is given to the flexibility of the parameterisation method in order to reduce the design variables to a minimum while also increasing the design space flexibility and generality.
The parameterised geometry is optimised using a metamodel-assisted approach based on regressing Kriging in order to identify the optimum position and orientation of the deflector plate inside the cavity. The outcome of the optimisation is validated using the benchmarked FEA-CFD coupling methodology.
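A metamodel-assisted loop of the general kind described above alternates between fitting a Kriging model to the evaluated designs and searching that cheap surrogate for a new candidate. The sketch below is illustrative only: scikit-learn's Gaussian process regressor stands in for the regressing Kriging model actually used, the infill rule is plain surrogate minimisation rather than the thesis's strategy, and the objective is a cheap stand-in for the coupled FEA-CFD evaluation:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def metamodel_assisted_minimise(expensive_objective, bounds, n_initial=12, n_infill=8, seed=0):
    """Fit a Kriging-type surrogate to an initial design, then repeatedly minimise the
    surrogate mean and evaluate the expensive function at the surrogate optimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_initial, lo.size))           # initial DoE (random here)
    y = np.array([expensive_objective(x) for x in X])

    for _ in range(n_infill):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(),
                                      normalize_y=True).fit(X, y)
        res = differential_evolution(lambda x: gp.predict(x.reshape(1, -1))[0], bounds)
        X = np.vstack([X, res.x])
        y = np.append(y, expensive_objective(res.x))              # expensive infill evaluation

    best = np.argmin(y)
    return X[best], y[best]

# usage with a cheap analytic stand-in for the coupled simulation
x_best, f_best = metamodel_assisted_minimise(lambda x: (x[0] - 0.3)**2 + (x[1] + 0.5)**2,
                                             bounds=[(-1, 1), (-1, 1)])
print(x_best, f_best)
```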
APA, Harvard, Vancouver, ISO and other styles
19

Obayopo, S. O. (Surajudeen Olanrewaju). "Performance enhancement in proton exchange membrane cell - numerical modeling and optimisation". Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/26247.

Full text source
Abstract:
Sustainable growth and development in a society requires an energy supply that is efficient, affordable, readily available and, in the long term, sustainable without causing negative societal impacts, such as environmental pollution and its attendant consequences. In this respect, proton exchange membrane (PEM) fuel cells offer a promising alternative to existing conventional fossil fuel sources for transport and stationary applications due to their high efficiency, low-temperature operation, high power density, fast start-up and portability for mobile applications. However, to fully harness the potential of PEM fuel cells, there is a need for improvement in the operational performance, durability and reliability during usage. There is also a need to reduce the cost of production to achieve commercialisation and thus compete with existing energy sources. The present study has therefore focused on developing novel approaches aimed at improving output performance for this class of fuel cell. In this study, an innovative combination of numerical computation and optimisation techniques, which could serve as an alternative to the laborious and time-consuming trial-and-error approach to fuel cell design, is presented. In this novel approach, the limitation to the optimal design of a fuel cell was overcome by the search algorithm (Dynamic-Q), which is robust at finding optimal design parameters. The methodology involves integrating the computational fluid dynamics equations with a gradient-based optimiser (Dynamic-Q) which uses successive objective and constraint function approximations to obtain the optimum design parameters. Specifically, using this methodology, we optimised the PEM fuel cell internal structures, such as the gas channels, the gas diffusion layer (GDL) - relative thickness and porosity - and the reactant gas transport, with the aim of maximising the net power output. A thermal-cooling modelling technique was also applied to maximise the system performance at elevated working temperatures. The study started with a steady-state three-dimensional computational model to study the performance of a single-channel proton exchange membrane fuel cell under varying operating conditions, and the combined effect of these operating conditions was also investigated. The results show that temperature, gas diffusion layer porosity, cathode gas mass flow rate and species flow orientation significantly affect the performance of the fuel cell. The effect of the operating and design parameters on PEM fuel cell performance is also more dominant at low operating cell voltages than at higher operating fuel cell voltages. In addition, this study establishes the need to match the PEM fuel cell parameters, such as porosity, species reactant mass flow rates and fuel gas channel geometry, in the system design for maximum power output. This study also presents a novel design, using pin fins, to enhance the performance of the PEM fuel cell through optimised reactant gas transport at a reduced pumping power requirement for the reactant gases. The results obtained indicated that the flow Reynolds number had a significant effect on the flow field and the diffusion of the reactant gas through the GDL medium. In addition, an enhanced fuel cell performance was achieved using pin fins in a fuel cell gas channel, which ensured high performance and a low fuel channel pressure drop of the fuel cell system.
It should be noted that this study is the first attempt at enhancing the oxygen mass transfer through the PEM fuel cell GDL at reduced pressure drop using pin fins. Finally, the impact of the cooling channel geometric configuration (in combination with stoichiometry ratio, relative humidity and coolant Reynolds number) on effective thermal heat transfer and performance in the fuel cell system was investigated, with a view to determining effective thermal management designs for this class of fuel cell. Numerical results show that operating parameters such as stoichiometry ratio, relative humidity and cooling channel aspect ratio have a significant effect on fuel cell performance, primarily by determining the level of membrane dehydration of the PEM fuel cell. The results showed the possibility of operating a PEM fuel cell beyond the critical temperature (80 °C), using the combined optimised stoichiometry ratio, relative humidity and cooling channel geometry, without the need for special temperature-resistant materials for the PEM fuel cell, which are very expensive. In summary, the results from this study demonstrate the potential of optimisation techniques in improving PEM fuel cell design. Overall, this study will add to the knowledge base needed to produce generic design information for fuel cell systems, which can be applied to better designs of fuel cell stacks.
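The CFD-optimiser coupling described here treats the flow model as a black box whose responses feed a gradient-based search. The sketch below is a hedged illustration of that pattern only: SciPy's SLSQP with finite-difference gradients stands in for the Dynamic-Q algorithm actually used, and run_cfd_case is a hypothetical placeholder for the expensive CFD evaluation:

```python
from scipy.optimize import minimize

def run_cfd_case(design):
    """Hypothetical placeholder: 'run' the CFD model for a design vector
    (e.g. channel width, GDL porosity, reactant flow rate) and return the
    net power and pumping power. A real coupling would launch the solver here."""
    width, porosity, flow = design
    net_power = -(width - 0.8)**2 - (porosity - 0.4)**2 + 1.0 + 0.1 * flow
    pumping_power = 0.2 * flow**2
    return net_power, pumping_power

def objective(design):
    net_power, _ = run_cfd_case(design)
    return -net_power                                   # maximise net power output

constraints = [{"type": "ineq",                          # pumping power <= 0.05 (fun >= 0)
                "fun": lambda d: 0.05 - run_cfd_case(d)[1]}]

result = minimize(objective, x0=[1.0, 0.5, 0.3], method="SLSQP",
                  bounds=[(0.5, 2.0), (0.2, 0.8), (0.1, 1.0)],
                  constraints=constraints)               # gradients by finite differences
print(result.x, -result.fun)
```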
Thesis (PhD)--University of Pretoria, 2012.
Mechanical and Aeronautical Engineering
unrestricted
APA, Harvard, Vancouver, ISO and other styles
20

Hall, James. "Geometry and topology optimisation with Eulerian and Lagrangian numerical fluid models". Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.702216.

Full text source
Abstract:
Although design optimisation has been well explored using mesh-based approaches, little work has been performed with meshless simulation. Equally, design optimisation has been well explored for methods capable of representing a single design topology, but much less well explored are methods that allow for optimisation of design geometry and topology. To allow for topology changes in design, a new volume-based parameterisation method is proposed which uses the fraction of volume that is solid in an underlying parameterisation grid as the design variables. This technique can be easily used with a low number of design variables, making it usable with agent based global optimisation methods and black box solvers. Optimisation of a NACA 0012 aerofoil in transonic flow is performed with the new parameterisation method and multi-body aerofoil configurations are obtained for optimisation with supersonic flow. Optimising the design of a pivoting, fluid filled tank, shows that the damping of the tank motions can be affected by the tank geometry, which suggests that the wing fuel tanks can be designed to alleviate the flutter instability. It is shown that the effect of fuel is to raise the flutter boundary so the concept of optimising tank design is explored by optimisation of the external tank geometry and by optimising interior baffle configuration. Orifices for vascular self healing networks in composites are optimised to increase mass flow rate. Additionally, the flow of self healing resin into a representative composite crack geometry is modelled using a smoothed particle hydrodynamics solver which incorporates surface tension. The design of a coastal defence structure is also automated through an optimisation process with the fluid behaviour being modelled by smoothed particle hydrodynamics. These optimisation cases have produced novel designs but also, importantly, demonstrate the versatility of the volume based shape parameterisation and the importance of topological change in fluids optimisation.
APA, Harvard, Vancouver, ISO and other styles
21

Holovatch, T. "Complex transportation networks : resilience, modelling and optimisation". Thesis, Coventry University, 2011. http://curve.coventry.ac.uk/open/items/eafefd84-ff08-43cf-a544-597ee5e63237/1.

Full text source
APA, Harvard, Vancouver, ISO and other styles
22

Kozlowski, Fryderyk. "Numerical simulation and optimisation of organic light emitting diodes and photovoltaic cells". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1134592504212-65990.

Full text source
Abstract:
A numerical model and results for the quantitative simulation of multilayer organic light emitting diodes (OLED) and organic solar cells (OSC) are presented. In the model, effects like bipolar charge carrier drift and diffusion with field-dependent mobilities, trapping, dopants, indirect and direct bimolecular recombination, singlet Frenkel exciton diffusion, normal decay and quenching effects are taken into account. For an adequate description of multilayer devices with energetic barriers at interfaces between two adjacent organic layers, thermally assisted charge carrier hopping through the interface, interface recombination, and the formation of interface charge transfer (CT) states have been introduced in the model. For the simulation of OSC, the generation of carrier pairs in the mixed layer or at the interface is additionally implemented. The light absorption profile is calculated from optical simulations and used as an input for the electrical simulation. The model is based on three elements: the Poisson equation, the rate equations for charge carriers and the rate equations for singlet Frenkel excitons. These equations are simultaneously solved by spatial and temporal discretisation using the appropriate boundary conditions and electrical parameters. The solution is found when a steady state is reached, as indicated by a constant value of the current density. The simulation provides a detailed look into the distribution of the electric field and the concentration of free and trapped carriers at a particular applied voltage. For organic light emitting diodes, the numerical model helps to analyze the problems of different structures and provides deeper insight into the relevant physical mechanisms involved in device operation. Moreover, it is possible to identify technological problems for certain sets of devices. For instance, we could show that, in contrast to literature reports, the contact between Alq3 and LiF/Al did not show ohmic behaviour for the series of devices. The role of an additional organic blocking layer between HTL and EML was presented. The explanation for the higher creation efficiency for singlet excitons in the three-layer structure is found in the separation of free holes and electrons accumulating close to the internal interface 1-Naphdata/Alq3. The numerical calculation has demonstrated the importance of controlled doping of the organic materials, which is a way to obtain efficient light emitting diodes with low operating voltage. The experimental results have been reproduced by numerical simulation for a series of OLEDs with different thicknesses of the hole transport layer and emitting layer and for doped emitting layers. The advantages and drawbacks of solar cells based on flat heterojunctions and bulk heterojunctions are analyzed. From the simulations, it can be understood why bulk heterojunctions typically yield higher photocurrents while flat heterojunctions typically feature higher fill factors. In p-i-n structures, p and n are doped wide-gap materials and i is a photoactive donor-acceptor blend layer using, e.g., zinc phthalocyanine as a donor and C60 as an acceptor component. It is found that by introducing trap states, the simulation is able to reproduce the linear dependence of short circuit currents on the light intensity. The apparent light-induced shunt resistance often observed in organic solar cells can also be explained by losses due to trapping and indirect recombination of photogenerated carriers, which we consider a crucial point of our work.
However, these two effects, the linear scaling of the photocurrent with light intensity and the apparent photoshunt, could also be reproduced when field-dependent geminate recombination is assumed to play a dominant role. First results that show a temperature independent short circuit photocurrent favour the model based on trap-mediated indirect recombination.
APA, Harvard, Vancouver, ISO and other styles
23

Sosa, Paz Carlos. "Numerical optimisation methods for power consumption in multi-hop mobile phone networks". Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/752/.

Full text source
Abstract:
In recent years the importance of multi-hop wireless networks has been growing, mainly due to key factors such as: no backbone infrastructure or installation cost is required, and the network can be rapidly deployed and configured. A main problem in this kind of network is to establish an efficient use of the power involved in the communication between network devices. In this dissertation, we present two different mathematical models which represent a multi-hop wireless network. The first model considers joint routing, scheduling and power control for TDMA/CDMA multi-hop wireless network systems, minimising the power used to send messages through a multi-hop wireless network. In this model we consider the scheduling of the message transmission. The scheduling is essential since it coordinates the transmission between devices in order to reduce the interference caused by neighbouring devices and the background noise. Interference plays an important role in this kind of network since the quality of service depends on it. We use two quality indicators: the Signal to Noise Ratio (SNR) and the Signal to Interference Noise Ratio (SINR). The second model considers joint routing and power control for CDMA multi-hop wireless network systems, in which we include two different filters: the Single User Matched Filter (SUMF) and the Minimum Mean Squared Error (MMSE) filter, without considering the scheduling problem. The set of constraints inherent to both mathematical models is non-convex; therefore we have a non-convex optimisation problem. Since the problem is non-convex, the obtained solutions are only local minimisers. We present and prove two theorems which, in general terms, state that at a local minimiser the capacity over the links is fully exploited. We present different sets of experiments and numerical solutions for both mathematical models.
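The two quality indicators are standard; for a link of transmitter i, with link gains G, transmit powers p and background noise power σ², the SINR constraint commonly takes the following form (the SNR is the same expression with the interference sum omitted):

```latex
\mathrm{SINR}_{i} \;=\; \frac{G_{ii}\,p_{i}}{\sigma^{2} + \sum_{j\neq i} G_{ij}\,p_{j}} \;\ge\; \gamma_{i},
```

where γ_i is the quality-of-service threshold required on that link; coupled constraints of this form are one source of the non-convexity mentioned in the abstract.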
APA, Harvard, Vancouver, ISO and other styles
24

Hill, David Charles. "A methodology for numerical estimation of physical sediment parameters in coastal waters". Thesis, Bangor University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.484083.

Full text source
APA, Harvard, Vancouver, ISO and other styles
25

Wise, John Nathaniel. "Inverse modelling and optimisation in numerical groundwater flow models using proportional orthogonal decomposition". Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97116.

Full text source
Abstract:
Thesis (PhD)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Numerical simulations are widely used for predicting and optimising the exploitation of aquifers. They are also used to determine certain physical parameters, for example soil conductivity, by inverse calculations, where the model parameters are changed until the model results correspond optimally to measurements taken on site. The Richards’ equation describes the movement of an unsaturated fluid through porous media, and is characterised as a non-linear partial differential equation. The equation is subject to a number of parameters and is typically computationally expensive to solve. To determine the parameters in the Richards’ equation, inverse modelling studies often need to be undertaken. In these studies, the parameters of a numerical model are varied until the numerical response matches a measured response. Inverse modelling studies typically require 100’s of simulations, which implies that parameter optimisation in unsaturated case studies is common only in small or 1D problems in the literature. As a solution to overcome the computational expense incurred in inverse modelling, the use of Proper Orthogonal Decomposition (POD) as a Reduced Order Modelling (ROM) method is proposed in this thesis to speed-up individual simulations. An explanation of the Finite Element Method (FEM) is given using the Galerkin method, followed by a detailed explanation of the Galerkin POD approach. In the development of the Galerkin POD approach, the method of reducing matrices and vectors is shown, and the treatment of Neumann and Dirichlet boundary values is explained. The Galerkin POD method is applied to two case studies. The first case study is the Kogelberg site in the Table Mountain Group near Cape Town in South Africa. The response of the site is modelled at one well over the period of 2 years, and is assumed to be governed by saturated flow, making it a linear problem. The site is modelled as a 3D transient, homogeneous site, using 15 layers and ≈ 20000 nodes, using the FEM implemented on the open-source software FreeFem++. The model takes the evapotranspiration of the fynbos vegetation at the site into consideration, allowing the calculation of annual recharge into the aquifer. The ROM is created from high-fidelity responses taken over time at different parameter points, and speed-up times of ≈ 500 are achieved, corresponding to speed-up times found in the literature for linear problems. The purpose of the saturated groundwater model is to demonstrate that a POD-based ROM can approximate the full model response over the entire parameter domain, highlighting the excellent interpolation qualities and speed-up times of the Galerkin POD approach, when applied to linear problems. A second case study is undertaken on a synthetic unsaturated case study, using the Richards’ equation to describe the water movement. The model is a 2D transient model consisting of ≈ 5000 nodes, and is also created using FreeFem++. The Galerkin POD method is applied to the case study in order to replicate the high-fidelity response. This did not yield in any speed-up times, since the full matrices of non-linear problems need to be recreated at each time step in the transient simulation. Subsequently, a method is proposed in this thesis that adapts the Galerkin POD method by linearising the non-linear terms in the Richards’ equation, in a method named the Linearised Galerkin POD (LGP) method. This method is applied to the same 2D synthetic problem, and results in speed-up times in the range of 10 to 100. 
The adaptation, notably, does not use any interpolation techniques, favouring a code intrusive, but physics-based, approach. While the use of an intrusively linearised POD approach adds to the complexity of the ROM, it avoids the problem of finding kernel parameters typically present in interpolative POD approaches. Furthermore, the interpolation and possible extrapolation properties inherent to intrusive POD-based ROM’s are explored. The good extrapolation properties, within predetermined bounds, of intrusive POD’s allows for the development of an optimisation approach requiring a very small Design of Experiments (DOE) sets (e.g. with improved Latin Hypercube sampling). The optimisation method creates locally accurate models within the parameter space using Support Vector Classification (SVC). The region inside of the parameter space in which the optimiser is allowed to move is called the confidence region. This confidence region is chosen as the parameter region in which the ROM meets certain accuracy conditions. With the proposed optimisation technique, advantage is taken of the good extrapolation characteristics of the intrusive POD-based ROM’s. A further advantage of this optimisation approach is that the ROM is built on a set of high-fidelity responses obtained prior to the inverse modelling study, avoiding the need for full simulations during the inverse modelling study. In the methodologies and case studies presented in this thesis, initially infeasible inverse modelling problems are made possible by the use of the POD-based ROM’s. The speed up times and extrapolation properties of POD-based ROM’s are also shown to be favourable. In this research, the use of POD as a groundwater management tool for saturated and unsaturated sites is evident, and allows for the quick evaluation of different scenarios that would otherwise not be possible. It is proposed that a form of POD be implemented in conventional groundwater software to significantly reduce the time required for inverse modelling studies, thereby allowing for more effective groundwater management.
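As a generic illustration of the Galerkin POD idea referred to throughout (a minimal numpy sketch under simplifying assumptions, namely a linear steady system and no special boundary-condition treatment, rather than the thesis implementation):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """snapshots: (n_dof, n_snap) matrix of high-fidelity solutions.
    Returns the POD basis Phi retaining the given fraction of snapshot 'energy'."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return U[:, :keep]

def galerkin_reduce(K, f, Phi):
    """Project K x = f onto the POD subspace: (Phi^T K Phi) a = Phi^T f,  x ~ Phi a."""
    Kr = Phi.T @ K @ Phi
    fr = Phi.T @ f
    return Phi @ np.linalg.solve(Kr, fr)

# usage with a small random symmetric positive-definite stand-in system
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
K = A @ A.T + 200 * np.eye(200)
snaps = np.column_stack([np.linalg.solve(K, rng.normal(size=200)) for _ in range(20)])
Phi = pod_basis(snaps)
x_rom = galerkin_reduce(K, rng.normal(size=200), Phi)   # approximate reduced-order solution
```

The speed-up reported for the saturated (linear) case comes from replacing the full n_dof system with the much smaller reduced system; for the non-linear Richards' equation the thesis additionally linearises the non-linear terms (the LGP method) so that the reduced matrices need not be rebuilt at every step.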
AFRIKAANS SUMMARY: The Richards equation describes the movement of a fluid through an unsaturated porous medium, and is characterised as a non-linear partial differential equation. The equation is subject to a number of parameters and is typically computationally expensive to solve. To determine the parameters in the Richards equation, parameter optimisation studies often have to be undertaken. In these studies, the parameters of a numerical model are changed until the numerical results match the measured results. Parameter optimisation studies require on the order of hundreds of simulations, which means that studies using the Richards equation are generally common only for 1D problems in the literature. As a solution to the computational cost incurred in parameter optimisation studies, the use of Proper Orthogonal Decomposition (POD) as a Reduced Order Model (ROM) is proposed in this thesis to speed up individual simulations in the optimisation context. The Galerkin POD approach was initially investigated and applied to the Richards equation, and the technique was then tested on several case studies. The Galerkin POD method is demonstrated on a hypothetical case study in which water movement is described by the Richards equation. Because of the non-linear nature of the Richards equation, the Galerkin POD method did not lead to a significant reduction in the computational cost per simulation. A further case study is performed on a real, large-scale site in the Table Mountain Group near Cape Town, South Africa, where the groundwater movement is considered saturated. Owing to the linear nature of the equation describing the movement of saturated water, remarkable speed-ups of > 500 were observed for the ROM in this case study. Thereafter, the Galerkin POD method was adapted by linearising the non-linear terms in the Richards equation. The technique is called the Linearised Galerkin POD (LGP) technique. The adaptation showed good results, with speed-ups greater than 50 times when the ROM is compared with the original simulation. Although the technique makes use of further linearisation, the method is still a physics-based approach and does not make use of interpolation techniques. The use of a physics-based POD approach adds to the complexity of a full numerical model, but the complexity is justified by the remarkable speed-ups in parameter optimisation studies. Furthermore, the interpolation properties, and possible extrapolation properties, inherent to physics-based POD ROM techniques are investigated in this research. A technique is proposed in which these inherent properties are used to create locally accurate models within the parameter space. The proposed technique makes use of support vector classification. The bounds of the locally accurate model are called a confidence region. This confidence region is chosen as the parameter region in which the ROM meets preselected accuracy requirements. The optimisation approach also avoids performing full simulations during the parameter optimisation by using a ROM built on the results of a set of full simulations carried out before the parameter optimisation study. The full simulations are typically performed at parameter points chosen through a process called design of experiments.
Further hypothetical groundwater case studies were undertaken to test the LGP and the locally accurate techniques. In these case studies the groundwater movement is again described by the Richards equation. In the case studies, complex and time-consuming modelling problems are replaced by a POD-based ROM in which individual simulations are remarkably faster. The speed and the interpolation/extrapolation properties prove to be very favourable. In this research the use of reduced order models as a groundwater management tool is clearly demonstrated, providing for the quick evaluation of different modelling scenarios that would otherwise not be possible. It is proposed that a form of POD be implemented in conventional groundwater software to make considerable speed-ups in parameter studies possible, which will lead to more effective groundwater management.
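The snapshot-projection idea at the heart of the Galerkin POD approach described above can be made concrete with a short sketch. The code below is not taken from the thesis or from FreeFem++; it is a minimal, generic reduction of a linear system K u = f in NumPy, with an invented stand-in system, hypothetical snapshot load cases and an assumed basis size r.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis from a snapshot matrix (columns = high-fidelity solutions)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s            # first r left singular vectors, all singular values

def galerkin_rom(K, f, Phi):
    """Galerkin projection of the linear system K u = f onto the basis Phi."""
    Kr = Phi.T @ K @ Phi          # reduced (r x r) system matrix
    fr = Phi.T @ f                # reduced right-hand side
    a = np.linalg.solve(Kr, fr)   # reduced coordinates
    return Phi @ a                # reconstructed full-order approximation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    # Invented symmetric positive definite matrix standing in for a FEM system.
    A = rng.standard_normal((n, n)) / np.sqrt(n)
    K = A @ A.T + n * np.eye(n)
    t = np.linspace(0.0, 1.0, n)
    # Snapshots: full solves for a few hypothetical parameterised load cases.
    S = np.column_stack(
        [np.linalg.solve(K, np.sin((k + 1) * np.pi * t)) for k in range(10)])
    Phi, _ = pod_basis(S, r=5)
    f_new = np.cos(2.5 * np.pi * t)          # an unseen load case
    u_rom = galerkin_rom(K, f_new, Phi)
    u_ref = np.linalg.solve(K, f_new)
    print("relative error vs full solve:",
          np.linalg.norm(u_rom - u_ref) / np.linalg.norm(u_ref))
```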
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Brayshaw, Damien. "Use of numerical optimisation to determine on-limit handling behaviour of race cars". Thesis, Cranfield University, 2004. http://hdl.handle.net/1826/4506.

Full text of the source
Abstract:
The aim of this research is to use numerical optimisation to investigate the on-limit behaviour of an open-wheel downforce-type race car using the best compromise of modelling accuracy and computational effort. The current state of lap simulation methods is identified, and the GG speed diagram is described. Constrained optimisation, which is a form of optimal control, is used to develop the methods described in this thesis. A seven degree of freedom vehicle model validated by other researchers is used for method validation purposes, and is extended, where possible, to make the modelling of vehicle components more physically significant, without adversely affecting the computational time. This research proposes a quasi-steady-state approach that produces a GG speed diagram and circuit simulation tool capable of optimising vehicle parameters and subsystems in addition to the prevailing control vector of steer and throttle response. The use of numerical optimisation to optimise the rear differential hydraulic pressure and the roll stiffness distribution to maximise vehicle performance is demonstrated. The optimisation of the rear differential hydraulic pressure showed a very small improvement in vehicle performance in combined high-speed braking and cornering, but highlighted the ability of the differential to affect the cornering behaviour of the vehicle. The roll stiffness distribution optimisation showed that a significant improvement in the lateral acceleration capability of the vehicle could be achieved at all vehicle speeds between 20 and 80 m/s, especially in combined braking and cornering. In addition, a parameter sensitivity study around a realistic Formula One vehicle setup was conducted, looking at the sensitivity of vehicle performance to vehicle mass, yaw inertia, tyres, centre of gravity location and engine torque. An investigation into the importance of the path finding calculation is also reported.
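The GG speed diagram discussed above is built by repeatedly solving a constrained optimisation problem: at a given speed, find the largest lateral acceleration compatible with a demanded longitudinal acceleration. The sketch below illustrates only that idea on a hypothetical point-mass car with an assumed friction ellipse, drag term and power limit; it is not the seven degree of freedom model or the optimal control formulation used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical point-mass parameters -- not the 7-DOF model used in the thesis.
MU, G = 1.8, 9.81            # tyre-road friction coefficient, gravity [m/s^2]
MASS, P_MAX = 700.0, 550e3   # vehicle mass [kg], engine power limit [W]
RHO_CDA = 1.3                # lumped aerodynamic drag term rho*Cd*A [kg/m]

def gg_boundary_point(v, ax_demand):
    """Largest lateral acceleration ay [m/s^2] at speed v for a demanded ax.

    Maximises ay subject to a friction-ellipse constraint; returns 0.0 when the
    demanded longitudinal acceleration is not achievable at this speed.
    """
    drag = 0.5 * RHO_CDA * v ** 2 / MASS              # drag deceleration [m/s^2]
    ax_tyre = ax_demand + drag                        # tyres must also overcome drag
    if ax_demand > 0 and MASS * ax_tyre * v > P_MAX:  # engine power limit
        return 0.0
    if abs(ax_tyre) >= MU * G:                        # outside the friction ellipse
        return 0.0
    ellipse = {"type": "ineq",
               "fun": lambda x: (MU * G) ** 2 - ax_tyre ** 2 - x[0] ** 2}
    res = minimize(lambda x: -x[0], x0=[1.0], method="SLSQP",
                   bounds=[(0.0, MU * G)], constraints=[ellipse])
    return float(res.x[0])

if __name__ == "__main__":
    v = 60.0                                          # vehicle speed [m/s]
    for ax in (-12.0, -6.0, 0.0, 3.0, 6.0):           # braking ... accelerating
        print(f"ax = {ax:6.1f} m/s^2  ->  ay_max = {gg_boundary_point(v, ax):5.2f} m/s^2")
```

Sweeping the demanded longitudinal acceleration and the speed in this way traces out one slice of a GG speed surface.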
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Fourie, Jecois. "Numerical optimisation of the gating system of a titanium alloy inlet valve casting". Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1290.

Full text of the source
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree Master of Technology: Mechanical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology 2014
The research described in this dissertation investigates the feasibility of casting inlet valves for an internal combustion engine using Ti6Al4V alloy. The engine valves operate in an extreme environment under high thermal cycling, which requires a material that can withstand such exposure. Ti6Al4V is the most common titanium alloy, with high-temperature creep and fatigue resistant behaviour; however, the alloy also presents many difficulties with respect to processing, especially when the material is cast. It is therefore important to gain a thorough understanding of the pouring and solidification characteristics of this material. The main focus of this work was to investigate and optimise feeding and geometrical parameters to produce valves that are free from defects, especially porosity. An in-depth analysis of the parameters that influenced the casting quality was performed, and it was found that casting orientation, inlet feeder geometry, and initial and boundary conditions all played a vital role in the final results. These parameters were individually investigated by performing detailed numerical simulations using leading simulation software for each of these cases. For each case, a minimum of ten simulations was performed to accurately determine the effect of the alteration on casting soundness and quality. Furthermore, the relationships (if any) were observed and used in subsequent optimised simulations of an entire investment casting tree. The change of geometric orientation and of inlet feeder diameter and angle showed distinct relationships with the occurrence of porosity. On the other hand, alteration of the pouring parameters, such as temperature and time, had a negligible effect on the occurrence or position of porosity in the valve. It was found that investigating individual parameters on simple geometry and then utilising these best-fit results in complex geometry yielded beneficial results that would otherwise not be attainable.
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Mahfoudhi, Marouen. "Numerical optimisation of electron beam physical vapor deposition coatings for arbitrarily shaped surfaces". Thesis, Cape Peninsula University of Technology, 2015. http://hdl.handle.net/20.500.11838/2225.

Full text of the source
Abstract:
Thesis (MTech (Mechanical Engineering))--Cape Peninsula University of Technology.
For the last few decades, methods to improve engine efficiency and reduce the fuel consumption of jet engines have received increased attention. One of the solutions is to increase the operating temperature in order to increase the exhaust gas temperature, resulting in increased engine power. However, this approach can be damaging for some engine parts such as turbine blades, which are required to operate in a very hostile environment (at ≈ 90% of their melting point temperature). Thus, an additional treatment must be carried out to protect these parts from corrosion, oxidation and erosion, as well as to maintain the substrate’s mechanical properties, which can be altered by the high temperatures to which these parts are exposed. Coating, as the best-known protection method, has been used for the last few decades to protect aircraft engine parts. According to Wolfe and co-workers [1], 75% of all engine components are now coated. The most promising studies show that the thermal barrier coating (TBC) is the coating system best adapted to these high-temperature applications. A TBC is defined as a fine layer of material (generally ceramic or metallic material, or both) deposited directly on the surface of the part in order to create a separation between the substrate and the environment and reduce the effect of the thermal attack. However, the application of TBCs on the surfaces of components presents a challenge in terms of the consistency of the thickness of the layer. This is due to the nature of the processes used to apply these coatings. It has been found that variations in the coating thickness can affect the thermodynamic performance of turbine blades as well as lead to premature damage due to higher thermal gradients in certain sections of the blade. Thus, it is necessary to optimise the thickness distribution of the coating.
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Wise, John Nathaniel. "Inverse modelling and optimisation in numerical groundwater flow models using proper orthogonal decomposition". Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0773/document.

Full text of the source
Abstract:
Numerical simulators are commonly used for predicting and optimising the exploitation of aquifers and for determining physical parameters (e.g. permeability) by inverse calculation. The Richards equation describes the flow of a fluid through an unsaturated porous medium. It is a non-linear partial differential equation whose numerical solution in large 3D settings is very costly, in particular for inverse calculations. In this work, a Reduced Order Modelling (ROM) method based on Proper Orthogonal Decomposition (POD) is proposed in order to reduce the computation time significantly while controlling accuracy. One strategy of this method is to replace the full model locally, within the optimisation algorithm, by a POD-type reduced model. The Petrov-Galerkin POD method is first applied to the Richards equation and tested on different cases, and is then adapted by linearising the non-linear terms. This adaptation does not call on any interpolation technique and reduces the computation time by a factor of 10 to 100. Although it adds to the complexity of the ROM, this method avoids having to adjust kernel parameters, as is the case in interpolative POD methods. The interpolation and extrapolation properties inherent to intrusive methods are then explored. Interesting extrapolation qualities make it possible to develop an optimisation method requiring only small Designs of Experiments (DOE). The optimisation method recreates locally accurate models over the parameter space, using non-linear support vector classification to delimit the zone in which the model is sufficiently accurate, called the confidence region. The methods are applied to an academic unsaturated test case governed by the Richards equation, as well as to an aquifer located in the Table Mountain Group near Cape Town in South Africa.
The Richards equation describes the movement of an unsaturated fluid through porous media, and is characterised as a non-linear partial differential equation. The equation is subject to a number of parameters and is typically computationally expensive to solve. To determine the parameters in the Richards equation, inverse modelling studies often need to be undertaken. As a solution to overcome the computational expense incurred in inverse modelling, the use of Proper Orthogonal Decomposition (POD) as a Reduced Order Modelling (ROM) method is proposed in this thesis to speed up individual simulations. The Petrov-Galerkin POD approach is initially applied to the Richards equation and tested on different case studies. However, due to the non-linear nature of the Richards equation the method does not result in significant speed-ups. Subsequently, the Petrov-Galerkin method is adapted by linearising the non-linear terms in the equation, resulting in speed-up factors in the range of [10, 100]. The adaptation, notably, does not use any interpolation techniques, favouring an intrusive, but physics-based, approach. While the use of intrusive POD approaches adds to the complexity of the ROM, it avoids the problem of finding kernel parameters typically present in interpolative POD approaches. Furthermore, the interpolation and possible extrapolation properties inherent to intrusive POD-ROMs are explored. The good extrapolation properties, within predetermined bounds, of intrusive PODs allow for the development of an optimisation approach requiring a very small Design of Experiments (DOE). The optimisation method creates locally accurate models within the parameter space using Support Vector Classification. The limits of the locally accurate model are called the confidence region. The methods are demonstrated on a hypothetical unsaturated case study requiring the Richards equation, and on a true case study in the Table Mountain Group near Cape Town, South Africa.
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Rambalee, Prevlen. "Identification of desired operational spaces via numerical methods". Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/25314.

Full text of the source
Abstract:
Plant efficiency and profitability are becoming increasingly important, and operating at the most optimal point is a necessity. The definition of proper operational bounds on output variables such as product quality, production rates, etc., is critical for plant optimisation. The use of operational bounds that do not lie within the output operational space of the plant can result in the control system attempting to operate the plant in a non-attainable region. Even when the operational bounds lie within the bounds of the output operational space, if that space is non-convex the control system can still attempt to operate the plant in a non-attainable region. This results in infeasible optimisation. A numerical intersection algorithm has been developed that identifies the feasible region of operation, known as the desired operational space. This is accomplished by finding the intersection of the required operational space and the achievable output operational space. The algorithm was simulated and evaluated on a case study under various scenarios. These scenarios included specifying operational bounds that lie only partially within the bounds of the achievable operational space, and also specifying operational bounds that lie within the bounds of an operational space which is non-convex. The results yielded a desired operational space with bounds that were guaranteed to lie within an attainable region of the output operational space. The desired operational space bounds were also simplified into a rectangle with high and low limits that can be readily used in control systems.
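The intersection idea described above can be illustrated with a deliberately simple sketch: sample the output space, keep the points that satisfy both the achievable and the required regions, and collapse the result to high/low limits. The regions below are invented toy sets, not the case study plant, and brute-force grid sampling stands in for the numerical intersection algorithm developed in the dissertation.

```python
import numpy as np

def achievable(y):
    """Hypothetical achievable output region (non-convex): part of an annulus."""
    r = np.hypot(y[..., 0] - 5.0, y[..., 1] - 5.0)
    return (r > 1.0) & (r < 4.0) & (y[..., 1] > 3.0)

def required(y):
    """Operator-specified bounds on the two outputs (a simple box)."""
    return ((y[..., 0] > 4.0) & (y[..., 0] < 9.0)
            & (y[..., 1] > 4.0) & (y[..., 1] < 8.0))

# Sample the output space on a grid and keep the points feasible for both sets:
# this sampled set approximates the 'desired operational space'.
g = np.linspace(0.0, 10.0, 401)
Y = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)
desired = achievable(Y) & required(Y)
pts = Y[desired]

# Simplify to high/low limits usable by a control system. Note that the bounding
# box of a non-convex region can still contain infeasible points, so a practical
# implementation would verify or shrink these limits further.
lo, hi = pts.min(axis=0), pts.max(axis=0)
print("desired operational space limits:")
print("  y1 in [%.2f, %.2f]" % (lo[0], hi[0]))
print("  y2 in [%.2f, %.2f]" % (lo[1], hi[1]))
```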
Dissertation (MEng)--University of Pretoria, 2012.
Chemical Engineering
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Van der Merwe, Helena. "Development of a numerical tool for the optimisation of vascular prosthesis towards physiological compliance". Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/3479.

Full text of the source
Abstract:
Includes bibliographical references (leaves 140-147).
It has been proposed that if a vascular prosthesis is to more closely approximate the mechanical behaviour of a native vessel, it should similarly feature a multi-component structure. One of the components could be a metal support structure, similar to an endovascular stent. The objective of the project was to develop a numerical tool, using the Finite Element Method (FEM), to aid in the development and optimisation of such a metallic support structure. This tool was used to simulate the behaviour of different designs under simulated in vivo conditions. The numerical results of the predicted mechanical behaviour are then analysed.
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Leusink, Debbie. "Advanced numerical tools for aerodynamic optimization of helicopter rotor blades". Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0010.

Full text of the source
Abstract:
The aerodynamic design of the main rotor blades of a helicopter must simultaneously take into account several objectives relating to hover and forward-flight criteria. This thesis aims to develop an automated optimisation loop combining advanced optimisation algorithms and simulation tools. Two simulation tools are employed for the prediction of rotor performance: the flight mechanics code HOST and the Computational Fluid Dynamics (CFD) code elsA. An analysis of these tools is carried out on well-documented test cases in order to assess their ability to predict trends in rotor performance as a function of blade geometry. The influence of the numerical parameters is also characterised. In addition, an optimisation strategy is developed that allows several objectives and complex constraints to be taken into account, as well as the determination of global optima for this multimodal problem. Based on these criteria, a genetic algorithm (GA) is selected. In order to reduce the number of evaluations required, a multi-fidelity optimisation strategy is proposed: a preliminary optimisation using the GA and HOST is used to reduce the parameter space by selecting the high-performance zone. A response surface is then built from high-fidelity computations of the high-performance blades identified in the preliminary step. The optimisation is finally carried out on this high-fidelity response surface. The proposed approach results in a significant increase in rotor performance, while respecting the industrial criterion on the number of expensive computations such as CFD. The proposed approach proves to be an efficient tool for the design of helicopter main rotor blades.
The aerodynamic design of helicopter rotor blades requires taking into account multiple objectives simultaneously, to provide a compromise solution for the conflicting requirements associated with hover and forward flight conditions. The present work aims at developing an automated optimization based on the combination of advanced optimization algorithms and simulation tools. As a preliminary step, candidate simulation methods and optimization algorithms are assessed in detail. Two simulation methods are employed for the computation of rotor performance: the in-house Helicopter Overall Simulation Tool (HOST), based on the blade element method, and ONERA’s Computational Fluid Dynamics (CFD) code elsA. An in-detail analysis of both simulation tools for well-documented test cases is carried out, with focus on their capability of predicting trends of the global rotor performance as a function of blade geometry. The impact of computation settings is also characterized. Then, an optimization strategy is developed, allowing the incorporation of multiple objectives and complex constraints, and the detection of global optima for multi-modal problems. Based on these criteria, a genetic algorithm (GA) is selected. To reduce the number of simulations required to find optimal solutions, a Multi-Fidelity Optimization (MFO) strategy is proposed: a preliminary low-fidelity GA optimization stage based on HOST simulations is used to reduce the design space by selecting a high-performance subspace. Then, a CFD-based surrogate model is constructed on the reduced design space by using a sample of high-performance blades from the low-fidelity step. The final optimization step is run on the high-fidelity surrogate. The proposed MFO approach results in significant rotor performance improvements while using a far lower number of costly CFD evaluations of the objective functions with respect to a full GA optimization. The proposed approach is shown to represent an efficient design tool for industrial helicopter rotor blades.
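The multi-fidelity workflow summarised above (a cheap global search to shrink the design space, a small design of experiments of expensive runs, a surrogate built on those runs, and a final optimisation of the surrogate) can be sketched generically. The code below does not use HOST or elsA; two hypothetical analytic functions stand in for the low- and high-fidelity rotor analyses, SciPy's differential evolution replaces the genetic algorithm, and an RBF interpolant plays the role of the response surface.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution
from scipy.stats import qmc

# Stand-ins for the two fidelity levels (assumptions, not rotor analyses).
def lofi(x):   # cheap, slightly biased model
    x = np.atleast_2d(x)
    return np.sum((x - 0.4) ** 2, axis=-1) + 0.3 * np.sin(5.0 * x[..., 0])

def hifi(x):   # expensive, 'true' model
    x = np.atleast_2d(x)
    return np.sum((x - 0.5) ** 2, axis=-1) + 0.1 * np.sin(5.0 * x[..., 0])

dim = 3
bounds = [(0.0, 1.0)] * dim

# Step 1: low-fidelity global search locates a high-performance subspace.
lo_res = differential_evolution(lambda x: float(lofi(x)[0]), bounds, seed=1)
centre = lo_res.x
reduced = [(max(c - 0.2, 0.0), min(c + 0.2, 1.0)) for c in centre]

# Step 2: small Latin hypercube DOE of 'expensive' evaluations in the reduced space.
doe = qmc.LatinHypercube(d=dim, seed=2).random(n=20)
X = qmc.scale(doe, [b[0] for b in reduced], [b[1] for b in reduced])
y = hifi(X)

# Step 3: RBF surrogate of the high-fidelity model, optimised in place of it.
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
sur_res = differential_evolution(
    lambda x: float(surrogate(np.atleast_2d(x))[0]), reduced, seed=3)

print("low-fidelity optimum :", np.round(centre, 3))
print("surrogate optimum    :", np.round(sur_res.x, 3),
      "  true value:", float(hifi(sur_res.x)[0]))
```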
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Stringer, Robert. "Numerical investigation of cross-flow tidal turbine hydrodynamics". Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760981.

Full text of the source
Abstract:
The challenge of tackling global climate change and our increasing reliance on power mean that new and diverse renewable energy generation technologies are a necessity for the future. From a number of technologies reviewed at the outset, the cross-flow tidal turbine was chosen as the focus of the research. The numerical investigation begins by choosing to model flow around a circular cylinder as a challenging benchmarking and evaluation case to compare two potential solvers for the ongoing research, ANSYS CFX and OpenFOAM. A number of meshing strategies and solver limitations are extracted, forming a detailed guide on the topic of cylinder lift, drag and Strouhal frequency prediction in its own right. An introduction to cross-flow turbines follows, setting out turbine performance coefficients and a strategy to develop a robust numerical modelling environment with which to capture and evaluate hydrodynamic phenomena. The validation of a numerical model is undertaken by comparison with an experimentally tested lab-scale turbine. The resultant numerical model is used to explore turbine performance with varying Reynolds number, concluding with a recommended minimum value for development purposes of Re = 350 × 10³ to avoid scalability errors. Based on this limit, a large-scale numerical simulation of the turbine is conducted and evaluated in detail; in particular, a local flow sampling method is proposed and presented. The method captures flow conditions ahead of the turbine blade at all positions of motion, allowing local velocities and angles of attack to be interrogated. The sampled flow conditions are used in the final chapter to construct a novel blade pitching strategy. The result is a highly effective optimisation method which increases the peak turbine power coefficient by 20% for only two further case iterations of the numerical solution.
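The local flow sampling described above reduces, in its simplest form, to evaluating a velocity triangle at each azimuthal position of the blade. The sketch below is not the CFD-based sampling method of the thesis; it assumes an idealised uniform inflow, a hypothetical constant induction factor and simple cross-flow turbine kinematics, purely to show how a relative velocity and angle of attack can be extracted around one revolution.

```python
import numpy as np

def blade_flow_conditions(tsr, u_inf=1.0, induction=0.2, n_theta=12):
    """Relative speed and geometric angle of attack around one revolution of a
    cross-flow turbine blade, from simple velocity triangles.

    tsr       : tip-speed ratio, omega * R / u_inf
    induction : assumed uniform induction factor (placeholder value)
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)  # azimuth angle
    u = u_inf * (1.0 - induction)             # local free stream seen by the rotor
    # Components of the relative flow in the blade frame:
    v_t = tsr * u_inf + u * np.cos(theta)     # chordwise (includes blade motion)
    v_n = u * np.sin(theta)                   # normal to the chord
    w = np.hypot(v_t, v_n)                    # relative speed
    alpha = np.degrees(np.arctan2(v_n, v_t))  # geometric angle of attack
    return theta, w, alpha

if __name__ == "__main__":
    for th, w, a in zip(*blade_flow_conditions(tsr=2.5)):
        print(f"theta = {np.degrees(th):6.1f} deg   |W| = {w:5.2f}   alpha = {a:6.2f} deg")
```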
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Scotland, Ian. "Analysis of horizontal deformations to allow the optimisation of geogrid reinforced structures". Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/23323.

Full text of the source
Abstract:
Geogrid reinforced structures have been successfully used for over 25 years. However, their design procedures have remained largely focused on ultimate failure mechanisms, originally developed for steel reinforcements. These are widely considered over-conservative in determining realistic reinforcement and lateral earth stresses. The poor understanding of deformation performance led many design codes to restrict acceptable soils to selected sand and gravel fills, where deformation is not as concerning. Within UK construction there is a drive to reduce wastage, improve efficiency and reduce associated greenhouse gas emissions. For geogrid reinforced structures this could mean increasing reinforcement spacing and reusing weaker locally sourced soils. Both of these strategies increase deformation, raising concern about the lack of understanding and reliable guidance. As a result they fail to fulfil their efficiency potential. This Engineering Doctorate improved the understanding of horizontal deformation by analysing performance using laboratory testing, laser scanning of industry structures and numerical modelling. Full-scale models were used to demonstrate a reduction in deformation with decreasing reinforcement spacing. Their results were combined with primary and secondary case studies to create a diverse database. This was used to validate a finite element model, differentiating between two often used construction methods. Its systematic analysis was extended to consider the deformation consequences of using low shear strength granular fills. The observations offered are intended to reduce uncertainty and mitigate excessive deformations, which facilitates the further optimisation of designs.
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Raithatha, Ankor Mahendra. "Incremental sheet forming : modelling and path optimisation". Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:89b0ac1e-cab4-4d80-b352-4f48566c7668.

Full text of the source
Abstract:
Incremental sheet forming (ISF) is a novel metal shaping technology that is economically viable for low-volume manufacturing, customisation and rapid prototyping. It uses a small tool moved over the sheet by a computer numerically controlled sequence, and the path taken by this tool defines the product geometry. Little is currently known about how to design the tool-path to minimise geometric errors in the formed part. The work here addresses this problem by developing a model-based tool-path optimisation scheme for ISF. The key issue is how to generate an efficient model for ISF to use within a path optimisation routine, since current simulation methods are too slow. A proportion of this thesis is dedicated to evaluating the applicability of the rigid-plastic assumption for this purpose. Three numerical models have been produced: one based on small strain deformation, one based on limit analysis theory and another that approximates the sheet to a network of rods. All three models are formulated and solved as second-order cone programs (SOCP), and the limit analysis based model is the first demonstration of an upper-bound shell finite element (FE) problem solved as an SOCP. The models are significantly faster than commercially available FE software, and simulations are compared with experimental and numerical data, from which it is shown that the rigid-plastic assumption is suitable for modelling deformation in ISF. The numerical models are still too slow for the path optimisation scheme, so a novel linearised model based on the concept of spatial impulse responses is also formulated and used in an optimal control based tool-path optimisation scheme for producing axisymmetric products with ISF. Off-line and on-line versions of the scheme are implemented on an ISF machine and it is shown that geometric errors are significantly reduced when using the proposed method. This work provides a new structured framework for tool-path design in ISF, and it is also a novel use of feedback to compensate for geometric errors in ISF.
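The models above are solved as second-order cone programs (SOCPs). The snippet below is not one of the thesis's sheet-forming models; it is a minimal generic SOCP in CVXPY, with invented data, that minimises a sum of per-group norms subject to linear equality constraints, included only to show what this problem class looks like in code.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n_groups = 6, 8                       # equality constraints / 2-component variable groups
A = rng.standard_normal((m, 2 * n_groups))
b = rng.standard_normal(m)

q = cp.Variable(2 * n_groups)            # grouped unknowns (two components per group)
t = cp.Variable(n_groups)                # auxiliary bound on each group's Euclidean norm

constraints = [A @ q == b]               # invented linear 'equilibrium' constraints
for i in range(n_groups):
    # Second-order cone constraint: ||q_i||_2 <= t_i for each 2-vector group.
    constraints.append(cp.norm(q[2 * i:2 * i + 2], 2) <= t[i])

prob = cp.Problem(cp.Minimize(cp.sum(t)), constraints)
prob.solve()                             # CVXPY hands this to an installed conic solver

print("status :", prob.status)
print("optimum:", prob.value)
```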
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Bird, Stefan Charles, and stbird@seatiger org. "Adaptive Techniques for Enhancing the Robustness and Performance of Speciated PSOs in Multimodal Environments". RMIT University. Computer Science and IT, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20081027.122244.

Full text of the source
Abstract:
This thesis proposes several new techniques to improve the performance of speciated particle swarms in multimodal environments. We investigate how these algorithms can become more robust and adaptive, easier to use and able to solve a wider variety of optimisation problems. We then develop a technique that uses regression to vastly improve an algorithm's convergence speed without requiring extra evaluations. Speciation techniques play an important role in particle swarms. They allow an algorithm to locate multiple optima, providing the user with a choice of solutions. Speciation also provides diversity preservation, which can be critical for dynamic optimisation. By increasing diversity and tracking multiple peaks simultaneously, speciated algorithms are better able to handle the changes inherent in dynamic environments. Speciation algorithms often require a user to specify a parameter that controls how species form. This is a major drawback since the knowledge may not be available a priori. If the parameter is incorrectly set, the algorithm's performance is likely to be highly degraded. We propose using a time-based measure to control the speciation, allowing the algorithm to define species far more adaptively, using the population's characteristics and behaviour to control membership. Two new techniques presented in this thesis, ANPSO and ESPSO, use time-based convergence measures to define species. These methods are shown to be robust while still providing highly competitive performance. Both algorithms effectively optimised all of our test functions without requiring any tuning. Speciated algorithms are ideally suited to optimising dynamic environments, however the complexity of these environments makes them far more difficult to design algorithms for. To increase an algorithm's performance it is necessary to determine in what ways it should be improved. While all performance metrics allow optimisation techniques to be compared, they cannot show how to improve an algorithm. Until now this has been done largely by trial and error. This is extremely inefficient, in the same way it is inefficient trying to improve a program's speed without profiling it first. This thesis proposes a new metric that exclusively measures convergence speed. We show that an algorithm can be profiled by correlating the performance as measured by multiple metrics. By combining these two techniques, we can obtain far better insight into how best to improve an algorithm. Using this information, we then propose a local convergence enhancement that greatly increases performance by actively estimating the location of an optimum. The enhancement uses regression to fit a surface to the peak, guiding the search by estimating the peak's true location. By incorporating this technique, the algorithm is able to use the information contained within the fitness landscape far more effectively. We show that by combining the regression with an existing speciated algorithm, we are able to vastly improve the algorithm's performance. This technique will greatly enhance the utility of PSO on problems where fitness evaluations are expensive, or that require fast reaction to change.
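The regression-based convergence enhancement described above amounts to fitting a smooth surface to points sampled near a peak and jumping to the estimated stationary point. The sketch below shows that idea with an ordinary least-squares quadratic fit on an invented two-dimensional test fitness; it is an illustration of the concept, not the ANPSO/ESPSO implementation from the thesis.

```python
import numpy as np

def estimate_peak(points, fitness):
    """Fit f(x, y) ~ quadratic surface by least squares; return its stationary point.

    points  : (n, 2) sample positions near a suspected peak
    fitness : (n,) fitness values at those positions
    """
    x, y = points[:, 0], points[:, 1]
    # Design matrix for f = a + b*x + c*y + d*x^2 + e*y^2 + g*x*y
    X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    a, b, c, d, e, g = np.linalg.lstsq(X, fitness, rcond=None)[0]
    # Stationary point: solve grad f = 0  ->  [[2d, g], [g, 2e]] [x, y]^T = [-b, -c]^T
    H = np.array([[2.0 * d, g], [g, 2.0 * e]])
    return np.linalg.solve(H, np.array([-b, -c]))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_peak = np.array([1.5, -0.7])
    pts = true_peak + 0.3 * rng.standard_normal((15, 2))   # swarm members near the peak
    fit = -np.sum((pts - true_peak) ** 2, axis=1)          # concave test fitness
    print("estimated peak:", np.round(estimate_peak(pts, fit), 3))
```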
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Song, Zixu. "Software engineering abstractions for a numerical linear algebra library". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/software-engineering-abstractions-for-a-numerical-linear-algebra-library(68304a9b-56db-404b-8ffb-4613f5102c1a).html.

Full text of the source
Abstract:
This thesis aims at building a numerical linear algebra library with appropriate software engineering abstractions. Three areas of knowledge, namely Numerical Linear Algebra (NLA), Software Engineering and Compiler Optimisation Techniques, are involved. Numerical simulation is widely used in a large number of distinct disciplines to help scientists understand and discover the world. The solutions to frequently occurring numerical problems have been implemented in subroutines, which were then grouped together to form libraries for ease of use. The design, implementation and maintenance of an NLA library require a great deal of work, which is why the other two topics, namely software engineering and compiler optimisation techniques, have emerged. Generally speaking, both of these try to divide the system into smaller and more controllable concerns, and allow the programmer to deal with fewer concerns at one time. Band matrix operation, as a new level of abstraction, is proposed for simplifying library implementation and enhancing extensibility for future functionality upgrades. Iteration Space Partitioning (ISP) is applied in order to make the performance of this generalised implementation for band matrices comparable to that of the specialised implementations for dense and triangular matrices. The optimisation of ISP can either be programmed using the pointcut-advice model of Aspect-Oriented Programming, or integrated as part of a compiler. This naturally leads to a comparison of these two different techniques for resolving one fundamental problem. The thesis shows that software engineering properties of a library, such as modularity and extensibility, can be improved by the use of the appropriate level of abstraction, while performance is either not sacrificed at all, or at least the loss of performance is limited. In other words, the perceived trade-off between the use of high-level abstraction and fast execution is made less significant than previously assumed.
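Iteration Space Partitioning splits a loop nest so that the regular interior of the iteration space runs without bounds checks, while the irregular boundary iterations are handled separately. The sketch below shows that idea on a band matrix-vector product in NumPy; the diagonal storage convention and the example matrix are assumptions for illustration, not code from the library described in the thesis.

```python
import numpy as np

def band_matvec_partitioned(diags, kl, ku, x):
    """y = A @ x for a banded A stored as NumPy-style diagonals.

    diags[d] must equal np.diag(A, d) for every offset d in [-kl, ku].
    The row loop is partitioned: interior rows (all diagonals present) run as
    unconditional vectorised updates; boundary rows keep the bounds checks.
    """
    n = x.size
    y = np.zeros(n)
    lo, hi = kl, n - ku                       # interior block of the iteration space
    # Partition 1: interior rows, no conditionals.
    for d, diag in diags.items():
        s = min(d, 0)
        y[lo:hi] += diag[lo + s:hi + s] * x[lo + d:hi + d]
    # Partition 2: ragged boundary rows, with explicit bounds checks.
    for i in list(range(lo)) + list(range(hi, n)):
        for d, diag in diags.items():
            j = i + d
            if 0 <= j < n:
                y[i] += diag[i + min(d, 0)] * x[j]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, kl, ku = 10, 2, 1
    A = np.triu(np.tril(rng.standard_normal((n, n)), ku), -kl)   # random band matrix
    diags = {d: np.diag(A, d) for d in range(-kl, ku + 1)}
    x = rng.standard_normal(n)
    assert np.allclose(band_matvec_partitioned(diags, kl, ku, x), A @ x)
    print("partitioned band mat-vec matches the dense product")
```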
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Rajaguru, Pushparajah. "Reduced order modelling and numerical optimisation approach to reliability analysis of microsystems and power modules". Thesis, University of Greenwich, 2014. http://gala.gre.ac.uk/13593/.

Full text of the source
Abstract:
The principal aim of this PhD program is the development of an optimisation and risk based methodology for reliability and robustness predictions of packaged electronic components. Reliability based design optimisation involves the integration of reduced order modelling, risk analysis and optimisation. The increasing cost of physical prototyping and extensive qualification testing for reliability assessment is making virtual qualification a very attractive alternative for the electronics industry. Given the availability of low cost processing technology and advanced numerical techniques such as finite element analysis, design engineers can now undertake detailed calculations of physical phenomena in electronic packages such as temperature, electromagnetics, and stress. Physics of failure analysis can also be performed using the results from these detailed calculations to predict modes of failure and the estimated lifetime of an electronic component. At present the majority of calculations performed using finite element techniques assume that the input parameters are single valued without any variation. Obviously this is not the case, as design variables (such as dimensions of the package, operating conditions, etc.) can have statistical distributions. The research undertaken in this PhD resulted in the development of software libraries and a toolset which can be used in parallel with finite element analysis to assess the impact of design variable variations on package reliability and robustness. This resulted in the development of the ROMARA software, which now contains a number of best-in-class reduced order modelling techniques, optimisation algorithms, and stochastic risk assessment procedures. The software has been developed using the C# language and demonstrated for a number of case studies. The case study detailed in this thesis relates to a power electronics IGBT structure and demonstrates the technology for predicting the reliability and robustness of a wirebond interconnect structure that is subjected to electro-thermo-mechanical loads. The design variables investigated in this study included the wire-loop ratio, the current in the wire, and the thickness of the silicon die, each represented as an input variable with a normal distribution. In terms of reliability, the damage variable under investigation was the plastic strain at the wire/aluminium pad interface. Using ANSYS for predicting the physics in the package, we have demonstrated the ability of the ROMARA code to optimise the design of the wirebond in terms of minimising the induced damage. Other real cases have been investigated using the developed ROMARA software; these are reported in the public domain and briefly detailed in this thesis.
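The stochastic part of such a methodology can be sketched as a Monte Carlo propagation of normally distributed design variables through a cheap surrogate of the finite element response. Everything in the snippet below (the polynomial stand-in for the reduced order model, the distributions and the strain limit) is an invented placeholder, not the ROMARA wirebond case study.

```python
import numpy as np

rng = np.random.default_rng(42)

def damage_surrogate(loop_ratio, current, die_thickness):
    """Hypothetical reduced-order response: plastic strain at the bond interface.
    A stand-in polynomial, NOT a fitted model of the IGBT case study."""
    return (0.002 + 0.004 * (current / 10.0) ** 2
            + 0.003 * (1.2 - loop_ratio) ** 2
            + 0.001 * (0.3 / die_thickness))

# Design variables as normal distributions (means/standard deviations are assumptions).
n = 200_000
loop_ratio = rng.normal(1.20, 0.05, n)
current    = rng.normal(10.0, 0.80, n)
die_thick  = rng.normal(0.30, 0.02, n)

strain = damage_surrogate(loop_ratio, current, die_thick)
limit = 0.0085                                  # assumed allowable plastic strain
pf = np.mean(strain > limit)                    # Monte Carlo failure probability

print(f"mean strain = {strain.mean():.4f}, std = {strain.std():.4f}")
print(f"P(strain > {limit}) ~= {pf:.4%}  (+/- {1.96*np.sqrt(pf*(1-pf)/n):.4%})")
```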
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Cooper, Jonathan Paul. "Automatic validation and optimisation of biological models". Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:24b96d62-b47c-458d-9dff-79b27dbdc9f2.

Full text of the source
Abstract:
Simulating the human heart is a challenging problem, with simulations being very time consuming, to the extent that some can take days to compute even on high performance computing resources. There is considerable interest in computational optimisation techniques, with a view to making whole-heart simulations tractable. Reliability of heart model simulations is also of great concern, particularly considering clinical applications. Simulation software should be easily testable and maintainable, which is often not the case with extensively hand-optimised software. It is thus crucial to automate and verify any optimisations. CellML is an XML language designed for describing biological cell models from a mathematical modeller’s perspective, and is being developed at the University of Auckland. It gives us an abstract format for such models, and from a computer science perspective looks like a domain specific programming language. We are investigating the gains available from exploiting this viewpoint. We describe various static checks for CellML models, notably checking the dimensional consistency of mathematics, and investigate the possibilities of provably correct optimisations. In particular, we demonstrate that partial evaluation is a promising technique for this purpose, and that it combines well with a lookup table technique, commonly used in cardiac modelling, which we have automated. We have developed a formal operational semantics for CellML, which enables us to mathematically prove the partial evaluation of CellML correct, in that optimisation of models will not change the results of simulations. The use of lookup tables involves an approximation, thus introduces some error; we have analysed this using a posteriori techniques and shown how it may be managed. While the techniques could be applied more widely to biological models in general, this work focuses on cardiac models as an application area. We present experimental results demonstrating the effectiveness of our optimisations on a representative sample of cardiac cell models, in a variety of settings.
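The lookup table technique mentioned above precomputes expensive voltage-dependent expressions on a fine grid and replaces them at run time with linear interpolation, at the cost of a controllable approximation error. The sketch below uses a generic sigmoidal gating expression and arbitrary table bounds as placeholders; it is not taken from CellML or from any particular cardiac model.

```python
import numpy as np

class LookupTable:
    """Tabulate a 1-D function of membrane voltage and interpolate linearly."""

    def __init__(self, fn, v_min=-100.0, v_max=50.0, step=0.01):
        self.v_min, self.step = v_min, step
        self.values = fn(np.arange(v_min, v_max + step, step))

    def __call__(self, v):
        pos = (v - self.v_min) / self.step   # fractional index into the table
        i = int(pos)
        frac = pos - i
        return (1.0 - frac) * self.values[i] + frac * self.values[i + 1]

# Placeholder voltage-dependent rate with an exponential (the costly part).
gate_rate = lambda v: 1.0 / (1.0 + np.exp(-(v + 20.0) / 6.0))

table = LookupTable(gate_rate)
v = -35.271
exact, approx = gate_rate(v), table(v)
print(f"exact = {exact:.6f}  table = {approx:.6f}  abs error = {abs(exact - approx):.2e}")
```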
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Favarel, Camille Benjamin. "Optimisation de générateurs thermoélectriques pour la production d’électricité". Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3010/document.

Full text of the source
Abstract:
One of the major concerns of energy research is the reduction of greenhouse gas production and of our ecological footprint. Thermoelectric generators contribute to an overall energy-efficiency approach by directly converting part of the thermal energy that passes through them into electrical energy. They are still little used, and few studies deal with their optimisation. This work explored strategies for integrating thermoelectric modules into assemblies defined by end users, using a methodology based on complete modelling of the systems from the heat flux to the electricity production. A numerical code coupling the equations of heat transfer, thermoelectricity and electricity was developed and makes it possible to observe the influence of several parameters on the electricity production (flow rate and temperature of the hot source, flow rate and temperature of the cold source, type of thermoelectric modules, location of the modules, etc.). The validation of this model required the construction and instrumentation of several experimental prototypes, the largest of which is a high-temperature air loop supplying a modular thermoelectric generator prototype. The design and construction of dedicated power converters with maximum power point tracking (MPPT) made it possible to test these prototypes at the optimal operating point. Finally, an optimisation method applied to the model gives the number of modules and their location for maximum electricity production. A tool for the sizing and optimisation of thermoelectric generators is now available. It first allowed us to study the feasibility of electricity production in isolated areas through a prototype thermoelectric wood-burning stove. We then analysed the feasibility in the automotive field at a specific operating point corresponding to the exhaust gases.
A major concern of research in the field of energy is the decrease in the production of greenhouse gas emissions and the reduction of our ecological footprint. Thermoelectric generators participate in a comprehensive approach to energy efficiency by directly converting part of the thermal energy that flows through them into electricity. This work explores strategies for integrating thermoelectric modules in sets defined by end users, using a methodology based on complete system modelling from heat flow to power generation. A numerical code coupling the equations of heat transfer and thermoelectricity was developed and used to observe the influence of several parameters on the production of electricity (flow and temperature of the hot source, flow and temperature of the cold source, type of thermoelectric modules, module location...). The validation of this model necessitated the construction and instrumentation of several experimental prototypes, the most important of which is a hot air loop supplying a prototype flexible thermoelectric generator. Dedicated power converters with maximum power point tracking (MPPT) were designed and built to test these prototypes at their optimal operating point. Finally, an optimisation method applied to the model delivers the number of modules and their location for maximum power production. A tool for the design and optimisation of thermoelectric generators is now available. It has allowed us to study the feasibility of integrated thermoelectric generation in a variety of systems, such as an automobile using exhaust gas or a specific cook stove for developing countries.
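Maximum power point tracking of a thermoelectric generator is often done with a perturb-and-observe loop. The sketch below shows that generic scheme against a simple Thévenin-style source (open-circuit voltage and constant internal resistance, for which the maximum power point sits at half the open-circuit voltage); the parameter values are illustrative and the code is not the converter developed in the thesis.

```python
# Perturb-and-observe MPPT against a Thevenin-style TEG model.
V_OC, R_INT = 8.0, 2.0          # illustrative open-circuit voltage [V], resistance [ohm]

def teg_power(v_load):
    """Electrical power delivered at a given converter input voltage."""
    i = (V_OC - v_load) / R_INT
    return max(i, 0.0) * v_load

def perturb_and_observe(v0=1.0, dv=0.1, steps=60):
    """Step the operating voltage; reverse direction whenever power drops."""
    v, p_prev, direction = v0, teg_power(v0), +1
    for _ in range(steps):
        v += direction * dv
        p = teg_power(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"tracked voltage ~ {v_mpp:.2f} V (ideal {V_OC / 2:.2f} V), power ~ {p_mpp:.2f} W")
```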
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Li, Yilun. "Numerical methodologies for topology optimization of electromagnetic devices". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS228.

Full text of the source
Abstract:
Topology optimisation is the conceptual design of a product. Compared with conventional design approaches, it can create a new topology that could not have been imagined in advance, especially for the design of a product without prior experience or knowledge. Indeed, the topology optimisation technique of searching for efficient topologies from scratch is becoming a serious asset for designers. Although it originated in structural optimisation, topology optimisation in electromagnetics has flourished over the last two decades. Nowadays, topology optimisation has become the paradigm of the predominant engineering techniques for providing a quantitative design method for modern engineering design. However, owing to its complex nature, the development of applicable methods and strategies for topology optimisation is still in progress. To deal with the typical problems and challenges encountered in an engineering optimisation process, and considering the existing methods in the literature, this thesis focuses on topology optimisation methods based on deterministic and stochastic algorithms. The main work and achievements can be summarised as follows. First, to address the premature convergence towards a local optimum of the existing ON/OFF method, a Tabu-ON/OFF method, an improved Quantum-inspired Evolutionary Algorithm (QEA) and an improved Genetic Algorithm (GA) are proposed in turn. The characteristics of each algorithm are elaborated and its performance is compared comprehensively. Second, to address the intermediate-density problem encountered in density-based methods and the difficulty of using the optimised topology directly for actual production, two topology optimisation methods are proposed, namely Solid Isotropic Material with Penalisation (SIMP)-Radial Basis Function (RBF) and Level Set Method (LSM)-Radial Basis Function (RBF). Both methods compute the sensitivity information of the objective function and use deterministic optimisers to guide the optimisation process. For problems with a large number of design variables, the computational cost of the proposed methods is considerably reduced compared with that of methods relying on stochastic algorithms. At the same time, owing to the introduction of the RBF data-interpolation smoothing technique, the optimised topology is better suited to actual production. Third, in order to reduce the excessive computational cost when a stochastic search algorithm is used in topology optimisation, a design-variable redistribution strategy is proposed. In the proposed strategy, the whole search process of a topology optimisation is divided into layered structures. The solution of the previous layer is set as the initial topology for the next optimisation layer, and only the elements adjacent to the boundary are chosen as design variables. Consequently, the number of design variables is reduced to some extent, and the computation time of the process is thereby shortened.
Finally, a multi-objective topology optimisation methodology based on a hybrid multi-objective optimisation algorithm combining the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Differential Evolution (DE) algorithm is proposed.
Topology optimization is the conceptual design of a product. Compared with conventional design approaches, it can create a novel topology, which could not be imagined beforehand, especially for the design of a product without prior experience or knowledge. Indeed, the topology optimization technique, with its ability to find efficient topologies starting from scratch, has become a serious asset for designers. Although it originated from structure optimization, topology optimization in the electromagnetic field has flourished in the past two decades. Nowadays, topology optimization has become the paradigm of the predominant engineering techniques to provide a quantitative design method for modern engineering design. However, due to its inherently complex nature, the development of applicable methods and strategies for topology optimization is still in progress. To address the typical problems and challenges encountered in an engineering optimization process, considering the existing methods in the literature, this thesis focuses on topology optimization methods based on deterministic and stochastic algorithms. The main work and achievements can be summarized as follows. Firstly, to solve the premature convergence to a local optimal point of the existing ON/OFF method, a Tabu-ON/OFF method, an improved Quantum-inspired Evolutionary Algorithm (QEA) and an improved Genetic Algorithm (GA) are proposed successively. The characteristics of each algorithm are elaborated, and its performance is compared comprehensively. Secondly, to solve the intermediate density problem encountered in density-based methods and the engineering infeasibility of the final optimized topology, two topology optimization methods, namely Solid Isotropic Material with Penalization-Radial Basis Function (SIMP-RBF) and Level Set Method-Radial Basis Function (LSM-RBF), are proposed. Both methods calculate the sensitivity information of the objective function, and use deterministic optimizers to guide the optimizing process. For problems with a large number of design variables, the computational cost of the proposed methods is greatly reduced compared with those of methods relying on stochastic algorithms. At the same time, due to the introduction of the RBF data interpolation smoothing technique, the optimized topology is more suitable for actual production. Thirdly, to reduce the excessive computing costs when a stochastic searching algorithm is used in topology optimization, a design variable redistribution strategy is proposed. In the proposed strategy, the whole searching process of a topology optimization is divided into layered structures. The solution of the previous layer is set as the initial topology for the next optimization layer, and only elements adjacent to the boundary are chosen as design variables. Consequently, the number of design variables is reduced to some extent, and the computation time is thereby shortened. Finally, a multi-objective topology optimization methodology based on a hybrid multi-objective optimization algorithm combining the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Differential Evolution (DE) algorithm is proposed. The comparison results on test functions indicate that the performance of the proposed hybrid algorithm is better than those of the traditional NSGAII and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), which guarantees the good global search ability of the proposed methodology and enables a designer to handle constraint conditions in a direct way.
To validate the proposed topology optimization methodologies, two case studies are optimized and analyzed.
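The design variable redistribution strategy described above relies on a simple selection step: after each layer, only the elements adjacent to the current ON/OFF material boundary become design variables for the next layer. The sketch below shows that selection on a hypothetical 2-D element grid; it is not the full layered optimisation or any of the thesis's electromagnetic case studies.

```python
import numpy as np

def boundary_design_variables(material):
    """Boolean mask of cells adjacent to the ON/OFF material boundary.

    material : 2-D array of 0/1 (OFF/ON) element states from the previous layer.
    A cell is a design variable for the next layer if any 4-neighbour differs
    from it, i.e. it sits on the current material interface.
    """
    m = material.astype(bool)
    padded = np.pad(m, 1, mode="edge")
    differs = np.zeros_like(m)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbour = padded[1 + di:padded.shape[0] - 1 + di,
                           1 + dj:padded.shape[1] - 1 + dj]
        differs |= neighbour != m
    return differs

if __name__ == "__main__":
    grid = np.zeros((12, 12), dtype=int)
    grid[3:9, 3:9] = 1                      # previous layer's optimised topology
    mask = boundary_design_variables(grid)
    print("total cells:", grid.size, " design variables for the next layer:", int(mask.sum()))
```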
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Williamson, Nicholas J. "Numerical modelling of heat and mass transfer and optimisation of a natural draft wet cooling tower". University of Sydney, 2007. http://hdl.handle.net/2123/4123.

Full text of the source
Abstract:
Doctor of Philosophy
The main contribution of this work is to answer several important questions relating to natural draft wet cooling tower (NDWCT) modelling, design and optimisation. Specifically, the work aims to conduct a detailed analysis of the heat and mass transfer processes in a NDWCT, to determine how significant the radial non-uniformity of heat and mass transfer across a NDWCT is, what the underlying causes of the non-uniformity are and how these influence tower performance. Secondly, the work aims to determine the consequences of this non-uniformity for the traditional one dimensional design methods, which neglect any two-dimensional air flow or heat transfer effects. Finally, in the context of radial non-uniformity of heat and mass transfer, this work aims to determine the optimal arrangement of fill depth and water distribution across a NDWCT and to quantify the improvement in tower performance using this non-uniform distribution. To this end, an axisymmetric numerical model of a NDWCT has been developed. A study was conducted testing the influence of key design and operating parameters. The results show that in most cases the air flow is quite uniform across the tower due to the significant flow restriction through the fill and spray zone regions. There can be considerable radial non-uniformity of heat transfer and water outlet temperature in spite of this. This is largely due to the cooling load in the rain zone and the radial air flow there. High radial non-uniformity of heat transfer can be expected when the cooling load in the rain zone is high. Such a situation can arise with small droplet sizes, low fill depths and high water flow rates. The results show that the effect of tower inlet height on radial non-uniformity is surprisingly small. Of the parameters considered, the water mass flow rate and the droplet size and droplet distribution in the rain zone have the most influence on radial non-uniformity of heat transfer. The predictions of the axisymmetric numerical model have been compared with a one dimensional NDWCT model. The difference between the predictions of tower cooling range is very low, generally around 1-2%. This extraordinarily close comparison supports the assumptions of one dimensional flow and bulk averaged heat transfer implicit in these models. Under the range of parameters tested here, the difference between the CFD model's predictions and those of the one dimensional models remained fairly constant, suggesting that there is no particular area where the flow or heat transfer becomes so skewed or non-uniform that the one dimensional model predictions begin to fail. An extended one dimensional model, with semi-two dimensional capability, has been developed for use with an evolutionary optimisation algorithm. The two dimensional characteristics are represented through a radial profile of the air enthalpy at the fill inlet which has been derived from the CFD results. The resulting optimal shape redistributes the fill volume from the tower centre to the outer regions near the tower inlet. The water flow rate is also increased here, as expected, to balance the cooling load across the tower, making use of the cooler air near the inlet. The improvement has been shown to be very small, however. The work demonstrates that, contrary to common belief, the potential improvement from multi-dimensional optimisation is actually quite small.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Vincent, Jonathan. "The role of domain decomposition in the parallelisation of genetic search for multi-modal numerical optimisation". Thesis, Southampton Solent University, 2001. http://ssudl.solent.ac.uk/1203/.

Full text of the source
Abstract:
This thesis is concerned with the optimisation of multi-modal numerical problems using genetic algorithms. Genetic algorithms are an established technique, inspired by principles of genetics and evolution, and have been successfully utilised in a wide range of applications. However, they are computationally intensive and, consequently, addressing problems of increasing size and complexity has led to research into parallelisation. This thesis is concerned with coarse-grained parallelism because of the growing importance of cluster computing. Current parallel genetic algorithm technology offers one coarse-grained approach, usually referred to as the island model. Parallelisation is concerned with the division of a computational system into components which can be executed concurrently on multiple processing nodes. It can be based on a decomposition of either the process or the domain on which it operates. The island model is a process based approach, which divides the genetic algorithm population into a number of co-operating sub-populations. This research examines an alternative approach based on domain decomposition: the search space is divided into a number of regions which are separately optimised. The aims of this research are to determine whether domain decomposition is useful in terms of search performance, and whether it is feasible when there is no a priori knowledge of the search space. It is established empirically that domain decomposition offers a more robust sampling of the search space. It is further shown that the approach is beneficial when there is an element of deception in the problem. However, domain decomposition is non-trivial when the domain is irregular. The irregularities of the search space result in a computational load imbalance which would reduce the efficiency of a parallel implementation. To address this, a dynamic load-balancing algorithm is developed which adjusts the decomposition of the search space at run time, according to the fitness distribution. Using this algorithm, it is shown that domain decomposition is feasible, and offers significant search advantages in the field of multi-modal numerical optimisation. The performance is compared with a serial implementation and an island model parallel implementation on a number of non-trivial problems. It is concluded that the domain decomposition approach offers superior performance on these problems in terms of rapid convergence and final solution quality. Approaches to the extension and generalisation of the approach are suggested for further work.
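The domain decomposition idea above, dividing the search space into regions that are optimised separately, can be sketched with a uniform box decomposition and an off-the-shelf evolutionary optimiser run in each region. The code below uses SciPy's differential evolution on a standard multimodal test function; it is a generic illustration, not the thesis's genetic algorithm or its dynamic load-balancing scheme.

```python
from itertools import product

import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """A standard multimodal test function (global minimum 0 at the origin)."""
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def decomposed_search(lb, ub, splits=3, seed=0):
    """Split each dimension into `splits` intervals and optimise every sub-box
    independently, returning the best result over all regions."""
    edges = [np.linspace(l, u, splits + 1) for l, u in zip(lb, ub)]
    best = (np.inf, None)
    # Enumerate all sub-boxes (splits**dim of them) -- fine for low dimensions;
    # in a parallel setting each box would go to a different processing node.
    for idx in product(range(splits), repeat=len(lb)):
        bounds = [(edges[d][i], edges[d][i + 1]) for d, i in enumerate(idx)]
        res = differential_evolution(rastrigin, bounds, seed=seed,
                                     maxiter=50, popsize=10, tol=1e-6)
        if res.fun < best[0]:
            best = (res.fun, res.x)
    return best

if __name__ == "__main__":
    f, x = decomposed_search(lb=[-5.12, -5.12], ub=[5.12, 5.12])
    print("best value:", round(f, 6), "at", np.round(x, 4))
```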
Style APA, Harvard, Vancouver, ISO itp.
44

Williamson, N. J. "Numerical modelling of heat and mass transfer and optimisation of a natural draft wet cooling tower". Connect to full text, 2008. http://ses.library.usyd.edu.au/handle/2123/4035.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--University of Sydney, 2007.
Title from title screen (viewed February 12, 2009). Includes graphs and tables. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Aerospace, Mechanical and Mechatronic Engineering. Includes bibliographical references. Also available in print form.
Style APA, Harvard, Vancouver, ISO itp.
45

Ordóñez, Malla Freddy. "Optimisation d'un récepteur solaire haute température à polydispersion de particules". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1072/document.

Pełny tekst źródła
Streszczenie:
Les centrales solaires à concentration sont des technologies prometteuses pour la production d'énergie d'origine renouvelable. Celles mettant en œuvre des cycles thermodynamiques à hautes températures, tels que les cycles combinés, permettent d'augmenter l'efficacité de la conversion solaire. Cependant, leurs implantations nécessitent le développement de nouveaux récepteurs à haute température (T > 1100 K), tels que les récepteurs solaires à particules (SPRs). Ce travail porte sur l'optimisation numérique des principaux paramètres pilotant l'efficacité de ce type de récepteurs, l'enjeu principal étant de minimiser les pertes par rayonnement thermique. Dans un premier temps, un modèle simplifié des transferts radiatifs dans un SPR a été développé. Le modèle considère un milieu particulaire soumis à un flux solaire concentré et collimaté. Le milieu émet, absorbe et diffuse le rayonnement de manière anisotrope. L'équation de transfert radiatif est résolue par une méthode à deux-flux (géométrie 1D) avec l'approximation delta-Eddington, permettant une obtention rapide des résultats. Cette méthode a été choisie pour son adéquation aux cas d'émission et de diffusion anisotrope. L'hypothèse de diffusion indépendante est utilisée afin de déterminer les propriétés optiques du milieu. La théorie de Lorenz-Mie et l'approximation de Henyey-Greenstein ont été utilisées pour calculer, respectivement, les efficacités optiques et la fonction de phase des particules. Ce modèle est mis en œuvre avec un algorithme d'optimisation par essaims particulaires, dans le but de déterminer la taille des particules, leur fraction volumique, et leur indice de réfraction optimums. Dans un deuxième temps, six matériaux réels sont sélectionnés afin de tenter de retrouver le résultat optimum obtenu précédemment avec un matériel idéal. Ces matériaux (HfB2, ZrB2, HfC, ZrC, W et SiC) sont pertinents du fait de leur comportement sélectif ou de leur absorptivité élevée. Afin de déterminer leurs indices de réfraction, la relation de dispersion de Kramers-Kronig a été utilisée à partir de données de réflectance issues de la littérature. Trois configurations de récepteurs ont été étudiées : a) un milieu homogène comprenant un seul type de particules, b) un milieu inhomogène comprenant deux matériaux différents, c) un milieu homogène comprenant des particules enrobées. D'après les résultats de ces configurations, les particules de W enrobées de SiC permettent d'atteindre des performances proches du cas idéal optimisé. Enfin, un modèle numérique de transfert thermique par convection et rayonnement a été développé, pour étudier l'influence de l'écoulement sur les pertes radiatives du récepteur. Il est basé sur une géométrie simple constituée d'un écoulement d'un mélange de gaz et de particules circulant entre deux plaques planes, l'une étant une fenêtre par laquelle pénètre perpendiculairement le flux solaire. Le modèle radiatif développé précédemment permet de calculer la divergence du flux radiatif, tandis que l'équation de l'énergie est résolue par une approximation de low-Mach. Ainsi, les conditions de l'écoulement et des propriétés radiatives que minimisent les pertes du récepteur sont déterminés. De futurs travaux pourront être élargis à de nouveaux matériaux candidats pour les récepteurs solaires à particules. Leur index de réfraction pourra être mesuré et comparé aux valeurs théoriques obtenues par les codes développés dans le cadre de ce travail
Solar Particle Receivers (SPRs) are promising candidates to operate at high temperatures (T > 1100 K) in Concentrating Solar Power (CSP) plants. They will permit the use of highly efficient thermodynamic cycles, such as a combined cycle (Brayton cycle + Rankine cycle). Nevertheless, the optimal conditions that reduce the receiver losses (and consequently maximize the receiver efficiency) remain to be studied. In this work, the principal parameters that drive the receiver efficiency are numerically optimized. To this end, a simplified radiative model is developed, which makes it possible to run the large number of simulations needed for such an optimization. The model consists of a 1D slab of particulate medium subjected to a collimated, concentrated solar flux. The medium emits, absorbs and anisotropically scatters energy. A two-stream method with a delta-Eddington approximation is implemented to solve the radiative transfer equation rapidly. Among the several two-stream approximations, the one proposed by Joseph et al. (1976) is chosen for its good treatment of anisotropic scattering. The volume optical properties are computed under the independent scattering hypothesis, the single-particle optical properties with the Lorenz-Mie theory, and the phase function with the Henyey-Greenstein approximation. This model is coupled with a Particle Swarm Optimization algorithm to find the optimal particle size, volume fraction and complex refractive index for the receiver. Once the optimal conditions for an ideal SPR are found, an attempt is made to replicate these results using real materials. Six materials (HfB2, ZrB2, HfC, ZrC, W and SiC) are chosen for their spectrally selective behavior or their high absorptivity. At this stage, an important difficulty is the lack of data on the refractive indices of these materials; the Kramers-Kronig dispersion relations are therefore used to derive the refractive indices from reflectance data. Three SPR configurations are then considered: (1) a homogeneous medium with only one kind of particle, (2) a medium with a mixture of two materials, and (3) a homogeneous medium with coated particles. The results for the three configurations are compared with those obtained using particles made of an ideal material. A remarkable result is obtained with W particles coated with SiC: this configuration reduces the radiative losses to values approaching the ideal minimum. Finally, the influence of the fluid flow on the radiative losses is studied through a coupled convection-radiation heat transfer model. A simple geometry is adopted: a gas-particle mixture flowing between two parallel plates, one of which is a window, with the concentrated solar radiation entering perpendicular to the flow. The energy equation is solved using a low-Mach approximation, and the divergence of the radiative flux is computed with the radiative model developed earlier. A parametric study is conducted to investigate the influence of the optical properties on the radiative losses. In future work, further candidate materials for solar particle receivers remain to be investigated; to this end, the refractive indices of a number of materials should be measured, and the codes developed here will be useful for that investigation.
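The two closure choices named in the abstract, the Henyey-Greenstein phase function and the delta-Eddington scaling of Joseph et al. (1976), take the following standard forms (quoted from the general radiative transfer literature rather than from the thesis itself):

```latex
% Henyey-Greenstein phase function with asymmetry parameter g
\[
  p_{\mathrm{HG}}(\cos\Theta) \;=\; \frac{1 - g^{2}}{\bigl(1 + g^{2} - 2g\cos\Theta\bigr)^{3/2}}
\]
% Delta-Eddington scaling: a forward-scattering fraction f = g^2 is removed from the phase
% function, and the optical depth, single-scattering albedo and asymmetry factor are rescaled
\[
  \tau' = (1 - f\omega)\,\tau, \qquad
  \omega' = \frac{(1 - f)\,\omega}{1 - f\omega}, \qquad
  g' = \frac{g - f}{1 - f} = \frac{g}{1 + g}, \qquad f = g^{2}
\]
```

The scaled quantities feed a two-stream solution that treats the strong forward peak of the particle phase function as unscattered transmission, which is what makes the approach fast enough for an optimization loop.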
Style APA, Harvard, Vancouver, ISO itp.
46

Djemal, Fathi. "Analyse et optimisation des batteurs dynamiques non linéaires". Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0007/document.

Pełny tekst źródła
Streszczenie:
Les vibrations qui sont en général source de dérangement, d’usure et même destruction des machines et structures mécaniques doivent être contrôlées ou éliminées. Pour cette raison, la lutte contre les vibrations est devenue depuis des années un enjeu majeur pour les chercheurs de laboratoire et de développement dans l’industrie afin de développer des solutions efficaces contre ces problèmes. De nombreuses technologies ont donc été développées. Parmi ces technologies, les absorbeurs de vibration non linéaires présentent des performances importantes dans l’atténuation de vibration sur une large bande de fréquences. C’est dans ce contexte que cette thèse se focalise sur l’analyse et l’optimisation des absorbeurs de vibration non linéaires. L’objectif de cette thèse est d’analyser le comportement dynamique non linéaire des systèmes présentant des absorbeurs de vibration non linéaires. Pour cela, un modèle dynamique d’un système à deux degrés de liberté est développé mettant en équations le comportement non linéaire. La résolution des équations de mouvement est faite par la Méthode Asymptotique Numérique (MAN). La performance de cette méthode est montrée via une comparaison avec la méthode de Newton-Raphson. L’analyse des modes non linéaires du système ayant une non-linéarité cubique est faite par une formulation explicite des Fonctions de Réponse en Fréquence non linéaires (FRFs) et les Modes Normaux Non linéaires (MNNs). Un démonstrateur sur la base d’un système simple à deux degré de liberté est mis en place afin de recaler les modèles envisagés sur la base des résultats expérimentaux trouvés
Vibrations are usually undesired phenomena, as they may cause discomfort, disturbance, damage, and sometimes destruction of machines and structures; they must therefore be reduced, controlled or eliminated. For this reason, vibration attenuation has become a major concern for scientists and researchers seeking effective solutions to these problems, and many technologies have been developed. Among them, nonlinear vibration absorbers offer significant attenuation performance over a wide frequency band. In this context, this thesis focuses on the analysis and optimization of nonlinear vibration absorbers. The objective of the thesis is to analyze the nonlinear dynamic behavior of systems with nonlinear vibration absorbers. For this purpose, a dynamic model of a two-degree-of-freedom system is developed. The Asymptotic Numerical Method (ANM) is used to solve the nonlinear equations of motion, and its performance is shown via a comparison with the Newton-Raphson method. The nonlinear modal analysis of the system with cubic nonlinearity is carried out through an explicit formulation of the nonlinear Frequency Response Functions (FRFs) and the Nonlinear Normal Modes (NNMs). An experimental study is performed to validate the numerical results.
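The Asymptotic Numerical Method mentioned above replaces Newton iteration by a high-order power-series continuation. In its usual form (sketched below from the general ANM literature, not from the thesis), the residual equation R(U, λ) = 0 is expanded about a known solution point (U0, λ0):

```latex
% Series representation of the solution branch in a path parameter a (truncated at order N)
\[
  \mathbf{U}(a) = \mathbf{U}_0 + a\,\mathbf{U}_1 + a^{2}\mathbf{U}_2 + \dots + a^{N}\mathbf{U}_N,
  \qquad
  \lambda(a) = \lambda_0 + a\,\lambda_1 + \dots + a^{N}\lambda_N
\]
% Collecting powers of a yields N linear problems that share the same tangent operator K_t,
% so the matrix is factorised only once per continuation step; each right-hand side depends
% only on the lower-order terms already computed.
\[
  \mathbf{K}_t\,\mathbf{U}_k \;=\; \mathbf{F}^{\mathrm{nl}}_{k}\!\left(\mathbf{U}_1,\dots,\mathbf{U}_{k-1},\lambda_1,\dots,\lambda_{k}\right),
  \qquad k = 1,\dots,N
\]
```

This single-factorisation property is what makes the ANM competitive with Newton-Raphson in the comparison reported above, particularly near turning points of the nonlinear frequency response.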
Style APA, Harvard, Vancouver, ISO itp.
47

Guillaud, Nathanaël. "Simulation et optimisation de forme d'hydroliennes à flux transverse". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAI061.

Pełny tekst źródła
Streszczenie:
Dans le cadre de la production d'électricité par énergie renouvelable, cette thèse a pour objectif de contribuer à l'amélioration des performances hydrodynamiques des hydroliennes à flux transverse conçues par HydroQuest. Pour y parvenir, deux axes d'étude principaux sont proposés. Le premier consiste à améliorer la compréhension de la performance de l'hydrolienne et de l'écoulement en son sein par voie numérique. L'influence du paramètre d'avance ainsi que celle de la solidité de l'hydrolienne sont étudiées. Les écoulements mis en jeu étant complexes, une méthode de type Simulation des Grandes Échelles 3D est utilisée afin de les restituer au mieux. Le phénomène de décrochage dynamique, qui apparaît pour certains régimes de fonctionnement de l'hydrolienne, fait l'objet d'une étude à part entière sur un cas de profil oscillant. Le second axe se concentre sur les carénages de l'hydrolienne qui font l'objet d'une procédure d'optimisation numérique. Afin de pouvoir réaliser les nombreuses simulations requises en un temps réaliste, des méthodes de type Unsteady Reynolds-Averaged Navier-Stokes 2D moins coûteuses et fournissant une précision suffisante pour ce type d'étude sont utilisées.
In the context of renewable electricity production, this study aims to contribute to improving the hydrodynamic performance of the vertical-axis hydrokinetic turbines designed by HydroQuest. To achieve this objective, two approaches are used. The first consists in improving, by numerical means, the understanding of the turbine performance and of the flow through the turbine. The influence of the tip speed ratio and of the turbine solidity is investigated. Because the flow through the turbine is complex, a 3D Large Eddy Simulation approach is used to resolve it as faithfully as possible. The dynamic stall phenomenon, which can occur in vertical-axis hydrokinetic turbines, is also studied separately in an oscillating-blade configuration. The second approach consists in the numerical optimization of the turbine channeling device. To perform the large number of simulations required within a realistic time, a less expensive 2D Unsteady Reynolds-Averaged Navier-Stokes approach, which provides sufficient accuracy for this type of study, is used.
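For reference, the two operating parameters varied in the first study are usually defined as follows for a vertical-axis (cross-flow) turbine; the symbols and the solidity convention are assumptions, since the thesis may use a different normalisation:

```latex
% Tip speed ratio and solidity for a vertical-axis turbine of radius R, with N blades of chord c,
% rotating at angular speed \omega in a free-stream velocity U_\infty
\[
  \lambda \;=\; \frac{\omega R}{U_\infty},
  \qquad
  \sigma \;=\; \frac{N c}{R}
  \quad\text{(other conventions divide by } 2R \text{ or } \pi D\text{)}
\]
```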
Style APA, Harvard, Vancouver, ISO itp.
48

Yang, Jianliang. "Condensational growth of atmospheric aerosol particles in an expanding water saturated air flow: numerical optimisation and experiment". [S.l. : s.n.], 1999. http://ArchiMeD.uni-mainz.de/pub/2000/0005/diss.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
49

Temperley, Neil Colin, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "Optimisation of an Ultrasonic Flow Meter Based on Experimental and Numerical Investigation of Flow and Ultrasound Propagation". Awarded by: University of New South Wales. School of Mechanical and Manufacturing Engineering, 2002. http://handle.unsw.edu.au/1959.4/22044.

Pełny tekst źródła
Streszczenie:
This thesis presents a procedure to optimise the shape of a coaxial transducer ultrasonic flow meter. The technique uses separate numerical simulations of the fluid flow and the ultrasound propagation within a meter duct. A new flow meter geometry has been developed, having a significantly improved (smooth and monotonic) calibration curve. In this work, the complex fluid flow field and its influence on the propagation of ultrasound in a cylindrical flow meter duct are investigated. A geometric acoustics (ray tracing) propagation model is applied to a flow field calculated by a commercial Computational Fluid Dynamics (CFD) package. The simulation results are compared to measured calibration curves for a variety of meter geometries having varying lengths and duct diameters. The modelling shows reasonable agreement with the calibration characteristics for several meter geometries over a Reynolds number range of 100 to 100,000 (based on bulk velocity and meter duct diameter). Various CFD simulations are validated against flow visualisation measurements, Laser Doppler Velocimetry measurements, or published results. The thesis includes software to calculate the acoustic ray propagation and also to calculate the optimal shape for the annular gap around the transducer housings in order to achieve the desired flow acceleration. A dimensionless number is proposed to characterise the mean deflection of an acoustic beam due to interaction with a fluid flow profile (or acoustic velocity gradient). For flow in a cylindrical duct, the 'acoustic beam deflection number' is defined as M g* (L/D)^2, where: M is the Mach number of the bulk velocity; g* is the average non-dimensionalised velocity gradient insonified by the acoustic beam (g* is a function of transducer diameter - typically g* = 0.5 to 4.5); L is the transducer separation; and D is the duct diameter. Large values of this number indicate considerable beam deflection that may lead to undesirable wall reflections and diffraction effects. For a single-path, coaxial-transducer ultrasonic flow meter, there are practical limits to the length of a flow meter and to the maximum size of a transducer for a given duct diameter. The 'acoustic beam deflection number' characterises the effect of these parameters.
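Written out, the dimensionless group proposed in the abstract reads as follows (the symbol N_b for the group is introduced here purely for convenience; the definitions of the remaining symbols are those given in the abstract):

```latex
% Acoustic beam deflection number, as defined in the abstract
% M  : Mach number of the bulk velocity
% g* : average non-dimensionalised velocity gradient insonified by the acoustic beam
% L  : transducer separation,  D : duct diameter
\[
  N_{b} \;=\; M \, g^{*} \left(\frac{L}{D}\right)^{2}
\]
```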
Style APA, Harvard, Vancouver, ISO itp.
50

Kirollos, Benjamin William Mounir. "Aerothermal optimisation of novel cooling schemes for high pressure components using combined theoretical, numerical and experimental techniques". Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:72168abd-48f7-49c6-a6ef-3d4a9f6cc6e9.

Pełny tekst źródła
Streszczenie:
The continuing maturation of metal laser-sintering technology has presented the opportunity to de-risk the engine design process by experimentally down-selecting high pressure nozzle guide vane (HPNGV) cooling designs using laboratory tests of laser-sintered - instead of cast - parts to assess thermal performance. Such tests are very promising as a reliable predictor of the thermal-paint-engine-test, which is used during certification to validate cooling system designs. In this thesis, conventionally cast and laser-sintered parts are compared in back-to-back experimental tests at engine-representative conditions over a range of coolant mass flow rates. Tests were performed in the University of Oxford Annular Sector Heat Transfer Facility. The aerothermal performance of the cast and laser-sintered parts is shown to be very similar, demonstrating the utility of laser-sintered parts for preliminary engine thermal assessments. It can be shown that in most situations counter-current heat exchanger arrangements outperform co-current arrangements. This concept, though familiar in the heat exchanger community, has not yet been applied to hot-section gas turbine cooling. In this thesis, the performance benefit of novel reverse-pass cooling systems - that is, systems in which the internal coolant flows substantially in the opposite direction to the mainstream flow - is demonstrated numerically and experimentally in film-cooled HPNGVs. It is shown numerically that reverse-pass cooling systems always act to flatten lateral wall temperature variation and to reduce peak metal temperature by maximising internal convective cooling at the point of minimum film cooling effectiveness. Reverse-pass cooling systems therefore require less coolant than other internal flow arrangements to maintain acceptable metal temperatures. The benefits of reverse-pass cooling can be fully realised in systems with long, undisturbed surface length, such as the suction-side (SS) of a HPNGV, afterburner liners, HPNGV platforms, and combustor liners. Three engine-scale HPNGVs with SS reverse-pass cooling systems were subsequently designed using bespoke numerical conjugate heat transfer and aerodynamic models to satisfy engine-realistic aerothermal and manufacturing constraints. The reverse-pass HPNGVs were metal laser-sintered and tested in back-to-back experiments with conventionally cooled HPNGVs in the Annular Sector Heat Transfer Facility. The reverse-pass HPNGVs are shown to reduce peak engine metal temperature by 30 K and reduce mean SS engine metal temperature by 60 K compared to conventionally cooled HPNGVs with the same cooling mass flow. A physically-based infra-red thermography procedure was implemented which takes into account the transmittance of the external optics, the surface emissivity of the object, the black-body temperature-radiometric characteristics of the camera, and the time-varying surrounding radiance. Failure to account for surrounding radiance is shown to result in an absolute error in overall cooling effectiveness of 0.05. A new experimental facility - the Coolant Capacity Rig - was developed in order to measure row-by-row, compartmental and total coolant capacity of HPNGVs to a precision of 0.03%, over a large range of pressure ratios and mass flows using a differential mass flow measurement technique, bypass system, and calibrated mass flow orifice. 
A novel method for estimating internal loss coefficients from the coolant capacity measurements has been devised which, uniquely, does not require internal pressure measurement.
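The overall cooling effectiveness quoted in the error estimate above is commonly defined from the mainstream, metal and coolant temperatures; the exact definition used in the thesis is not given in the abstract, so the following standard form is assumed:

```latex
% Overall cooling effectiveness (standard definition, assumed here)
% T_g : mainstream gas temperature,  T_m : local metal temperature,  T_c : coolant inlet temperature
\[
  \theta \;=\; \frac{T_{g} - T_{m}}{T_{g} - T_{c}}
\]
```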
Style APA, Harvard, Vancouver, ISO itp.