Dissertations on the topic "Cost model optimisation"


Consult the top 28 dissertations for your research on the topic "Cost model optimisation".


1

Wang, Mengyu. "Model-based Optimisation of Mixed Refrigerant LNG Processes." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17387.

Abstract:
Natural gas liquefaction processes are energy- and cost-intensive. This thesis pursues the optimisation of propane-precooled mixed refrigerant (C3MR) processes considering variations in upstream gas well conditions, in order to maximise gas well life. Four objective functions were selected for the design optimisation of the C3MR and dual mixed refrigerant (DMR) processes: 1) total shaft work (W), 2) total capital investment, 3) total annualised cost, and 4) total capital cost of the compressors and the main cryogenic heat exchanger (MCHE). Optimisation results show that objective function 4 is more suitable than the others for reducing both W and UA (the MCHE design parameter), leading to a 15% reduction in specific power for C3MR and 27% for DMR, while achieving lower UA values relative to the baseline. The operational optimisation of the C3MR process and its split-propane version (C3MR-SP) was performed using four objective functions: 1) total shaft work, 2-3) two different exergy efficiency expressions, and 4) operating expenditure (OPEX). Objective function 3 yields the lowest specific shaft work, 1469 MJ/tonne-LNG; for C3MR-SP, however, the lowest specific shaft work is obtained under objective function 1. A comparison of optimisation results across literature studies is impractical due to dissimilar process conditions, feed gas conditions, product quality, and equipment size. A sensitivity analysis highlights the effect of feed gas conditions on the performance of the C3MR: for instance, as LNG production decreases from 3 MTPA to 2.4 MTPA over time, the specific OPEX increases from $128/tonne-LNG to $154/tonne-LNG. A subsequent study focused on the energy benefits of integrating a natural gas liquids (NGL) recovery unit with the C3MR process in two configurations: an NGL recovery unit integrated within C3MR shows a 0.74% increase in energy consumption as the methane concentration of the feed gas decreases, whereas a front-end NGL recovery unit shows only a 0.18% decrease.
2

Viduto, Valentina. "A risk assessment and optimisation model for minimising network security risk and cost." Thesis, University of Bedfordshire, 2012. http://hdl.handle.net/10547/270440.

Abstract:
Network security risk analysis has received great attention within the scientific community, due to the current proliferation of network attacks and threats. Although considerable effort has been placed on improving security best practices, insufficient effort has been expended on seeking to understand the relationship between risk-related variables and objectives related to cost-effective network security decisions. This thesis seeks to improve the body of knowledge focusing on the trade-offs between financial costs and risk while analysing the impact an identified vulnerability may have on confidentiality, integrity and availability (CIA). Both security best practices and risk assessment methodologies have been extensively investigated to give a clear picture of the main limitations in the area of risk analysis. The work begins by analysing information visualisation techniques, which are used to build attack scenarios and identify additional threats and vulnerabilities. Special attention is paid to attack graphs, which have been used as a base to design a novel visualisation technique, referred to as the Onion Skin Layered Technique (OSLT), used to improve system knowledge as well as for threat identification. By analysing a list of threats and vulnerabilities during the first risk assessment stages, the work focuses on the development of a novel Risk Assessment and Optimisation Model (RAOM), which expands the knowledge of risk analysis by formulating a multi-objective optimisation problem in which objectives such as cost and risk are to be minimised. The optimisation routine is developed to accommodate conflicting objectives and to provide the human decision maker with an optimal solution set. The aim is to minimise the cost of security countermeasures without increasing the risk of a vulnerability being exploited by a threat and resulting in some impact on CIA. Due to the multi-objective nature of the problem, a performance comparison between multi-objective Tabu Search (MOTS), Exhaustive Search and a multi-objective Genetic Algorithm (MOGA) has also been carried out. Finally, extensive experimentation has been carried out with both artificial and real-world problem data (taken from the case study) to show that the method is capable of delivering solutions for real-world problem data sets.
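To make the cost-risk trade-off concrete, the sketch below builds a toy bi-objective countermeasure-selection problem and extracts its Pareto front by exhaustive search (viable only because the example is tiny). All names and numbers are invented for illustration; this is not the RAOM itself:

```python
from itertools import chain, combinations

# Hypothetical countermeasures: (name, cost, residual-risk factor).
# In the thesis, risk derives from a vulnerability's impact on
# confidentiality, integrity and availability (CIA).
COUNTERMEASURES = [("firewall", 5.0, 0.30), ("ids", 8.0, 0.25),
                   ("patching", 3.0, 0.40), ("training", 2.0, 0.50)]
BASE_RISK = 1.0  # normalised risk with no countermeasures in place

def evaluate(subset):
    """Return (total cost, residual risk); risk factors are assumed
    independent and multiplicative."""
    cost = sum(c for _, c, _ in subset)
    risk = BASE_RISK
    for _, _, factor in subset:
        risk *= factor
    return cost, risk

def pareto_front(points):
    """Keep points not dominated in both cost and risk (minimised)."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

subsets = chain.from_iterable(combinations(COUNTERMEASURES, k)
                              for k in range(len(COUNTERMEASURES) + 1))
points = [evaluate(s) for s in subsets]
for cost, risk in pareto_front(points):
    print(f"cost={cost:5.1f}  residual_risk={risk:.3f}")
```

Exhaustive enumeration explodes combinatorially, which is why the thesis also evaluates multi-objective Tabu Search and a multi-objective Genetic Algorithm on larger instances.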
3

Burnett, Robert Carlisle. "A trade-off model between cost and reliability during the design phase of software development." Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/2104.

Abstract:
This work proposes a method for estimating the development cost of a software system with a modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set so as to meet the overall required reliability of the system. Consequently, the individual cost estimates for each module, and the overall cost of the software system, are linked to the overall required reliability. Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this enables a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target. The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase: a higher level of desired reliability will require more testing effort and will therefore cost more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification and historical data. Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula for estimating the overall system reliability is established; using this formula, a procedure to allocate the reliability requirement for each module is derived via a minimisation process which takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios for cost and the overall required reliability. The foremost application of the outcome of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.
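The chain of reasoning described here, from module reliabilities to system reliability to testing cost, can be sketched numerically. The visit-weighted product model and the logarithmic effort curve below are common illustrative assumptions, not the formulas derived in the thesis:

```python
import math

# Hypothetical execution profile: expected visits to each module per
# run, e.g. derived from a Markov model of inter-module control flow.
VISITS = {"parser": 1.0, "core": 4.2, "io": 2.5}

def system_reliability(module_rel):
    """Approximate system reliability as a visit-weighted product of
    module reliabilities (an illustrative architecture-based model)."""
    return math.prod(module_rel[m] ** v for m, v in VISITS.items())

def testing_cost(module_rel, unit_cost=100.0):
    """Assumed cost model: testing effort grows without bound as a
    module's required reliability approaches 1."""
    return sum(unit_cost * -math.log(1.0 - r) for r in module_rel.values())

# Two candidate allocations: uniform targets vs. targets skewed toward
# the most heavily visited module.
uniform = {m: 0.995 for m in VISITS}
skewed = {"parser": 0.990, "core": 0.998, "io": 0.993}
for alloc in (uniform, skewed):
    print(f"R_sys={system_reliability(alloc):.4f}  cost={testing_cost(alloc):7.1f}")
```

Skewing reliability targets toward heavily used modules can reach a similar system reliability at a different cost, which is exactly the trade-off the allocation procedure explores.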
4

Wang, Lina. "A multi-disciplinary optimisation model for passenger aircraft wing structures with manufacturing and cost considerations." Thesis, University of Salford, 2000. http://usir.salford.ac.uk/26957/.

Abstract:
In traditional aircraft wing structural design, the emphasis has been on pursuing minimum weight or improved performance. Manufacturing complexity and cost assessments are rarely considered, because it is usually assumed that the minimum-weight design is also the minimum-cost design. However, experience from industry has shown that this is not necessarily the case. It has been realised that where no manufacturing constraints are imposed, the extra machining cost can erode the advantages of the reduced weight. Manufacturing cost comprises material cost and machining cost: whilst reducing weight can reduce the material cost, the overall cost may not go down if the manufacturing complexity increases greatly as a result. Indeed, if the manufacturing complexity is not checked, the machining cost could increase by more than the amount by which the material cost reduces. To enable the structural manufacturing complexity to be controlled, manufacturing constraints are established in this thesis and integrated into the optimisation of aircraft wing structural design. As far as manufacturing complexity is concerned, attention has been paid to both 3-axis and 5-axis machining. The final designs of the optimisations with manufacturing constraints demonstrate the effectiveness of these constraints in guiding the design in a manufacturing-feasible direction.
5

Low, Wei Zhe. "Towards cost model-driven log-based business process improvement." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/97727/1/Wei%20Zhe_Low_Thesis.pdf.

Abstract:
This doctoral study focused on analysing business process execution histories to initiate evidence-based business process improvement activities. The researcher developed techniques to explore and visualise better ways of executing a business process, as well as to analyse the impact of the proposed changes on cost. This research enables organisations to gain a better understanding of how the same business process can be performed in a more efficient manner, taking into consideration the trade-offs between processing time, cost, and employee utilisation.
6

Moberg, Pontus, and Filip Svensson. "Cost Optimisation through Statistical Quality Control : A case study on the plastic industry." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21922.

Abstract:
Background. In 1924, Shewhart was the first to describe the possibilities that come with having a statistically robust process. Since his discovery, the importance of a robust process, together with the consequences of an unstable one, has become ever more apparent. A firm with a manufacturing process that is out of statistical control tends to waste money, increase risks, and provide uncertain quality to its customers. The framework of Statistical Quality Control (SQC) has been developed since its founding, and today it is a well-established tool used in several industries with successful results. When it was first conceived, the complicated calculations involved had to be performed manually. With digitalisation, these quality tools can be used in real time, providing high-precision information on product quality. Despite this, not all firms and industries have adopted these tools. The costs that occur in relation to quality, either as a consequence of maintaining good quality or arising from poor quality, are called the Cost of Quality (COQ). These are often displayed through one of several available cost models. In this thesis, we have created a cost model heavily inspired by the P-A-F model. Several earlier studies have shown noticeable results by using SPC, COQ or a combination of the two. Objectives. The objective of this study is to determine whether cost optimisation can be achieved through SQC implementation, the cost optimisation being a consequence of stabilising an unstable process and of the new way of thinking that comes with SQC. Further, it aims to explore the relationship between cost optimisation and SQC, adding a layer of complexity and understanding to the spread of statistical quality tools and their importance for several industries. This contributes to tightening the bonds between production economics, statistical tools and quality management even further. Methods. This study made use of two closely related methodologies, combining SPC with Cost of Quality, in the hope of demonstrating a possible cost reduction through stabilising the process. The cost reduction was displayed using an optimisation model based on the P-A-F model (Prevention, Appraisal, External Failure and Internal Failure), further developed by adding a fifth parameter for optimising materials (OM). To assess whether the process was in control, we focused on the thickness of the PVC floor: 1008 data points over three weeks were retrieved from the production line, and by analysing these, a conclusion on whether the process was in control could be drawn. Results. None of the three examined weeks was found to be in statistical control, and therefore neither was the total sample. Under the assumption of the firm achieving 100% statistical control over its production process, a possible cost reduction of 874 416 SEK yearly was found. Conclusions. This study has shown that by focusing on stabilising the production process and gaining control over quality-related costs, significant yearly savings can be achieved. Furthermore, an annual cost reduction was found by optimising the usage of materials, relocating the assurance of thickness variation from post-production to during production.
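A minimal sketch of the core SPC check used in such a study, a Shewhart individuals control chart on thickness measurements, is given below; the data is simulated here, not the 1008 production measurements from the thesis:

```python
import random

# Simulated stand-in for PVC-floor thickness measurements (mm).
random.seed(1)
thickness = [2.0 + random.gauss(0, 0.05) for _ in range(1008)]
thickness[500] += 0.4  # inject one assignable-cause disturbance

mean = sum(thickness) / len(thickness)
# Short-term sigma from the average moving range (d2 = 1.128 for
# moving ranges of size 2, as in a standard individuals chart).
moving_ranges = [abs(a - b) for a, b in zip(thickness, thickness[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

out = [(i, round(x, 3)) for i, x in enumerate(thickness)
       if not lcl <= x <= ucl]
print(f"CL={mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print(f"{len(out)} point(s) signal out-of-control: {out[:5]}")
```

Points outside these limits (or violations of run rules) are the kind of evidence behind a conclusion that a process week is not in statistical control.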
7

Liu, Tianxiao. "Proposition d'un cadre générique d'optimisation de requêtes dans les environnements hétérogènes et répartis." Thesis, Cergy-Pontoise, 2011. http://www.theses.fr/2011CERG0513.

Abstract:
This thesis proposes a generic framework for query optimization in heterogeneous and distributed environments. We propose a generic source description model (GSD), which allows describing any type of information related to query processing and optimization. With GSD, we can in particular use cost information to calculate the costs of execution plans. Our generic optimization framework provides a set of unitary functions used to perform optimization by applying different search strategies. Our experimental results show the accuracy of the cost calculation when using GSD, and the flexibility of our generic framework when changing search strategies. The proposed framework has been implemented and integrated into a data integration product (DVS) marketed by Xcalia - Progress Software Corporation. For queries with many inter-site joins accessing large data sources, the time needed to find the optimal plan is in the order of 2 seconds, and the execution time of the optimal plan is reduced by a factor of 28 compared with the initial non-optimized plan.
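As a rough illustration of cost-based plan selection with interchangeable search strategies (the mechanism the framework above provides), here is a toy comparison of exhaustive and greedy left-deep join ordering; the per-source figures and the uniform selectivity are invented, and this is not the GSD model itself:

```python
import itertools

# Toy stand-in for source cost information: per-source scan cost,
# also (ab)used as cardinality, plus a uniform join selectivity.
SCAN = {"A": 100.0, "B": 400.0, "C": 50.0}
SEL = 0.01

def plan_cost(order):
    """Cost of a left-deep join order: scan costs plus intermediate sizes."""
    cost = rows = SCAN[order[0]]
    for src in order[1:]:
        cost += SCAN[src] + rows * SCAN[src] * SEL
        rows *= SCAN[src] * SEL
    return cost

def exhaustive(sources):
    return min(itertools.permutations(sources), key=plan_cost)

def greedy(sources):
    order = [min(sources, key=SCAN.get)]
    rest = set(sources) - set(order)
    while rest:
        nxt = min(rest, key=lambda s: plan_cost(order + [s]))
        order.append(nxt)
        rest.remove(nxt)
    return tuple(order)

for strategy in (exhaustive, greedy):
    best = strategy(list(SCAN))
    print(strategy.__name__, best, round(plan_cost(best), 1))
```

Swapping `exhaustive` for `greedy` changes only the search strategy, not the cost model, which is the separation of concerns the generic framework is built around.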
8

Belghoul, Abdeslem. "Optimizing Communication Cost in Distributed Query Processing." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC025/document.

Abstract:
In this thesis, we take a complementary look at the problem of optimizing the time for communicating query results in distributed query processing, by investigating the relationship between the communication time and the middleware configuration. Indeed, the middleware determines, among other things, how data is divided into batches and messages before being communicated over the network. Concretely, we focus on the following research question: given a query Q and a network environment, what is the best middleware configuration that minimizes the time for transferring the query result over the network? To the best of our knowledge, the database research community does not have well-established strategies for middleware tuning. We first present an intensive experimental study that emphasizes the crucial impact of middleware configuration on the time for communicating query results. We focus on two middleware parameters that we empirically identified as having an important influence on the communication time: (i) the fetch size F (i.e., the number of tuples in a batch that is communicated at once to an application consuming the data) and (ii) the message size M (i.e., the size in bytes of the middleware buffer, which corresponds to the amount of data that can be communicated at once from the middleware to the network layer; a batch of F tuples can be communicated via one or several messages of M bytes). Then, we describe a cost model for estimating the communication time, based on how data is communicated between computation nodes. Precisely, our cost model rests on two crucial observations: (i) batches and messages are communicated differently over the network (batches are communicated synchronously, whereas messages within a batch are pipelined and communicated asynchronously), and (ii) due to network latency, it is more expensive to communicate the first message in a batch than any subsequent message in the same batch. We propose an effective strategy for calibrating the network-dependent parameters of the communication time estimation function, i.e., the costs of the first message and of non-first messages in their batch. Finally, we develop an optimization algorithm to compute the values of the middleware parameters F and M that minimize the communication time. The proposed algorithm quickly finds (in a small fraction of a second) values of F and M that represent a good trade-off between low resource consumption and low communication time. The proposed approach has been evaluated using a dataset issued from an application in Astronomy.
9

Verrecht, Bart. "Optimisation of a hollow fibre membrane bioreactor for water reuse." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/6779.

Abstract:
Over the last two decades, implementation of membrane bioreactors (MBRs) has increased due to their superior effluent quality and low plant footprint. However, they are still viewed as a high-cost option, with regard to both capital and operating expenditure (capex and opex). The present thesis extends the understanding of the impact of design and operational parameters of membrane bioreactors on energy demand, and ultimately on whole-life cost. A simple heuristic aeration model based on a general algorithm for flux versus aeration shows the benefits of adjusting the membrane aeration intensity to the hydraulic load. It is experimentally demonstrated that a lower aeration demand suffices for sustainable operation when 10:30 intermittent aeration is compared to continuous aeration, with associated energy savings of up to 75%, without a penalty in terms of fouling rate. The applicability of activated sludge modelling (ASM) to MBRs is verified on a community-scale MBR, resulting in accurate predictions of the dynamic nutrient profile. Lastly, a methodology is proposed to optimise the energy consumption by linking the biological model with empirical correlations for energy demand, taking into account the impact of high MLSS concentrations on oxygen transfer. The determining factors for the costing of MBRs differ significantly depending on the size of the plant. Operational cost reduction in small MBRs relies on process robustness with minimal manual intervention to suppress labour costs, while energy consumption, mainly for aeration, is the major contributor to opex for a large MBR. A cost sensitivity analysis shows that the other main factors influencing the cost of a large MBR, in terms of both capex and opex, are membrane costs and replacement interval, future trends in energy prices, sustainable flux, and the average plant utilisation, which depends on the amount of contingency built in to cope with changes in the feed flow.
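A back-of-the-envelope check of the 75% figure, under the common assumption that 10:30 denotes cycles of 10 s of membrane aeration followed by 30 s off:

```latex
\text{aeration duty cycle} = \frac{10\ \text{s}}{10\ \text{s} + 30\ \text{s}} = 25\%,
\qquad \text{maximum aeration energy saving} = 1 - 25\% = 75\%.
```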
10

Antomarchi, Anne-Lise. "Conception et pilotage d'un atelier intégrant la fabrication additive." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC035/document.

Abstract:
Additive manufacturing is a rapidly growing field. However, companies still question the use of additive manufacturing for mass production. The problem raised in this research is: how can the laser powder-bed fusion process be made industrially viable? Our work addresses the design and management of workshops integrating additive manufacturing, covering the complete process of obtaining a part, according to the three levels of decision: strategic, tactical and operational. At the strategic level, strong decisions on investment, machine selection and organisation have to be taken, with important economic stakes. The aim is to define a multicriteria optimisation method for the modular design of a production system integrating additive manufacturing in the presence of uncertain data, optimal over both the long term and the short term. From a tactical point of view, not all parts are necessarily relevant candidates for additive manufacturing. In this work, we developed a decision-support tool that evaluates the relevance of additive manufacturing for obtaining parts within a global cost approach. At the operational level, we propose a tool based on flow simulation that converts customer orders into production orders and schedules them so as to guarantee the efficiency of the workshop. This research was developed in collaboration with industrial partners, AddUp, MBDA and Dassault, who contribute to our work and enable us to confront our tools with industrial reality.
11

Sandve, Kjell. "Cost analysis and optimal maintenance planning for monotone, repairable systems." Thesis, Robert Gordon University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336620.

12

Martinod, Restrepo Ronald Mauricio. "Politiques d’exploitation et de maintenance intégrées pour l’optimisation économique, sociétale et environnementale des systèmes de transports urbains interconnectés." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0069.

Abstract:
Les systèmes de transport public urbain influencent l'infrastructure des agglomérations et la vie de leurs habitants tout en stimulant directement l'économie. Les systèmes de transport public urbain intelligents contribuent à améliorer la qualité de vie et l'environnement dans les villes. Le développement rapide des solutions de transport urbain a conduit de très nombreux opérateurs à se porter sur ce marché empêchant ainsi une logique globale de l’offre. Ces optimisations discrètes, privées de toutes concertations entre opérateurs de transport intervenant sur un même périmètre, interdit l’identification d’un optimum global. En conséquence, le fonctionnement inefficace des systèmes de transport public urbain ne réduit pas nécessairement la charge environnementale, et les opérateurs de transport urbain peuvent ne pas être en mesure de la gérer de manière durable. Pour répondre à ces défis, cette thèse propose une méthodologie associée à des modèles mathématiques qui sont développés à travers des approches d’optimisation pour une gestion systémique des réseaux de transport public multimodal, et ce afin d’assurer le meilleur taux de service aux usagers tout en minimisant les coûts et les externalités sociétales afin de satisfaire au principe de durabilité, fréquemment exprimé dans les plans de développement urbains
Urban public transport systems influence the infrastructure of urban areas and the lives of their inhabitants while directly stimulating the economy. Intelligent urban public transport systems help to improve the quality of life and the environment in cities. The rapid development of urban transport solutions has led to a large number of operators entering the market, thus preventing a global optimum. These discrete optimisations, without any articulation between transport operators, avoid the identification of a global optimum. As a result, the inefficient operation of urban public transport systems does not necessarily reduce the environmental cost. To address these challenges, this thesis proposes a methodology associated with mathematical models developing optimisation approaches for multimodal public transport networks, for achieving the best service policy while minimising operation costs in order to satisfy the principle of sustainability, frequently expressed in urban development goals
13

Charnay, Clément. "Enhancing supervised learning with complex aggregate features and context sensitivity." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD025/document.

Abstract:
In this thesis, we study model adaptation in supervised learning. Firstly, we adapt existing learning algorithms to the relational representation of data. Secondly, we adapt learned prediction models to context change. In the relational setting, data is modeled by multiple entities linked with relationships. We handle these relationships using complex aggregate features. We propose stochastic optimization heuristics to include complex aggregates in relational decision trees and Random Forests, and assess their predictive performance on real-world datasets. We adapt prediction models to two kinds of context change. Firstly, we propose an algorithm to tune thresholds on pairwise scoring models to adapt to a change of misclassification costs. Secondly, we reframe numerical attributes with affine transformations to adapt to a change of attribute distribution between a learning and a deployment context. Finally, we extend these transformations to complex aggregates.
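The first kind of context adaptation, re-tuning a decision threshold on a scoring model when misclassification costs change, can be sketched in a few lines; the scores, labels and cost values are invented for illustration:

```python
# Hypothetical validation set: (model score, true label).
DATA = [(0.95, 1), (0.80, 1), (0.70, 0), (0.65, 1), (0.40, 0),
        (0.30, 0), (0.20, 1), (0.10, 0)]

def expected_cost(threshold, c_fp, c_fn):
    """Cost of predicting positive iff score >= threshold."""
    fp = sum(1 for s, y in DATA if s >= threshold and y == 0)
    fn = sum(1 for s, y in DATA if s < threshold and y == 1)
    return c_fp * fp + c_fn * fn

def best_threshold(c_fp, c_fn):
    # Candidates: midpoints between consecutive distinct score values.
    scores = sorted({s for s, _ in DATA} | {0.0, 1.0})
    candidates = [(a + b) / 2 for a, b in zip(scores, scores[1:])]
    return min(candidates, key=lambda t: expected_cost(t, c_fp, c_fn))

print(best_threshold(c_fp=1.0, c_fn=1.0))  # balanced costs
print(best_threshold(c_fp=1.0, c_fn=5.0))  # false negatives now 5x costlier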
14

Olsson, Leif. "Optimisation of forest road investments and the roundwood supply chain /." Umeå : Dept. of Forest Economics, Swedish Univ. of Agricultural Sciences, 2004. http://epsilon.slu.se/s310.pdf.

15

Bester, Albertus J. "A multi-objective approach to incorporate indirect costs into optimisation models of waterborne sewer systems." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6824.

Abstract:
Thesis (MScEng (Civil Engineering))--University of Stellenbosch, 2011.
Waterborne sewage system design and expansion objectives are often focused on minimising initial investment while increasing system capacity and meeting hydraulic requirements. Although these objectives make good sense in the short term, the solutions obtained might not represent the most cost-effective solution over the complete useful life of the system. Maintenance and operation of any system can have a significant impact on the life-cycle cost. The costing process, which should include maintenance and operation criteria in the design of a sewer system, needs to be better understood. Together with increasing public awareness of global warming and environmental degradation, environmental impact, or carbon cost, is also an important factor in decision-making for municipal authorities. This results in a multiplicity of objectives, which can complicate the decisions faced by waterborne sewage utilities. Human settlement and migration are seen as the starting point of expansion problems. An investigation was conducted into current growth-prediction models for municipal areas in order to determine their impact on future planning and to assess similarities between the available models. This information was used as a platform to develop a new method incorporating indirect costs into models for planning waterborne sewage systems. The need to balance competing objectives such as minimum cost, optimal reliability and minimum environmental impact was identified. Different models were developed to define the necessary criteria, thus minimising initial investment, operating cost and environmental impact, while meeting hydraulic constraints. A non-dominated sorting genetic algorithm (NSGA-II), which simulates the evolutionary processes of genetic selection, crossover and mutation, was applied to waterborne sewage system (WSS) scenarios to find a number of suitable solutions that balance all of the given objectives. Stakeholders could in future apply the optimisation results derived in this thesis in the decision-making process to find a solution that best fits their concerns and priorities. Models for each of the above-mentioned objectives were installed in a multi-objective NSGA and applied to a hypothetical baseline sewer system problem. The results show that the triple-objective optimisation approach supplies the best solution to the problem. This approach is currently not applied in practice due to its inherent complexities; however, in the future it may become the norm.
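At the core of the NSGA-II used above is non-dominated sorting of candidate designs by their objective vectors. A minimal sketch over invented (capital cost, operating cost, environmental impact) triples, all to be minimised:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(population):
    """Peel off successive Pareto fronts, as in NSGA-II's ranking phase."""
    fronts, remaining = [], list(population)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical sewer-design candidates: (capex, opex, environmental impact).
designs = [(10.0, 4.0, 7.0), (8.0, 6.0, 6.0), (12.0, 3.0, 9.0),
           (9.0, 5.0, 5.0), (11.0, 5.5, 8.0)]
for rank, front in enumerate(non_dominated_sort(designs), start=1):
    print(f"front {rank}: {front}")
```

The first front contains the designs among which stakeholders would choose according to their own priorities; NSGA-II additionally applies selection, crossover and mutation to evolve better fronts over generations.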
16

Moumen, Chiraz. "Une méthode d'optimisation hybride pour une évaluation robuste de requêtes." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30070/document.

Abstract:
The quality of an execution plan generated by a query optimizer is highly dependent on the quality of the estimates produced by the cost model. Unfortunately, these estimates are often imprecise. A body of work has been done to improve estimate accuracy. However, obtaining accurate estimates remains very challenging, since it requires prior and detailed knowledge of the data properties and run-time characteristics. Motivated by this issue, two main optimization approaches have been proposed. The first relies on single-point estimates to choose an optimal execution plan. At run-time, statistics are collected and compared with the estimates; if an estimation error is detected, a re-optimization is triggered for the rest of the plan. At each invocation, the optimizer uses specific values for the parameters required for cost calculations. This approach can thus induce several plan re-optimizations, resulting in poor performance. To avoid this, a second approach considers the possibility of estimation errors at optimization time. This is modelled by the use of multi-point estimates for each error-prone parameter. The aim is to anticipate the reaction to a possible plan sub-optimality. Methods in this approach seek to generate robust plans, which are able to provide good performance under several run-time conditions. These methods often assume that it is possible to find a robust plan for all expected run-time conditions, an assumption that remains unjustified. Moreover, the majority of these methods keep an execution plan unchanged until termination, which can lead to poor performance if robustness is violated at run-time. Based on these findings, we propose in this thesis a hybrid optimization method that pursues two objectives: the production of robust execution plans, particularly when the uncertainty in the estimates used is high, and the correction of a robustness violation during execution. This method makes use of intervals of estimates around error-prone parameters. It produces execution plans that are likely to perform reasonably well under different run-time conditions, so-called robust plans. Robust plans are then augmented with what we call check-decide operators. These operators collect statistics at run-time and check the robustness of the current plan. If robustness is violated, check-decide operators are able to decide on modifications to the remainder of the plan to correct the violation, without needing to re-invoke the optimizer. The results of performance studies of our method indicate that it provides significant improvements in the robustness of query processing.
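The contrast between single-point and interval-based (robust) plan selection can be illustrated with a toy min-max choice over an interval of selectivity estimates; the cost curves are invented, not the thesis's model:

```python
# Hypothetical plan costs as functions of one uncertain selectivity.
PLANS = {
    "hash_join":   lambda sel: 50 + 400 * sel,    # moderate everywhere
    "merge_join":  lambda sel: 120 + 80 * sel,    # flat cost curve
    "nested_loop": lambda sel: 10 + 2000 * sel,   # great only at tiny sel
}
INTERVAL = [i / 100 for i in range(1, 51)]  # selectivity in [0.01, 0.50]

def worst_case(plan):
    return max(PLANS[plan](s) for s in INTERVAL)

# A single-point optimizer gambles on one estimate; the robust choice
# minimises the worst case over the whole interval of estimates.
single_point = min(PLANS, key=lambda p: PLANS[p](0.02))
robust = min(PLANS, key=worst_case)
print(f"best at sel=0.02: {single_point}; robust over interval: {robust}")
```

The thesis goes further than this static picture: its check-decide operators verify at run-time that observed statistics stay within the interval for which the plan is robust, and repair the remainder of the plan when they do not.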
17

Li, Fengfeng. "Multi-criteria optimization of group replacement schedules for distributed water pipeline assets." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/66195/1/Fengfeng_Li_Thesis.pdf.

Abstract:
This thesis presents a multi-criteria optimisation study of group replacement schedules for water pipelines, a capital-intensive and service-critical decision. A new mathematical model was developed which minimises total replacement costs while maintaining a satisfactory level of service. The research outcomes are expected to enrich the body of knowledge of multi-criteria decision optimisation where group scheduling is required. The model has the potential to optimise replacement planning for other types of linear asset networks, resulting in bottom-line benefits for end users and communities. The results of a real case study show that the new model can effectively reduce total costs and service interruptions.
18

Hammoudan, Zakaria. "Production and delivery integrated scheduling problems in multi-transporter multi-customer supply chain with costs considerations." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0267/document.

Abstract:
The coordination of logistics activities in a supply chain has recently received a lot of attention in operations management research. In a typical supply chain, finished products are produced and either shipped to temporary storage or delivered directly to the customers. To achieve optimal operational performance, the coordination and integration of production, delivery and storage is an important consideration. Recent studies have considered customer storage cost with a fixed transportation cost or a fixed batch size, which is unrealistic. In this thesis, we study the coordination of batching and scheduling activities, which includes the batching of products after the production stage, the scheduling of customer orders that require delivery from the supplier, and the storage of products at the customers. This study focuses on a single-supplier/multi-customer scenario and a single-supplier/multi-transporter scenario. For the first scenario, two models illustrate the transfer of batches to the customers. In the first model, we considered a single supplier and multiple customers with one capacitated transporter available to serve the customers, without vehicle-routing considerations. In the second model, we considered a single supplier and multiple customers with multiple transporters available to serve the customers; different assumptions are proposed and compared in the last chapter. For the second scenario, we studied the case of a single supplier with multiple transporters available to serve a single customer, with models for both homogeneous and heterogeneous vehicles. The total system cost is calculated by summing the delivery and storage costs over the customers and transporters in the system. The numbers of products per batch may be unequal and are limited only by the capacity of the transporter used. The storage cost depends on the customer, and the distance between the supplier and a customer, and hence the delivery cost, depends on the customer's location; in the multi-transporter case, the delivery cost also depends on the transporter used. For each model, we present solution procedures, numerical examples that support the mathematical findings and clarify the problem under study, and performance comparisons among the different findings. Future extensions of this research may involve setup-time and cost constraints in the production stage, and vehicle-routing considerations with inventory in the multi-customer case.
19

Ortmann, Frank Gerald. "Modelling the South African fresh fruit export supply chain." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/2745.

Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2005.
The process of modelling the fruit export infrastructure capacity of South Africa formed part of a larger project called the "Fruit Logistics Infrastructure Capacity Optimisation Study," which was coordinated by the Transportek division of the CSIR in Stellenbosch during the period August 2002 to March 2004. The aim of this project was to create efficiencies for, and enhance the competitiveness of, the South African fruit industry by improved usage of, and investment in, shared logistics infrastructure. After putting the size of the fruit industry into perspective, numerous aspects of the export process are considered in this thesis so as to be able to perform a comprehensive cost analysis of the export of fruit, including the cost of handling, cooling and transportation. The capacities of packhouses, cold stores and terminals are found and presented. This information, combined with fruit export volumes of 2003, then allow an estimation of the current utilisation of the South African ports with respect to fruit export.
20

Lissy, Pierre. "Sur la contrôlabilité et son coût pour quelques équations aux dérivées partielles." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00918763.

Abstract:
In this thesis, we are interested in controllability and its cost for a number of linear and nonlinear partial differential equations arising from physics. The first part of the thesis concerns the null controllability of the three-dimensional Navier-Stokes equation with Dirichlet boundary conditions and a distributed internal control, supported on an open subset of the domain, that acts on only one of the three equations. The proof relies on the return method, together with an original method for the algebraic resolution of differential systems inspired by the work of Gromov. The second part of the thesis concerns the cost of the control, in small time or in the vanishing-viscosity limit, for one-dimensional linear equations. First, we show that in certain cases a link can be made between these two problems; in particular, uniform controllability results for the one-dimensional transport-diffusion equation with constant coefficients, controlled on the left boundary, can be obtained from known results on the control of the heat equation. Second, using the moment method, we study the cost of boundary control in small time for a number of equations whose associated spatial operator is self-adjoint or skew-adjoint with compact resolvent and eigenvalues behaving polynomially. We deduce results for equations of linearised Korteweg-de Vries, fractional diffusion and fractional Schrödinger type.
21

Ouali, Abdelkader. "Méthodes hybrides parallèles pour la résolution de problèmes d'optimisation combinatoire : application au clustering sous contraintes." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC215/document.

Abstract:
Combinatorial optimisation problems have become the target of much scientific research because of their importance in solving academic problems and real problems encountered in engineering and industry. Solving these problems by exact methods is often intractable because of the exorbitant processing time these methods would require to reach the optimal solution(s). In this thesis, we were interested in both the algorithmic context of solving combinatorial problems and the context of modelling these problems. At the algorithmic level, we explored hybrid methods, which excel in their ability to make exact and approximate methods cooperate in order to produce high-quality solutions rapidly. At the modelling level, we worked on the specification and exact resolution of complex pattern set mining problems, in particular by studying scaling issues on large databases. On the one hand, we proposed a first parallelisation of the DGVNS algorithm, called CPDGVNS, which explores in parallel the different clusters of the tree decomposition, sharing the best overall solution on a master-worker model. Two further strategies, called RADGVNS and RSDGVNS, were proposed that improve the frequency of exchanging intermediate solutions between the processes. Experiments carried out on difficult combinatorial problems show the suitability and efficiency of our parallel methods. On the other hand, we proposed a hybrid approach combining Integer Linear Programming (ILP) techniques with pattern mining. Our approach is complete and takes advantage of the general ILP framework (providing a high level of flexibility and expressiveness) and of specialised data mining heuristics (to improve computing time). Beyond the general framework of pattern set mining, two problems were studied in particular: conceptual clustering and the tiling problem. The experiments carried out showed the contribution of our proposal relative to constraint-based approaches and specialised heuristics.
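The master-worker sharing scheme described above is easy to picture with a small sketch. The Python fragment below is illustrative only: the cluster decomposition, the moves and the cost values are placeholders we invented, not the CPDGVNS/RADGVNS code.

```python
# Minimal master-worker sketch of sharing a best-known solution among parallel
# searches over clusters (illustrative only -- not the CPDGVNS implementation).
import multiprocessing as mp
import random

def local_search(cluster_id, seed, best_cost, lock):
    """Hypothetical worker: explores one cluster, publishing any improvement."""
    rng = random.Random(seed)
    cost = rng.uniform(50, 100)                # cost of an initial random solution
    for _ in range(1000):
        candidate = cost - rng.uniform(0, 1)   # a (fake) improving move
        with lock:
            if candidate < best_cost.value:    # share improvements globally
                best_cost.value = candidate
            cost = best_cost.value             # restart from the best-known value

if __name__ == "__main__":
    best = mp.Value("d", float("inf"))         # best cost shared by all workers
    lock = mp.Lock()
    workers = [mp.Process(target=local_search, args=(c, c, best, lock))
               for c in range(4)]              # one worker per cluster
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("best cost found:", best.value)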
22

Lagnoux, Agnès. "Analyse des modeles de branchement avec duplication des trajectoires pour l'étude des événements rares." Toulouse 3, 2006. http://www.theses.fr/2006TOU30231.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis studies the splitting method, first introduced in rare event analysis in order to speed up simulation. In this technique, sample paths are split into R multiple copies at various stages of the simulation. Optimising the algorithm at fixed cost suggests taking the transition probabilities between stages equal to some constant and resampling a number of copies equal to the inverse of that constant, a number which may be non-integer and which, in practice, is unknown and must be estimated. First, we study the sensitivity of the relative error between the probability of interest P(A) and its estimator to the strategy used to make the resampling numbers integers. Then, since in practice the transition probabilities (and hence the optimal resampling numbers) are generally unknown, we propose a two-step algorithm to address this problem. Several numerical applications and comparisons with other models are given.
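The splitting mechanism under study can be made concrete with a toy estimator. The sketch below uses an invented random-walk model, arbitrary levels and a fixed splitting number R; it shows only the mechanics (split each level-crossing path into R copies, multiply the estimated transition probabilities), not the thesis's optimised parameters.

```python
# Toy fixed-splitting estimator for a rare event: P(a random walk climbs past
# the final level). Levels and R are assumed placeholders, not optimised values.
import random

LEVELS = [2.0, 4.0, 6.0]      # intermediate thresholds (assumed)
R = 3                          # number of copies made at each level crossing

def run_until(x, level, steps=200):
    """Advance a +-0.5 random walk; return the state if `level` is reached, else None."""
    for _ in range(steps):
        x += random.choice((-0.5, 0.5))
        if x >= level:
            return x
    return None

def splitting_estimate(n0=10_000):
    paths = [0.0] * n0                        # initial ensemble at level 0
    prob = 1.0
    for level in LEVELS:
        hits = [y for y in (run_until(x, level) for x in paths) if y is not None]
        if not hits:
            return 0.0
        prob *= len(hits) / len(paths)        # estimated transition probability
        paths = [x for x in hits for _ in range(R)]   # split each hit into R copies
    return prob

print("P(A) estimate:", splitting_estimate())
```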
23

Awour, Emmanuel Otieno. "A conceptual model for managing supply networks for simultaneous optimisation in a complex adaptive environment : a case study of the floriculture industry in Kenya." Thesis, 2012. http://hdl.handle.net/10500/6735.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis aims at developing a conceptual model for supply network optimisation in the floriculture industry in Kenya. The literature review gives a detailed account of the evolution of supply chain management, the concept of, and the factors influencing, simultaneous optimisation of supply networks in the floriculture industry. The area of complex adaptive systems is explored and its link with the floriculture industry in Kenya is shown. Current studies on supply chain management are reviewed, particularly the various conceptual frameworks/models developed by researchers around the world. Supply chain performance measurement and the requirements for model building are also covered. The research methodology section sets out the research paradigm and research design and discusses the justification of the approach taken for the study. The target population consisted of all cut flower exporting firms active as at 31st December 2009, as per the information provided by the Horticultural Crops Development Authority (HCDA). This target population, comprising 412 active exporters, was stratified into large, international, local, embedded, unimpeded, and small and medium scale enterprises. Sampling was done through the census technique, in which the entire population was considered. Data analysis is also discussed, including the various tests carried out in relation to the validity and reliability of the data. A detailed presentation of the principal factor analysis results is given, followed by a detailed discussion of the ethical considerations in the conduct of the data collection and research process. Chapter four outlines the factors that are useful to consider when designing a conceptual model for managing supply networks for simultaneous optimisation. Such factors included: country development; quality of inputs; financing; customer responsiveness; and research and development. Also discussed are the factors that contribute to overall organisational performance, which in this case included return on trading investment, overall operational costs, overall productivity growth rates, and outsourcing activities and decisions. The triple bottom line benefits, encompassing environmental audit, financial audit and social audit, are also discussed in relation to country-specific benefits of the floriculture industry in Kenya. The revised conceptual model for simultaneous optimisation of supply networks in the floriculture industry is presented, consisting of: key success factors; financing; information integration; country-specific benefits; transport; and research and development. These are the factors which contribute to enhancing the performance of the floriculture industry in Kenya. The conclusion and recommendations of the study are made on the basis of these factors.
Business Management
D.B.L. (Business Leadership)
24

Rossi, F., F. Manenti, C. Pirola, and Iqbal M. Mujtaba. "A robust sustainable optimization & control strategy (RSOCS) for (fed-)batch processes towards the low-cost reduction of utilities consumption." 2015. http://hdl.handle.net/10454/7964.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The need for the development of clean but still profitable processes, and the study of low-environmental-impact and economically convenient management policies for them, are two challenges for the years to come. This paper gives a first answer to the second of these needs, limited to the area of discontinuous productions. It deals with the development of a robust methodology for the profitable and clean management of (fed-)batch units under uncertainty, which can be referred to as a robust sustainability-oriented model-based optimization & control strategy. This procedure is specifically designed to ensure high process performance along with low-cost reduction of utilities usage in real time, while simultaneously allowing for the effect of any external perturbation. In this way, conventional offline methods for sustainable process optimization can easily be surpassed, since the most suitable management policy, aimed at process sustainability, can be dynamically determined and applied in any operating condition. This is a significant step forward with respect to the options available today for sustainable process management, and drives towards a cleaner and more energy-efficient future. The proposed theoretical framework is validated and tested on a case study based on the well-known fed-batch version of the Williams-Otto process to demonstrate its tangible benefits. The results achieved in this case study are promising and show that the framework is very effective for typical process operation, while only partially effective in the case of unusual/unlikely critical process disturbances. Future work will aim to remove this weakness and further improve the robustness of the algorithm.
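As a rough illustration of a real-time, model-based optimisation & control loop of this general kind, the sketch below re-optimises a feed profile over a receding horizon on a toy surrogate model; the dynamics, cost weights and disturbance model are all assumptions of ours, not the paper's RSOCS formulation.

```python
# Hedged sketch of a receding-horizon optimisation & control loop for a
# (fed-)batch unit -- a toy surrogate model, not the paper's RSOCS algorithm.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, BATCH_STEPS = 0.1, 10, 30

def simulate(x0, feeds, disturbance=0.0):
    """Toy fed-batch model: product fraction grows with feed and saturates."""
    x, xs = x0, []
    for u in feeds:
        x = x + DT * (u * (1.0 - x) + disturbance)   # assumed dynamics
        xs.append(x)
    return np.array(xs)

def stage_cost(feeds, x0, disturbance):
    xs = simulate(x0, feeds, disturbance)
    utility = 0.2 * np.sum(feeds**2) * DT            # utilities-consumption term
    return utility - xs[-1]                          # minus final product value

x, rng = 0.0, np.random.default_rng(0)
for k in range(BATCH_STEPS):
    d = 0.05 * rng.standard_normal()                 # measured perturbation
    res = minimize(stage_cost, x0=np.full(HORIZON, 0.5),
                   args=(x, d), bounds=[(0.0, 1.0)] * HORIZON)
    u_now = res.x[0]                                 # apply the first move only
    x = x + DT * (u_now * (1.0 - x) + d)             # plant step, then re-optimise
print(f"final product fraction: {x:.3f}")
```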
25

Anton, Arun V. "Choice of discount rate and agency cost minimisation in capital budgeting: analytical review and modelling approaches." Thesis, 2019. https://vuir.vu.edu.au/40447/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Capital budgeting is a crucial business function and most large firms use Discounted Cash Flow (DCF) methods, particularly the Net Present Value (NPV) method, which takes into account the time value of money, for evaluating investment projects. Hence, the discount rate plays a major role in the choice of capital investments, and both the selection and the appropriate use of a suitable discount rate are critical to sound capital budgeting. Extensive evidence from the literature indicates that agency problems exist in capital budgeting decisions, both when choosing and when using a discount rate for this process. Managers, as agents, can manipulate the choice of the discount rate to maximise their own benefits. This creates an agency problem that has impacts on efficient capital investment decisions. Most firms believe that using project-specific discount rates may open up incentives for managerial opportunistic behaviour, and hence they prefer a firm-wide single discount rate that might moderate managerial bias. In other words, most firms use their company-wide Weighted Average Cost of Capital (WACC) to evaluate all of their capital projects. However, a company-wide WACC is not a correct approach, in that it may lead to the selection of high-risk, unprofitable projects and hence to inefficient allocation of resources. This creates a need for a systematic and verifiable method to establish project-specific discount rates. If possible, the determination of these project-specific discount rates should be tied to outside market forces that are not under the control of the manager. But the selection of suitable project-specific discount rates alone may not completely minimise agency costs, as managers can manipulate capital budgeting decisions to maximise their benefits. Hence, an appropriate capital budgeting framework that can further minimise agency costs and maximise company value is required. The main aims of the study are to develop a process to select appropriate project-specific discount rates that minimise agency costs, and to develop a better capital budgeting framework to further minimise agency costs in capital budgeting. Such a framework should provide management incentives to achieve efficient capital budgeting outcomes leading to enhanced company value.
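The core NPV argument is easy to make concrete. In the hypothetical example below (all figures invented), a risky project is accepted under an 8% firm-wide WACC but rejected under a 14% project-specific, risk-adjusted rate, which is exactly the misallocation the thesis attributes to company-wide discount rates.

```python
# Illustrative NPV comparison: a firm-wide WACC can accept a risky project that
# a project-specific (risk-adjusted) rate would reject. Figures are invented.
def npv(rate, cashflows):
    """Net present value of cashflows, where cashflows[0] is the t=0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-1000.0, 320.0, 320.0, 320.0, 320.0]   # hypothetical risky project

print(f"NPV at  8% firm-wide WACC:        {npv(0.08, project):8.2f}")  # positive: accept
print(f"NPV at 14% project-specific rate: {npv(0.14, project):8.2f}")  # negative: reject
```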
26

Hewitson, Christopher Michael. "Optimisation of water distribution systems using genetic algorithms for hydraulic and water quality issues / by Christopher Michael Hewitson." Thesis, 1999. http://hdl.handle.net/2440/19536.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Develops a framework balancing water quality costs, resulting from waterborne disease, disinfection by-product exposure and aesthetic concerns, against hydraulic costs, which include pipes, pumps and tanks. The genetic algorithms developed successfully obtained the current optimal hydraulic solution before the model was adapted to incorporate water quality issues (a toy genetic-algorithm sketch follows this entry).
Thesis (Ph.D.) -- University of Adelaide, Dept. of Civil and Environmental Engineering, 2000
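A minimal sketch of a genetic algorithm of this flavour, balancing pipe cost against a capacity penalty standing in for the hydraulic/water-quality terms, is given below; the diameters, costs and penalty are invented placeholders, not the thesis's network model.

```python
# Toy genetic algorithm for pipe sizing: minimise pipe cost plus a penalty for
# an undersized network. All numbers are invented placeholders.
import random

DIAMETERS = [100, 150, 200, 250, 300]     # candidate pipe diameters (mm)
N_PIPES, POP, GENS = 8, 40, 60

def fitness(design):
    pipe_cost = sum(0.02 * d for d in design)          # cost grows with size
    capacity = sum(design) / N_PIPES                   # crude capacity proxy
    penalty = max(0.0, 180.0 - capacity) * 5.0         # undersized-network penalty
    return pipe_cost + penalty                         # total to minimise

def evolve():
    pop = [[random.choice(DIAMETERS) for _ in range(N_PIPES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        parents = pop[: POP // 2]                      # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PIPES)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                  # mutation
                child[random.randrange(N_PIPES)] = random.choice(DIAMETERS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print("best design:", best, "cost:", round(fitness(best), 2))
```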
27

Hewitson, Christopher Michael. "Optimisation of water distribution systems using genetic algorithms for hydraulic and water quality issues / by Christopher Michael Hewitson." 1999. http://hdl.handle.net/2440/19536.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Corrigenda pasted onto front end paper.
One folded col. map in pocket on back endpaper.
Bibliography: leaves 348-368.
xx, 368 leaves : ill. (some col.), maps (some col.) ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
Develops a framework balancing water quality costs, resulting from waterborne disease, disinfection by-product exposure and aesthetic concerns, against hydraulic costs, which include pipes, pumps and tanks. The genetic algorithms developed successfully obtained the current optimal hydraulic solution before the model was adapted to incorporate water quality issues.
Thesis (Ph.D.)--University of Adelaide, Dept. of Civil and Environmental Engineering, 2000
28

Van, Graan Sebastian Jan. "Network configuration improvement and design aid using artificial intelligence." Diss., 2008. http://hdl.handle.net/2263/27625.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This dissertation investigates the development of new Global System for Mobile communications (GSM) network improvement algorithms used to solve the nondeterministic polynomial-time hard (NP-hard) problem of assigning cells to switches. The departure of this project from previous projects lies in the aspect of the GSM network being optimised: most previous projects tried to minimise the signalling load on the network, whereas the main aim in this project is to reduce operational expenditure as much as possible while still adhering to network element constraints. This is achieved by generating new network configurations with a reduced transmission cost. Since assigning cells to switches in cellular mobile networks is an NP-hard problem, exact methods cannot be used to solve it for real-size networks. In this context, heuristic approaches, evolutionary search algorithms and clustering techniques can, however, be used. This dissertation presents a comprehensive and comparative study of the above-mentioned categories of search techniques adapted specifically for GSM network improvement. The evolutionary search technique evaluated is a genetic algorithm (GA), while the unsupervised learning technique is a Gaussian mixture model (GMM); a toy clustering sketch in this spirit follows the entry below. A number of custom-developed heuristic search techniques with differing goals were also experimented with. The implementation of these algorithms was tested in order to measure the quality of the solutions. Results obtained confirmed the ability of the search techniques to produce network configurations with a reduced operational expenditure while still adhering to network element constraints. The best results were found using the Gaussian mixture model, where savings of up to 17% were achieved. The heuristic searches produced promising results in the form of the characteristics they portray, for example load balancing. Due to the massive problem space and a suboptimal chromosome representation, the genetic algorithm struggled to find high-quality viable solutions. The objective of reducing network cost was achieved by performing cell-to-switch optimisation taking traffic distributions, transmission costs and network element constraints into account. These criteria cannot be divorced from each other, since they are all interdependent; omitting any one of them will lead to inefficient and infeasible configurations. Results obtained further indicated that the search space consists of two components, namely traffic and transmission cost. When optimising, it is very important to consider both components simultaneously; if not, infeasible or suboptimal solutions are generated. It was also found that pre-processing has a major impact on the cluster-forming ability of the GMM. Depending on how the pre-processing technique is set up, it is possible to bias the cluster-formation process in such a way that either transmission cost savings or a reduction in inter base station controller/switching centre traffic volume is given preference. Two of the difficult questions to answer when performing network capacity expansions are where to install the remote base station controllers (BSCs) and how to alter the existing BSC boundaries to accommodate the new BSCs being introduced. Using the techniques developed in this dissertation, these questions can now be answered with confidence.
Dissertation (MEng)--University of Pretoria, 2008.
Electrical, Electronic and Computer Engineering
unrestricted
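As a hedged illustration of the GMM-based clustering mentioned in the abstract above, the sketch below clusters synthetic cell coordinates with scikit-learn's GaussianMixture and scores the result with a crude distance-based transmission-cost proxy; the traffic weights and network element constraints that the dissertation insists must be modelled are deliberately omitted here.

```python
# Hedged sketch: clustering cell sites with a Gaussian mixture model as a first
# cut at cell-to-switch assignment (illustrative; the dissertation's cost model,
# traffic data and constraints are not reproduced here).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
cells = rng.uniform(0, 100, size=(200, 2))        # synthetic cell coordinates (km)
n_switches = 4

gmm = GaussianMixture(n_components=n_switches, random_state=1).fit(cells)
assignment = gmm.predict(cells)                    # cluster index = serving switch

# A crude transmission-cost proxy: distance from each cell to its cluster mean.
cost = sum(np.linalg.norm(cells[assignment == k] - gmm.means_[k], axis=1).sum()
           for k in range(n_switches))
print(f"approximate transmission cost: {cost:.1f} km of links")
```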
