A selection of scholarly literature on the topic "Cost model optimisation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Cost model optimisation".

Next to every entry in the list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, where these details are provided in the source's metadata.

Journal articles on the topic "Cost model optimisation":

1

Sahay, S. S., and K. Mitra. "Cost Model Based Optimisation Of Carburising Operation." Surface Engineering 20, no. 5 (October 2004): 379–84. http://dx.doi.org/10.1179/026708404x1143.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Yeo, S. H., B. K. A. Ngoi, and H. Chen. "A cost-tolerance model for process sequence optimisation." International Journal of Advanced Manufacturing Technology 12, no. 6 (November 1996): 423–31. http://dx.doi.org/10.1007/bf01186931.

3

Khalil, Jean, Sameh M. Saad, and Nabil Gindy. "An integrated cost optimisation maintenance model for industrial equipment." Journal of Quality in Maintenance Engineering 15, no. 1 (March 27, 2009): 106–18. http://dx.doi.org/10.1108/13552510910943912.

4

Pati, Rupesh Kumar, Prem Vrat, and Pradeep Kumar. "Cost optimisation model in recycled waste reverse logistics system." International Journal of Business Performance Management 6, no. 3/4 (2004): 245. http://dx.doi.org/10.1504/ijbpm.2004.005631.

5

Glavan, Miha, Dejan Gradišar, Serena Invitto, Iztok Humar, Đani Juričić, Cesare Pianese, and Damir Vrančić. "Cost optimisation of supermarket refrigeration system with hybrid model." Applied Thermal Engineering 103 (June 2016): 56–66. http://dx.doi.org/10.1016/j.applthermaleng.2016.03.177.

6

Delelegn, S. W., A. Pathirana, B. Gersonius, A. G. Adeogun, and K. Vairavamoorthy. "Multi-objective optimisation of cost–benefit of urban flood management using a 1D2D coupled model." Water Science and Technology 63, no. 5 (March 1, 2011): 1053–59. http://dx.doi.org/10.2166/wst.2011.290.

Abstract:
This paper presents a multi-objective optimisation (MOO) tool for urban drainage management that is based on a 1D2D coupled model of SWMM5 (1D sub-surface flow model) and BreZo (2D surface flow model). This coupled model is linked with NSGA-II, which is an Evolutionary Algorithm-based optimiser. Previously the combination of a surface/sub-surface flow model and evolutionary optimisation has been considered to be infeasible due to the computational demands. The 1D2D coupled model used here shows a computational efficiency that is acceptable for optimisation. This technological advance is the result of the application of a triangular irregular discretisation process and an explicit finite volume solver in the 2D surface flow model. Besides that, OpenMP based parallelisation was employed at optimiser level to further improve the computational speed of the MOO tool. The MOO tool has been applied to an existing sewer network in West Garforth, UK. This application demonstrates the advantages of using multi-objective optimisation by providing an easy-to-comprehend Pareto-optimal front (relating investment cost to expected flood damage) that could be used for decision making processes, without repeatedly going through the modelling–optimisation stage.
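The Pareto-optimal front mentioned in this abstract trades investment cost against expected flood damage. As an illustration only (not the authors' code), the non-dominated filtering at the heart of NSGA-II-style optimisers can be sketched in a few lines; the (cost, damage) pairs below are hypothetical:

```python
def pareto_front(points):
    """Return the non-dominated subset of (cost, damage) pairs,
    where both objectives are to be minimised."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both objectives
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (investment cost, expected flood damage) evaluations:
solutions = [(10, 9.0), (12, 7.5), (12, 8.0), (15, 7.4), (20, 3.0), (25, 2.9)]
print(pareto_front(solutions))
# → [(10, 9.0), (12, 7.5), (15, 7.4), (20, 3.0), (25, 2.9)]
```

The resulting front is exactly the easy-to-comprehend cost-versus-damage curve the abstract describes; NSGA-II adds evolutionary search and crowding-distance selection on top of this dominance test.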
7

March, A., K. Willcox, and Q. Wang. "Gradient-based multifidelity optimisation for aircraft design using Bayesian model calibration." Aeronautical Journal 115, no. 1174 (December 2011): 729–38. http://dx.doi.org/10.1017/s0001924000006473.

Abstract:
Optimisation of complex systems frequently requires evaluating a computationally expensive high-fidelity function to estimate a system metric of interest. Although design sensitivities may be available through either direct or adjoint methods, the use of formal optimisation methods may remain too costly. Incorporating low-fidelity performance estimates can substantially reduce the cost of the high-fidelity optimisation. In this paper we present a provably convergent multifidelity optimisation method that uses Cokriging Bayesian model calibration and first-order consistent trust regions. The technique is compared with a single-fidelity sequential quadratic programming method and a conventional first-order trust-region method on both a two-dimensional structural optimisation and an aerofoil design problem. In both problems adjoint formulations are used to provide inexpensive sensitivity information.
8

Rybakov, Dmitriy S. "Total cost optimisation model for logistics systems of trading companies." International Journal of Logistics Systems and Management 27, no. 3 (2017): 318. http://dx.doi.org/10.1504/ijlsm.2017.084469.

9

Rybakov, Dmitriy S. "Total cost optimisation model for logistics systems of trading companies." International Journal of Logistics Systems and Management 27, no. 3 (2017): 318. http://dx.doi.org/10.1504/ijlsm.2017.10005118.

10

Fafandjel, Nikša, Albert Zamarin, and Marko Hadjina. "Shipyard production cost structure optimisation model related to product type." International Journal of Production Research 48, no. 5 (January 28, 2009): 1479–91. http://dx.doi.org/10.1080/00207540802609665.


Dissertations on the topic "Cost model optimisation":

1

Wang, Mengyu. "Model-based Optimisation of Mixed Refrigerant LNG Processes." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17387.

Abstract:
Natural gas liquefaction processes are energy and cost intensive. This thesis pursues the optimisation of propane precooled mixed refrigerant (C3MR) processes considering variations in upstream gas well conditions, in order to maximise gas well life. Four objective functions were selected for the design optimisation of the C3MR and dual mixed refrigerant (DMR) processes: 1) total shaft work (W), 2) total capital investment, 3) total annualised cost, and 4) total capital cost of both compressors and the main cryogenic heat exchanger (MCHE). Optimisation results show that objective function 4 is more suitable than the other objective functions for reducing both W and UA (the MCHE design parameter). This leads to a 15% reduction in specific power for C3MR and 27% for DMR, while achieving lower UA values relative to the baseline. The operation optimisation of the C3MR process and its split propane version (C3MR-SP) was performed using four objective functions: 1) total shaft work, 2-3) two different exergy efficiency expressions, and 4) operating expenditure (OPEX). Objective function 3 results in the lowest specific shaft work, 1469 MJ/tonne-LNG. For C3MR-SP, however, the lowest specific shaft work is found under objective function 1. A comparison of optimisation results across literature studies is impractical due to dissimilar process conditions, feed gas conditions, product quality, and equipment size. A sensitivity analysis highlights the effect of feed gas conditions on the performance of the C3MR. For instance, as LNG production decreases from 3 MTPA to 2.4 MTPA over time, the specific OPEX increases from $128/tonne-LNG to $154/tonne-LNG. A subsequent study focused on the energy benefits of two configurations for integrating a natural gas liquids (NGL) recovery unit with C3MR. An NGL recovery unit integrated within C3MR shows a 0.74% increase in energy consumption as the methane concentration of the feed gas decreases, whereas a frontend NGL recovery unit shows only a 0.18% decrease.
2

Viduto, Valentina. "A risk assessment and optimisation model for minimising network security risk and cost." Thesis, University of Bedfordshire, 2012. http://hdl.handle.net/10547/270440.

Abstract:
Network security risk analysis has received great attention within the scientific community, due to the current proliferation of network attacks and threats. Although considerable effort has been placed on improving security best practices, insufficient effort has been expended on seeking to understand the relationship between risk-related variables and objectives related to cost-effective network security decisions. This thesis seeks to improve the body of knowledge focusing on the trade-offs between financial costs and risk while analysing the impact an identified vulnerability may have on confidentiality, integrity and availability (CIA). Both security best practices and risk assessment methodologies have been extensively investigated to give a clear picture of the main limitations in the area of risk analysis. The work begins by analysing information visualisation techniques, which are used to build attack scenarios and identify additional threats and vulnerabilities. Special attention is paid to attack graphs, which have been used as a base to design a novel visualisation technique, referred to as an Onion Skin Layered Technique (OSLT), used to improve system knowledge as well as for threat identification. By analysing a list of threats and vulnerabilities during the first risk assessment stages, the work focuses on the development of a novel Risk Assessment and Optimisation Model (RAOM), which expands the knowledge of risk analysis by formulating a multi-objective optimisation problem, where objectives such as cost and risk are to be minimised. The optimisation routine is developed so as to accommodate conflicting objectives and to provide the human decision maker with an optimum solution set. The aim is to minimise the cost of security countermeasures without increasing the risk of a vulnerability being exploited by a threat and resulting in some impact on CIA.
Due to the multi-objective nature of the problem, a performance comparison between multi-objective Tabu Search (MOTS) methods, Exhaustive Search and a multi-objective Genetic Algorithm (MOGA) has also been carried out. Finally, extensive experimentation has been carried out with both artificial and real-world problem data (taken from the case study) to show that the method is capable of delivering solutions for real-world problem data sets.
3

Burnett, Robert Carlisle. "A trade-off model between cost and reliability during the design phase of software development." Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/2104.

Abstract:
This work proposes a method for estimating the development cost of a software system with a modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set in order to meet the overall required reliability of the system. Consequently, the individual cost estimates for each module and the overall cost of the software system are linked to the overall required reliability. Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this will enable a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target. The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase. That is, a higher level of desired reliability will require more testing effort and will therefore cost more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification and historical data. Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula to calculate an estimate of the overall system reliability is established. Using this formula, a procedure to allocate the reliability requirement for each module is derived using a minimisation process, which takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios for cost and the overall required reliability.
The foremost application of the outcome of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.
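As a hedged illustration of the kind of Markov-based reliability formula the abstract describes (following Cheung's classical user-oriented reliability model, which may differ from the thesis's exact formulation), system reliability can be estimated from per-module reliabilities weighted by the expected number of module executions; the transition probabilities and reliabilities below are hypothetical:

```python
def expected_visits(P, start, n_steps=200):
    """Expected number of executions of each module in an absorbing
    Markov usage model: v = s (I + P + P^2 + ...), approximated by
    summing the state distribution over n_steps transitions."""
    k = len(P)
    dist = [1.0 if i == start else 0.0 for i in range(k)]
    visits = dist[:]
    for _ in range(n_steps):
        dist = [sum(dist[i] * P[i][j] for i in range(k)) for j in range(k)]
        visits = [visits[j] + dist[j] for j in range(k)]
    return visits

def system_reliability(module_rel, visits):
    """Cheung-style estimate: each visit to module i succeeds with
    probability r_i, so R = prod(r_i ** v_i)."""
    rel = 1.0
    for r, v in zip(module_rel, visits):
        rel *= r ** v
    return rel

# Hypothetical 3-module usage profile; row sums are < 1, the remainder
# being the probability of terminating after that module.
P = [[0.0, 0.6, 0.2],
     [0.0, 0.0, 0.7],
     [0.1, 0.0, 0.0]]
v = expected_visits(P, start=0)
print(system_reliability([0.999, 0.995, 0.99], v))  # ≈ 0.989
```

Inverting such a formula, as the thesis does, means choosing per-module reliability targets (and hence testing budgets) so that the product meets the stipulated overall target at minimum cost.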
4

Wang, Lina. "A multi-disciplinary optimisation model for passenger aircraft wing structures with manufacturing and cost considerations." Thesis, University of Salford, 2000. http://usir.salford.ac.uk/26957/.

Abstract:
In traditional aircraft wing structural design, the emphasis has been on pursuing the minimum weight or improved performance. Manufacturing complexity or cost assessments are rarely considered, because it is usually assumed that the minimum weight design is also the minimum cost design. However, experience from industry has shown that this is not necessarily the case. It has been realised that in cases where no manufacturing constraints are imposed, the extra machining cost can erode the advantages of the reduced weight. Manufacturing cost includes material cost and machining cost; whilst reducing weight can reduce the material cost, if the manufacturing complexity increases greatly as a result, the overall cost may not go down. Indeed, if the manufacturing complexity is not checked, the machining cost could increase by more than the amount by which the material cost reduces. To enable the structural manufacturing complexity to be controlled, manufacturing constraints are established in this thesis and integrated into the optimisation of the aircraft wing structural design. As far as manufacturing complexity is concerned, attention has been paid to both 3-axis and 5-axis machining. The final designs of optimisations with manufacturing constraints prove the efficiency of these constraints in guiding the design in a manufacturing-feasible direction.
5

Low, Wei Zhe. "Towards cost model-driven log-based business process improvement." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/97727/1/Wei%20Zhe_Low_Thesis.pdf.

Abstract:
This doctoral study focused on analysing business process execution histories to initiate evidence-based business process improvement activities. The researcher developed techniques to explore and visualise better ways of executing a business process, as well as to analyse the impact of the changes towards cost reduction. This research enables organisations to gain a better understanding of how the same business process can be performed in a more efficient manner, taking into consideration the trade-offs between processing time, cost, and employee utilisation.
6

Moberg, Pontus, and Filip Svensson. "Cost Optimisation through Statistical Quality Control : A case study on the plastic industry." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21922.

Abstract:
Background. In 1924, Shewhart was the first to describe the possibilities that come with having a statistically robust process. Since his discovery, the importance of a robust process has become more apparent, together with the consequences of an unstable one. A firm with a manufacturing process that is out of statistical control tends to waste money, increase risks, and provide an uncertain quality to its customers. The framework of Statistical Quality Control has been developed since its founding, and today it is a well-established tool used in several industries with successful results. When it was first conceived, complicated calculations had to be performed, and they were performed manually. With digitalisation, the quality tools can be used in real time, providing high-precision information on the quality of the product. Despite this, not all firms or industries have started using these tools as of today. The costs that occur in relation to quality, either as a consequence of maintaining good quality or arising from poor quality, are called the Cost of Quality. These are often displayed through one of several available cost models. In this thesis, we have created a cost model heavily inspired by the P-A-F model. Several earlier studies have shown noticeable results from using SPC, COQ or a combination of the two. Objectives. The objective of this study is to determine whether cost optimisation could be achieved through SQC implementation; the cost optimisation follows from addressing the unstable process and from the new way of thinking that comes with SQC. Further, it aims to explore the relationship between cost optimisation and SQC, adding a layer of complexity and understanding to the spread of statistical quality tools and their importance for several industries. This will contribute to tightening the bonds between production economics, statistical tools and quality management even further.
Methods. This study made use of two closely related methodologies, combining SPC with Cost of Quality. The combination of the two was intended to demonstrate a possible cost reduction through stabilising the process. The cost reduction was displayed using an optimisation model based on the P-A-F model (Prevention, Appraisal, External Failure and Internal Failure), further developed by adding a fifth parameter for optimising materials (OM). Regarding whether the process was in control or not, we focused on the thickness of the PVC floor; 1008 data points over three weeks were retrieved from the production line, and by analysing these, a conclusion on whether the process was in control could be drawn. Results. Firstly, none of the three examined weeks was found to be in statistical control, and therefore neither was the total sample. Under the assumption of the firm achieving 100% statistical control over its production process, a possible cost reduction of 874 416 SEK yearly was found. Conclusions. This study has shown that by focusing on stabilising the production process and achieving control over quality-related costs, significant yearly savings can be achieved. Furthermore, an annual cost reduction was found by optimising the usage of materials, relocating the assurance of thickness variation from post-production to during production.
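The in-control check described here can be illustrated with a Shewhart individuals chart, a standard SPC tool (the thesis may use different chart types and constants). The thickness readings below are hypothetical, and the 3-sigma limits are estimated from the mean moving range using the usual d2 = 1.128 constant:

```python
def individuals_chart_limits(samples):
    """Shewhart individuals (I-MR) chart: centre line and 3-sigma limits,
    with sigma estimated as mean moving range / d2 (d2 = 1.128 for n=2)."""
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(samples):
    """Points breaching the 3-sigma control limits."""
    lcl, _, ucl = individuals_chart_limits(samples)
    return [x for x in samples if x < lcl or x > ucl]

# Hypothetical PVC-floor thickness readings in mm:
thickness = [2.01, 2.03, 1.99, 2.02, 2.00, 2.04, 1.98, 2.30, 2.01, 2.02]
print(out_of_control(thickness))  # → [2.3]
```

A process is declared out of statistical control when any point falls outside these limits (or violates run rules); the study's conclusion rests on applying such tests to the 1008 production readings.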
7

Liu, Tianxao. "Proposition d'un cadre générique d'optimisation de requêtes dans les environnements hétérogènes et répartis." Thesis, Cergy-Pontoise, 2011. http://www.theses.fr/2011CERG0513.

Abstract:
This thesis proposes a generic framework for query optimisation in heterogeneous and distributed environments. We propose a generic source description model (GSD), which allows any type of information related to query processing and optimisation to be described. In particular, GSD captures the cost information needed to calculate the costs of alternative execution plans. Our generic optimisation framework provides a set of unitary functions used to perform optimisation by applying different search strategies. Our experimental results show the accuracy of the cost calculation based on the GSD model, and the flexibility of our generic framework when the search strategy is changed. The framework has been implemented and integrated into a data integration product (DVS) marketed by Xcalia - Progress Software Corporation. For queries containing many inter-site joins over large data sources, the time to compute the optimal plan is of the order of 2 seconds, and the execution time of the optimal plan is reduced by a factor of 28 compared with the initial, non-optimised plan.
8

Belghoul, Abdeslem. "Optimizing Communication Cost in Distributed Query Processing." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC025/document.

Abstract:
In this thesis, we take a complementary look at the problem of optimising the time for communicating query results in distributed query processing, by investigating the relationship between the communication time and the middleware configuration. Indeed, the middleware determines, among other things, how data is divided into batches and messages before being communicated over the network. Concretely, we focus on the research question: given a query Q and a network environment, what is the best middleware configuration that minimises the time for transferring the query result over the network? To the best of our knowledge, the database research community does not have well-established strategies for middleware tuning. We first present an intensive experimental study that emphasises the crucial impact of middleware configuration on the time for communicating query results. We focus on two middleware parameters that we empirically identified as having an important influence on the communication time: (i) the fetch size F (i.e., the number of tuples in a batch that is communicated at once to an application consuming the data) and (ii) the message size M (i.e., the size in bytes of the middleware buffer, which corresponds to the amount of data that can be communicated at once from the middleware to the network layer; a batch of F tuples can be communicated via one or several messages of M bytes). We then describe a cost model for estimating the communication time, based on how data is communicated between computation nodes. Precisely, our cost model rests on two crucial observations: (i) batches and messages are communicated differently over the network: batches are communicated synchronously, whereas messages in a batch are communicated in a pipeline (asynchronously), and (ii) due to network latency, it is more expensive to communicate the first message in a batch than any other message in the same batch.
We propose an effective strategy for calibrating the network-dependent parameters of the communication-time estimation function, i.e., the costs of the first and non-first messages in their batch. Finally, we develop an optimisation algorithm to compute the values of the middleware parameters F and M that minimise the communication time. The proposed algorithm quickly finds (in a small fraction of a second) values of F and M that strike a good trade-off between low resource consumption and low communication time. The proposed approach has been evaluated using a dataset from an application in Astronomy.
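A minimal sketch of the shape of such a cost model, assuming the two observations above (synchronous batches, pipelined messages, first message dearer). The per-message costs and candidate F/M values are hypothetical, and the thesis's calibration and optimisation procedure is more elaborate than this exhaustive search:

```python
import math

def comm_time(n_tuples, tuple_bytes, F, M, c_first, c_next):
    """Estimated transfer time: batches of F tuples are sent synchronously;
    within a batch, messages of M bytes are pipelined, and the first
    message of each batch pays the round-trip latency (c_first > c_next)."""
    batches = math.ceil(n_tuples / F)
    msgs_per_batch = max(1, math.ceil(F * tuple_bytes / M))
    return batches * (c_first + (msgs_per_batch - 1) * c_next)

def best_config(n_tuples, tuple_bytes, F_opts, M_opts, c_first, c_next):
    """Exhaustive search over candidate (F, M) configurations."""
    return min((comm_time(n_tuples, tuple_bytes, F, M, c_first, c_next), F, M)
               for F in F_opts for M in M_opts)

# Hypothetical calibration: first message 5 ms, pipelined message 1 ms.
t, F, M = best_config(100_000, 200, [100, 1_000, 10_000],
                      [8_192, 65_536], c_first=5.0, c_next=1.0)
print(F, M, t)  # → 10000 65536 350.0
```

The example shows why tuning matters: larger batches amortise the per-batch latency penalty, while larger messages reduce the number of pipelined sends per batch, at the price of more buffering.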
9

Verrecht, Bart. "Optimisation of a hollow fibre membrane bioreactor for water reuse." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/6779.

Abstract:
Over the last two decades, implementation of membrane bioreactors (MBRs) has increased due to their superior effluent quality and low plant footprint. However, they are still viewed as a high-cost option, both with regards to capital and operating expenditure (capex and opex). The present thesis extends the understanding of the impact of design and operational parameters of membrane bioreactors on energy demand, and ultimately whole life cost. A simple heuristic aeration model based on a general algorithm for flux vs. aeration shows the benefits of adjusting the membrane aeration intensity to the hydraulic load. It is experimentally demonstrated that a lower aeration demand is required for sustainable operation when comparing 10:30 to continuous aeration, with associated energy savings of up to 75%, without being penalised in terms of the fouling rate. The applicability of activated sludge modelling (ASM) to MBRs is verified on a community-scale MBR, resulting in accurate predictions of the dynamic nutrient profile. Lastly, a methodology is proposed to optimise the energy consumption by linking the biological model with empirical correlations for energy demand, taking into account the impact of high MLSS concentrations on oxygen transfer. The determining factors for costing of MBRs differ significantly depending on the size of the plant. Operational cost reduction in small MBRs relies on process robustness with minimal manual intervention to suppress labour costs, while energy consumption, mainly for aeration, is the major contributor to opex for a large MBR. A cost sensitivity analysis shows that other main factors influencing the cost of a large MBR, both in terms of capex and opex, are membrane costs and replacement interval, future trends in energy prices, sustainable flux, and the average plant utilisation, which depends on the amount of contingency built in to cope with changes in the feed flow.
10

Antomarchi, Anne-Lise. "Conception et pilotage d'un atelier intégrant la fabrication additive." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC035/document.

Abstract:
Additive manufacturing is a booming field. However, manufacturers still have questions about the use of this process for mass production. The problem addressed in this research is: how can the laser powder-bed fusion process be made industrially viable? Our work addresses the design and management of workshops integrating additive manufacturing, and the complete process of obtaining a part, at the three levels of decision-making: strategic, tactical and operational. At the strategic level, strong decisions on investment, machine selection and organisational choices must be taken, with important economic stakes. The aim is to define a multicriteria optimisation method for the modular design of a production system integrating additive manufacturing in the presence of uncertain data, optimal over both the long term and the short term. From a tactical point of view, not all parts are necessarily relevant candidates for additive manufacturing. In this work, we developed a decision-support tool that evaluates the relevance of additive manufacturing for obtaining parts within a global cost approach. At the operational level, we propose a tool based on flow simulation that converts customer orders into production orders and schedules them so as to guarantee the efficiency of the workshop. This research is developed in collaboration with industrial partners: AddUp, MBDA and Dassault, who feed our work and allow us to confront our tools with industrial reality.

Books on the topic "Cost model optimisation":

1

Dolgui, Alexandre, Jerzy Soldek, and Oleg Zaikin, eds. Supply Chain Optimisation: Product/Process Design, Facility Location and Flow Control (Applied Optimization). Springer, 2004.

2

Dolgui, Alexandre, Oleg Zaikin, and Jerzy Soldek. Supply Chain Optimisation: Product/Process Design, Facility Location and Flow Control. Springer London, Limited, 2006.

3

Dolgui, Alexandre, Oleg Zaikin, and Jerzy Soldek. Supply Chain Optimisation: Product/Process Design, Facility Location and Flow Control. Springer, 2014.


Book chapters on the topic "Cost model optimisation":

1

Ashraf, R. J., Jonathan D. Nixon, and J. Brusey. "Multi-objective Optimisation of a Wastewater Anaerobic Digestion System." In Springer Proceedings in Energy, 265–74. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30960-1_25.

Abstract:
This paper looks at multi-objective optimisation of a wastewater anaerobic digestion (AD) system, with the model demonstrated for a case-study plant. Anaerobic Digestion Model No. 1 (ADM1) was used to predict biogas yields from the digester. Interviews with plant owners and plant data were used to identify the objective functions and decision variables. The decision variables were defined to be the substrate feeding rate for each of the digesters and the ratio in which biogas is split between a combined heat and power (CHP) plant and a biogas upgrading unit (BUU). The objectives set were to maximise the overall substrate feeding rate through the AD plant, maximise the overall energy output and minimise the running cost of the plant. Results from the optimisation study showed that the amount of sludge processed through the AD plant increased by 17.7% and the running cost of the plant reduced by 6.2%. These results demonstrate how the performance of AD plants can be significantly improved by multi-objective optimisation techniques.
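The optimisation setup this abstract describes — two decision variables (a substrate feed rate and a CHP/BUU biogas split) traded off against throughput, energy, and running-cost objectives — can be sketched as a weighted-sum scalarisation over a coarse grid. The objective functions and every coefficient below are invented placeholders, not the ADM1 model or the plant data used in the paper.

```python
import itertools

# Placeholder objectives (NOT ADM1): feed is the substrate feeding rate,
# split is the fraction of biogas routed to the CHP (the rest goes to the BUU).
def energy_output(feed, split):
    biogas = 10.0 * feed - 0.3 * feed ** 2          # assumed diminishing returns
    return biogas * (0.35 * split + 0.55 * (1 - split))  # assumed per-route yields

def running_cost(feed, split):
    return 2.0 * feed + 5.0 * (1 - split)           # assumed cost structure

def scalarised(feed, split, w=(0.3, 0.5, 0.2)):
    # weighted sum: maximise feed and energy, minimise cost
    return w[0] * feed + w[1] * energy_output(feed, split) - w[2] * running_cost(feed, split)

# coarse grid search over the two decision variables
grid = itertools.product([f * 0.5 for f in range(1, 31)],
                         [s * 0.1 for s in range(0, 11)])
best = max(grid, key=lambda fs: scalarised(*fs))
print(best)
```

A real study would replace the placeholders with the ADM1 simulation and use a proper multi-objective solver rather than a fixed weight vector.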
2

Huang, G., Y. Zhuge, T. Benn, and Y. Liu. "Optimisation of Limestone Calcined Clay Cement Based on Response Surface Method." In Lecture Notes in Civil Engineering, 103–12. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3330-3_13.

Abstract:
Limestone calcined clay cement (LC3) is a new type of cement that contains Portland cement, calcined clay, and limestone. Compared with traditional cement clinker, LC3 reduces CO2 emissions by up to 40%, and is a promising technology for the cement industry to achieve its emission target. We used a numerical approach to predict the optimum composition of LC3 mortar. The experiments were performed using central composite rotational design under the response surface methodology. The method combined the design of mixtures and multi-response statistical optimisation, in which the 28-day compressive strength was maximised while the CO2 emissions and materials cost were simultaneously minimised. The model, with a nonsignificant lack of fit and a high coefficient of determination (R2), revealed a good fit and the adequacy of the quadratic regression model for predicting the performance of LC3 mixtures. An optimum LC3 mixture can be achieved with 43.4% general purpose cement, 34.16% calcined clay, 20.6% limestone and 1.94% gypsum.
3

Wilkie, Bernard, Karla Muñoz Esquivel, and Jamie Roche. "An LSTM Framework for the Effective Screening of Dementia for Deployment on Edge Devices." In Communications in Computer and Information Science, 21–37. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59080-1_2.

Abstract:
Dementia is a group of neurodegenerative disorders that affect 1 in 4 people over the age of 80 and can greatly reduce the quality of life of those afflicted. Alzheimer's disease (AD) is the most common variation, accounting for roughly 60% of cases. The current financial cost of these diseases is an estimated $1.3 trillion per year. While treatments are available to help patients maintain their mental function and slow disease progression, many of those with AD are asymptomatic in the early stages, resulting in late diagnosis. The addition of the routine testing needed for an effective level of early diagnosis would put a costly burden on both patients and healthcare systems. This research proposes a novel framework for the modelling of dementia, designed for deployment in edge hardware. This work extracts a wide variety of thoroughly researched Electroencephalogram (EEG) features, and through extensive feature selection, model testing, tuning, and edge optimisation, we propose two novel Long Short-Term Memory (LSTM) neural networks. The first uses 4 EEG sensors and can classify AD and Frontotemporal Dementia from cognitively normal (CN) subjects. The second requires 3 EEG sensors and can classify AD from CN subjects. This is achieved with optimisation that reduces the model size by 83× and latency by 3.7×, while performing with an accuracy of 98%. Comparative analysis with existing research shows this performance exceeds current less portable techniques. The deployment of this model in edge hardware could aid in routine testing, providing earlier diagnosis of dementia, reducing the strain on healthcare systems, and increasing the quality of life for those afflicted with the disease.
4

Rakshit, D. M., Anthony Robinson, and Aimee Byrne. "A Dynamic-Based Methodology for Optimising Insulation Retrofit to Reduce Total Carbon." In Creating a Roadmap Towards Circularity in the Built Environment, 71–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45980-1_7.

Abstract:
Optimisation is frequently mentioned in frameworks and assessments of design for a circular economy. Adopting circular economy principles in building retrofit can reduce the use of materials and minimise emissions embedded in building materials alongside reduced operational emissions. This paper presents the optimisation of retrofitted insulation thickness, using Ireland as a case study. Detailed and robust dynamic finite element models were developed based on in-situ boundary conditions and combined with economic and environmental considerations. It was determined that optimising insulation based on cost was considerably different to optimising based on carbon. Cost-optimised insulation can reduce overall cost, which could expand the reach of retrofit, allowing for more existing homes to be used more efficiently. However, this approach can lead to significant increases in operational carbon, and therefore a balanced decision must be made. The methodology presented can be adopted for different regions by inputting local data, which will facilitate the adoption of circular economy principles in European retrofit plans. The approach can benefit the development of circular insulation materials of low embodied carbon by building a case for their use.
5

Zeng, Tongyan, Essam F. Abo-Serie, Manus Henry, and James Jewkes. "Thermal Optimisation Model for Cooling Channel Design Using the Adjoint Method in 3D Printed Aluminium Die-Casting Tools." In Springer Proceedings in Energy, 333–40. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30960-1_31.

Abstract:
In the present study, the adjoint method is introduced to the optimisation of the corner cooling element in two baseline cooling designs for a mould cavity, as examples of the Aluminium metal die-casting process. First, a steady thermal model simulating the Aluminium die-casting process is introduced for the two-corner cooling design scenario. This steady model serves as the first iteration of the optimised model using the adjoint method. A dual-parameter objective function targets the interfacial temperature standard deviation and pressure drop across the internal cooling region. For both design cases, multi-iterative deformation cycles of the corner cooling configurations result in optimised designs with non-uniform cross-section geometries and smooth surface finishing. Numerical simulations of the resulting designs show improvements in uniform cooling across the mould/cast interfacial contact surface by 66.13% and 92.65%, while the optimised pressure drop increases coolant fluid flow by 25.81% and 20.35% respectively. This technique has been applied to optimise the complex cooling system for an industrial high-pressure aluminium die-casting (HPADC) tool (Zeng et al. in SAE Technical Paper 2022-01-0246, 2022, [1]). Production line experience demonstrates that the optimised designs have three times the operational life compared to conventional mould designs, providing a significant reduction in manufacturing and operation costs.
6

Acharyya, Madhu, and Ahmed Abdullah Abdelkarrim Abdullah Zarroug. "Optimisation of Insurance Capital and Reserve Using Catastrophe Bonds." In Contemporary Financial Management, 377–415. Institute for Local Self-Government Maribor, 2023. http://dx.doi.org/10.4335/2023.3.20.

Abstract:
The study proposes a model to optimise the size of catastrophe bonds within a firm's capital structure and minimise the cost of capital within the scope of the Insurative Model proposed by Shimpi (2001, 2002). To do so, a linear optimisation model was developed with the Solvency II ratio as a constraint. The model suggests two capital-structure mixes, one with a CAT bond share of 1.24% of capital and the other of 55.34%. The study concludes that the optimum allocation of CAT bonds adds value to insurance companies.
7

Anye Cho, Bovinille. "Surrogate and Multiscale Modelling for (Bio)reactor Scale-up and Visualisation." In Machine Learning and Hybrid Modelling for Reaction Engineering, 275–302. Royal Society of Chemistry, 2023. http://dx.doi.org/10.1039/bk9781837670178-00277.

Abstract:
Bioresource production in bioreactors presents a sustainable biotechnology for tackling the ever-increasing energy and mass demands of the world’s surging population. To attain commercial viability, reaction engineers must efficiently design and upscale these bioreactors for the industrial production of high value biochemicals, fuels, and materials. These engineers utilise computational fluid dynamics (CFD) to visualise bioreactor fluid flow and optimise dead zones with poor mixing, leading to promising bioreactor configurations. An advanced route, yet to be widely deployed, is the integration of bioreaction kinetics within the CFD framework for multiscale optimisation and upscaling. To demonstrate its potential, a two-step coupling strategy of CFD hydrodynamics to light transmission and bioreaction transport was comprehensively demonstrated herein for photobioreactors (PBRs) of different configurations and scales. The problem of the prohibitively high computational cost of simulating long-lasting fermentation experiments was addressed with a recently published accelerated growth kinetics strategy. To further cut the simulation cost stemming from the computationally expensive objective evaluation during multiscale CFD optimisation, a Gaussian process model was trained as a surrogate of the expensive multiscale CFD model and utilised within a Bayesian optimisation (BO) framework. BO suggested a near-optimal static mixer configuration for a flat plate PBR yielding over a 95.3% increase in biomass concentration compared to the baseline without static mixers. This robust and sample efficient optimisation strategy provides enormous cost savings and presents a step forward towards the efficient design, optimisation, and upscaling of bioreactors.
8

Sharma, Prerna, and Bhim Singh. "Optimization of Various Costs in Inventory Management using Neural Networks." In Advanced Mathematical Applications in Data Science, 92–104. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815124842123010009.

Abstract:
Inventory optimisation is the process of maintaining the right quantity of inventory to meet demand while minimising logistics costs and avoiding frequent inventory problems such as stock-outs, overstocking, and backorders. The chapter considers a two-warehouse setting: an owned warehouse (OW) of finite capacity located near the market, and a rented warehouse (RW) of unlimited capacity located away from the market. The goal is to lower the overall cost. Neural networks are employed to optimise the cost, and their results are compared with those of a conventional mathematical model; the findings indicate that the neural networks outperformed the mathematical model in optimising the outcomes. Supervised machine learning algorithms such as neural networks are best understood as function approximators. The benefits and drawbacks of the two-warehouse approach compared with a single-warehouse plan are also covered.
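The two-warehouse cost trade-off described above can be sketched with a simple total-cost function minimised by search. All cost parameters are illustrative placeholders, and the chapter's neural-network approach is not reproduced here; only the underlying OW/RW cost structure is shown.

```python
# Hedged sketch of a two-warehouse (OW/RW) total-cost trade-off.
# Holding cost is assumed higher in the rented warehouse; every number below
# is an invented placeholder, not data from the chapter.
OW_CAPACITY = 100                    # finite own-warehouse capacity (units)
H_OW, H_RW = 1.0, 1.6                # per-unit annual holding costs; RW costlier
ORDER_COST, DEMAND = 200.0, 1200.0   # per-order cost, annual demand (units)

def total_cost(q):
    """Annual ordering + holding cost for lot size q; overflow is stored in RW."""
    in_ow = min(q, OW_CAPACITY)
    in_rw = max(q - OW_CAPACITY, 0)
    holding = (H_OW * in_ow + H_RW * in_rw) / 2  # average on-hand stock ~ half the split
    ordering = ORDER_COST * DEMAND / q
    return ordering + holding

best_q = min(range(10, 1001), key=total_cost)
print(best_q, round(total_cost(best_q), 2))
```

A neural network, as in the chapter, would approximate this cost surface from data instead of evaluating a closed-form expression.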
9

Pandey, Binay Kumar, Mukundan Appadurai Paramashivan, Rashmi Mahajan, Darshan A. Mahajan, Nitesh Behare, Gadee Gowwrii, and Sabyasachi Pramanik. "Applications of Artificial Intelligence and Machine Learning in Supply Chain Management." In Advances in Logistics, Operations, and Management Science, 74–90. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-1347-3.ch006.

Abstract:
This study examines theoretical frameworks and methods in AI/ML-based supply chain management (SCM) and the practical, social, and theoretical impacts of its key findings. AI/ML in the supply chain is assessed through model optimisation, predictive analytics, and decision-making frameworks. A systematic literature review gathers pertinent academic, conference, and industry papers, which are critically assessed, classified, and synthesised to uncover trends and insights. The study covers AI/ML applications in supply chain demand forecasting, inventory management, logistics optimisation, and risk management, and suggests deep learning, neural networks, evolutionary algorithms, and SVMs for supply chain problems. The review highlights implementation challenges as well as the pros and cons of AI/ML in SCM. Researchers, practitioners, and policymakers may discover how AI and ML improve supply chain efficiency, cost, and networks.
10

Shboul, Bashar, Ismail Al-Arfi, Stavros Michailos, Derek Ingham, Godfrey T. Udeh, Lin Ma, Kevin Hughes, and Mohamed Pourkashanian. "Multi-Objective Optimal Performance of a Hybrid CPSD-SE/HWT System for Microgrid Power Generation." In Applications of Nature-Inspired Computing in Renewable Energy Systems, 166–210. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8561-0.ch009.

Abstract:
A new integrated hybrid solar thermal and wind-based microgrid power system is proposed. It consists of a concentrated parabolic solar dish Stirling engine, a wind turbine, and a battery bank. The electrical power curtailment is diminished, and the levelised cost of energy is significantly reduced. To achieve these goals, the present study conducts a dynamic performance analysis over one year of operation. Further, a multi-objective optimisation model based on a genetic algorithm is implemented to optimise the techno-economic performance. The MATLAB/Simulink® software was used to model the system, study the performance under various operating conditions, and optimise the proposed hybrid system. Finally, the model has been implemented for a specific case study in Mafraq, Jordan. The system satisfies a net power output of 1500 kWe. The developed model has been validated using published results. In conclusion, the obtained results reveal that the optimised model of the microgrid can substantially improve the overall efficiency and reduce the levelised cost of electricity.

Conference papers on the topic "Cost model optimisation":

1

Hidayat, Azis, Noke Fajar Prakoso, Ahmad Sujai, and PT Medco. "Production and Cost Optimization in a Complex Onshore Operation Using Integrated Production Model." In SPE Symposium: Production Enhancement and Cost Optimisation. Society of Petroleum Engineers, 2017. http://dx.doi.org/10.2118/189223-ms.

2

Jin, X., Hui Zhang, Guoqing Yin, Xingjie Han, and D. Zhu. "Applying the Integrated 3D Acid Fracturing Model Using a New Workflow in a Field Case Study." In SPE Symposium: Production Enhancement and Cost Optimisation. Society of Petroleum Engineers, 2017. http://dx.doi.org/10.2118/189217-ms.

3

Harrington, Cian, N. Vaughan, Jeffrey Allen, Benjamin David Smither, and Gavin Farmer. "Low Cost Hybrid Motorcycle Optimisation Model." In Small Engine Technology Conference & Exposition. Warrendale, PA: SAE International, 2010. http://dx.doi.org/10.4271/2010-32-0131.

4

Eke, Emmanuel, Ibiye Iyalla, Jesse Andrawus, and Radhakrishna Prabhu. "Optimisation of Offshore Structures Decommissioning – Cost Considerations." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/207206-ms.

Abstract:
The petroleum industry is currently faced with a growing number of ageing offshore platforms that are no longer in use and must be decommissioned. Offshore decommissioning is a complex venture, and such projects are expected to cost the industry billions of dollars over the next two decades. Early knowledge of decommissioning cost is important to platform owners, who bear the asset retirement obligation. However, obtaining the cost estimate for decommissioning an offshore platform is a challenging task that requires extensive structural and economic studies, further complicated by the existence of several decommissioning options such as complete and partial removal. In this paper, project costs for decommissioning 23 offshore platforms under three different scenarios are estimated using information from a publicly available source that only specified the costs of completely removing the platforms. A novel mathematical model for predicting the decommissioning cost of a platform based on its features is developed. The development included curve-fitting with the aid of the generalised reduced gradient tool in Excel® Solver and a training dataset. The developed model predicted, with a very high degree of accuracy, platform decommissioning costs for four different options under Pacific Outer Continental Shelf conditions. Model performance was evaluated by calculating the Mean Absolute Percentage Error of predictions on a test dataset, which yielded a value of about 6%, corresponding to an average prediction accuracy of about 94%.
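The model-evaluation step in this abstract — a Mean Absolute Percentage Error (MAPE) computed over a test dataset — can be sketched as follows. The cost figures are invented placeholders, not the paper's Pacific Outer Continental Shelf data.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative decommissioning-cost test set (million USD) - placeholder values
actual    = [12.0, 45.0, 30.0, 8.0]
predicted = [11.2, 48.1, 28.5, 8.6]
err = mape(actual, predicted)
print(round(err, 2))  # a MAPE of ~6% is often reported as ~94% average accuracy
```

Note that "100% minus MAPE" is a loose shorthand for accuracy rather than a probability of a correct prediction.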
5

Shan, Jianan, Bu Yu, Shaonan Feng, and Haoli Chen. "A Cost model for Battery Chargers over Energy Optimisation and Social Results." In Proceedings of the 4th International Conference on Economic Management and Big Data Applications, ICEMBDA 2023, October 27–29, 2023, Tianjin, China. EAI, 2024. http://dx.doi.org/10.4108/eai.27-10-2023.2341988.

6

Xie, Zheng, Liangxiu Han, and Richard Baldock. "Augmented Petri Net Cost Model for Optimisation of Large Bioinformatics Workflows Using Cloud." In 2013 European Modelling Symposium (EMS). IEEE, 2013. http://dx.doi.org/10.1109/ems.2013.35.

7

Castagne, Sylvie, Richard Curran, Alan Rothwell, and Adrian Murphy. "Development of a Precise Manufacturing Cost Model for the Optimisation of Aircraft Structures." In AIAA 5th ATIO and 16th Lighter-Than-Air Sys Tech. and Balloon Systems Conferences. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2005. http://dx.doi.org/10.2514/6.2005-7438.

8

Eleftheriadis, Stathis, Philippe Duffour, Paul Greening, Jess James, and Dejan Mumovic. "Multilevel Computational Model for Cost and Carbon Optimisation of Reinforced Concrete Floor Systems." In 34th International Symposium on Automation and Robotics in Construction. Tribun EU, s.r.o., Brno, 2017. http://dx.doi.org/10.22260/isarc2017/0042.

9

Coffey, Tiarnan, Christopher Rai, John Greene, and Stephen O’Brien Bromley. "Subsea Spare Parts Analysis Optimisation." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-96100.

Abstract:
The main objective of this paper is to present a fully quantitative methodology combining reliability, availability and maintainability (RAM) analysis and cost-benefit analysis (CBA) approaches to determine the optimum sparing strategy for subsea components, considering reliability data, lead times, availability and cost. This methodology can be utilized at any stage of an asset lifecycle, from design to operation, and can be adjusted to reflect modifications throughout the life of field. Using commercially available RAM analysis software, Maros [2], a reliability block diagram (RBD) is constructed to represent the reliability structure and logic of the system being analyzed. Retrievable components, for which spares would be suitable, are then identified within the model to review and update the failure modes and reliability information for each component. Reliability information can be based on project-specific data or on industry-wide sources such as OREDA. The RAM analysis software uses the Monte-Carlo simulation technique to determine availability. A sensitivity analysis is then performed to determine maximum availability while holding the minimum required stock level of spare components. A sparing priority factor (SPF) analysis is performed in addition to the RAM sensitivity analysis to support those results and to consider spare purchase, storage and preservation costs. The SPF is a number used to determine a component's need for a spare, weighting storage cost against the potential impact on production; a high SPF indicates an increased requirement to hold a spare.
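The Monte-Carlo availability analysis described above can be sketched for a single retrievable component: a failure costs the replacement lead time unless a spare is on the shelf. The exponential failure model, the instant-swap and no-replenishment rules, and all numbers are simplifying assumptions, not the Maros model from the paper.

```python
import random

def simulated_availability(mtbf, lead_time, spares, horizon=10_000.0, runs=200, seed=1):
    """Monte-Carlo availability of one component over a fixed horizon (hours)."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(runs):
        t, up, stock = 0.0, 0.0, spares
        while t < horizon:
            life = rng.expovariate(1.0 / mtbf)   # exponential time-to-failure
            up += min(life, horizon - t)
            t += life
            if t >= horizon:
                break
            if stock > 0:
                stock -= 1                        # spare on shelf: instant swap
            else:
                t += lead_time                    # no spare: wait for delivery
        total_up += up / horizon
    return total_up / runs

a0 = simulated_availability(mtbf=1000.0, lead_time=200.0, spares=0)
a1 = simulated_availability(mtbf=1000.0, lead_time=200.0, spares=2)
print(round(a0, 3), round(a1, 3))
```

Sweeping the `spares` argument and weighing the availability gain against spare purchase and storage costs mirrors the sensitivity and SPF analysis the paper describes.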
10

Padula, S., J. Harou, and L. Papageorgiou. "Balancing water resource supply and demand - a minimum cost optimisation model for infrastructure systems." In BHS 3rd International Conference. British Hydrological Society, 2010. http://dx.doi.org/10.7558/bhs.2010.ic133.


Organisational reports on the topic "Cost model optimisation":

1

Fent, Thomas, Stefan Wrzaczek, Gustav Feichtinger, and Andreas Novak. Fertility decline and age-structure in China and India. Verlag der Österreichischen Akademie der Wissenschaften, May 2024. http://dx.doi.org/10.1553/0x003f0d14.

Abstract:
China and India, two Asian countries that experienced a rapid decline in fertility since the middle of the twentieth century, are the focus of this paper. Although there is no doubt that lower fertility levels have many positive effects on the economy, development and sustainability, little is known about the optimal transition from high to medium or even low levels of fertility. Firstly, implementing policies that have the potential to reduce fertility is costly. Secondly, additional costs arise from adapting the infrastructure to a population that fluctuates quickly not only in terms of size but also with respect to the age structure. We apply an intertemporal optimisation model that takes the costs and benefits of fertility decline into account. The optimal time path depends on the cost structure, the planning horizon and the initial conditions. In the case of a long planning horizon and high initial fertility, it may even be optimal to reduce fertility temporarily below replacement level in order to slow down population growth at an early stage. A key finding of our formal investigation is that, under the same plausible parameter settings, the optimal paths for China and India differ substantially. Moreover, our analysis shows that India, where the fertility decline emerged as a consequence of societal and economic developments, followed a path closer to the optimal fertility transition than China, where the fertility decline was state-imposed. The mathematical approach deployed for this analysis provides insights into the optimal long-term development of fertility and allows for policy conclusions to be drawn for other countries that are still in the fertility transition process.
