Dissertations / Theses on the topic 'Optimization Benchmarking'

Consult the top 32 dissertations / theses for your research on the topic 'Optimization Benchmarking.'


1

Samuelsson, Oscar. "Benchmarking Global Optimization Algorithms for Core Prediction Identification." Thesis, Linköpings universitet, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-61253.

Full text
Abstract:
Mathematical modeling has evolved from being a rare event to becoming a standard approach for investigating complex biological interactions. However, variations and uncertainties in experimental data usually result in uncertain estimates of the parameters of the model. It is possible to draw conclusions from the model despite uncertain parameters by using core predictions. A core prediction is a model property which is valid for all parameter vectors that fit data at an acceptable cost. By validating the core prediction with additional experimental measurements one can draw conclusions about the overall model despite uncertain parameter values. A prerequisite for identifying a core prediction is a global search for all acceptable parameter vectors. Global optimization methods are normally constructed to search for a single optimal parameter vector, but methods searching for several acceptable parameter vectors are required here.

In this thesis, two metaheuristic optimization algorithms have been evaluated, namely Simulated annealing and Scatter search. In order to compare their differences, a set of functions has been implemented in Matlab. The Matlab functions include a statistical framework which is used to discard poorly tuned optimization algorithms, five performance measures reflecting the different objectives of locating one or several acceptable parameter vectors, and a number of test functions meant to reflect high-dimensional, multimodal problems. In addition to the test functions, a biological benchmark model is included.

The statistical framework has been used to evaluate the performance of the two algorithms with the objective of locating one and several acceptable parameter vectors. For the objective of locating one acceptable parameter vector, the results indicate that Scatter search performed better than Simulated annealing. The results also indicate that different search objectives require differently tuned algorithms. Furthermore, the results show that test functions with a suitable degree of difficulty are not a trivial task to obtain. A verification of the tuned optimization algorithms has been conducted on the benchmark model. The results are somewhat contradicting and in this specific case, it is not possible to claim that good configurations on test functions remain good in real applications.
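As a rough illustration of the "several acceptable parameter vectors" objective described in this abstract (not code from the thesis; the cooling schedule, function names and the acceptability threshold are assumptions), a minimal Python sketch of a simulated-annealing-style search that records every accepted vector below the acceptability threshold instead of keeping only the best point:

```python
import numpy as np

def collect_acceptable(cost, x0, threshold, n_iter=10_000, step=0.1, T0=1.0, seed=0):
    """Simulated-annealing-style random walk that records every accepted
    parameter vector whose cost is below an acceptability threshold."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), cost(x0)
    acceptable = []
    for k in range(n_iter):
        T = T0 * (1.0 - k / n_iter) + 1e-12           # linear cooling schedule
        cand = x + step * rng.standard_normal(x.shape)
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc                           # Metropolis acceptance
            if fx <= threshold:
                acceptable.append(x.copy())            # keep it instead of discarding it
    return np.array(acceptable)

# toy multimodal test function (illustrative only)
rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
vectors = collect_acceptable(rastrigin, x0=np.ones(5), threshold=5.0)
```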
2

Ait, Elhara Ouassim. "Stochastic Black-Box Optimization and Benchmarking in Large Dimensions." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS211/document.

Full text
Abstract:
Because of the generally high computational costs that come with large-scale problems, even more so on real-world problems, the use of benchmarks is a common practice in algorithm design, algorithm tuning and algorithm choice/evaluation. The question is then what forms these real-world problems take. Answering this question is generally hard due to the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the difficulties commonly encountered when solving continuous optimization problems. Once the difficulties are identified, one can construct relevant benchmark functions that reproduce them and allow assessing the ability of algorithms to solve them. In the case of large-scale benchmarking, it would be natural and convenient to build on the work that was already done on smaller dimensions and extend it to larger ones. When doing so, we must take into account the added constraints that come with a large-scale scenario. We need to reproduce, as much as possible, the effects and properties of any part of the benchmark that needs to be replaced or adapted for large scales, so that the new benchmarks remain relevant. It is common to classify the problems, and thus the benchmarks, according to the difficulties they present and the properties they possess. It is true that in a black-box scenario such information (difficulties, properties, ...) is supposed to be unknown to the algorithm. In a benchmarking setting, however, this classification becomes important and allows one to better identify and understand the shortcomings of a method, and thus makes it easier to improve it or, alternatively, to switch to a more efficient one (one needs to make sure the algorithms are exploiting this knowledge when solving the problems). Hence the importance of identifying the difficulties and properties of the problems of a benchmarking suite and, in our case, of preserving them. Another question that arises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small-dimension problem, it is common to have all variables contribute a fair amount to the fitness value of the solution or, at least, to be in a scenario where all variables need to be optimized in order to reach high-quality solutions. This is however not always the case in large scales; with the increasing number of variables, some of them become redundant, or groups of variables can be replaced with smaller groups, since it is then increasingly difficult to find a minimalistic representation of a problem. This minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in the number of variables is not favorable, or when larger numbers of variables and different representations of the same features within the same problem allow a better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially if such algorithms can take advantage of the low effective dimensionality of the problems or, in a complete black-box scenario, can cheaply test for a low effective dimension and optimize assuming a small effective dimension. In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions: 1. How to design a cheap and yet efficient step-size adaptation mechanism for evolution strategies? 2. How to construct and generalize low effective dimension problems? 3. How to extend a low/medium dimension benchmark to large dimensions while remaining computationally reasonable, non-trivial and preserving the properties of the original problem?
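As an illustration of the "low effective dimension" notion in question 2 (an assumed construction, not the one defined in the thesis), a minimal Python sketch that embeds a d_eff-dimensional test function into a D-dimensional search space so that only a hidden low-dimensional subspace affects the fitness:

```python
import numpy as np

def embed_low_effective_dim(f_low, d_eff, D, seed=0):
    """Wrap a d_eff-dimensional test function f_low into a D-dimensional one
    whose value depends only on a random d_eff-dimensional subspace, so the
    remaining D - d_eff directions are irrelevant to the fitness."""
    rng = np.random.default_rng(seed)
    # random orthonormal basis of the effective subspace (QR of a Gaussian matrix)
    Q, _ = np.linalg.qr(rng.standard_normal((D, d_eff)))
    return lambda x: f_low(Q.T @ np.asarray(x, float))

sphere = lambda z: float(np.dot(z, z))
f_big = embed_low_effective_dim(sphere, d_eff=10, D=1000)
print(f_big(np.random.randn(1000)))   # only 10 hidden directions matter
```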
3

Bendahmane, El Hachemi. "Introduction de fonctionnalités d'auto-optimisation dans une architecture de selfbenchmarking." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00782233.

Full text
Abstract:
Benchmarking client-server systems involves complex, distributed technical infrastructures whose management calls for an autonomic approach. This management relies on a sequence of steps, observation, analysis and feedback, which corresponds to the principle of an autonomic control loop. Previous work in the field of performance testing has shown how to introduce autonomous testing capabilities through self-regulated load injection. The objective of this thesis is to follow this autonomic computing approach by introducing self-optimization capabilities, so that reliable and comparable benchmark results can be obtained automatically, implementing all the steps of self-benchmarking. Our contribution is twofold. On the one hand, we propose an original optimization algorithm for the performance-testing context, which aims to reduce the number of candidate solutions to be tested, under an assumption about the shape of the function that links parameter values to the measured performance. This algorithm is independent of the system being optimized. It handles integer parameters whose values lie within a given interval, with a given value granularity. On the other hand, we present a component-based architectural approach and an organization of the automated benchmark into several autonomic control loops (saturation detection, load injection, optimization computation), coordinated in a loosely coupled manner through asynchronous publish-subscribe communication. Extending a component-based software framework for self-regulated load injection, we add components to automatically reconfigure and restart the system being optimized. Two series of experiments were carried out to validate our self-optimization facility. The first series concerns an online-shopping web application deployed on a Java EE application server. The second concerns clusterSample, an application with three effective tiers (web, business (EJB on JOnAS) and database), each tier running on a separate physical machine.
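To make the idea of a shape assumption on the performance function concrete (a hypothetical sketch, not the algorithm proposed in the thesis; `measure`, `lo`, `hi` and `step` are illustrative names), here is a minimal ternary search over an integer parameter range with a given granularity, assuming the measured performance is unimodal in that parameter; `measure()` stands in for a full self-regulated load-injection run after reconfiguring and restarting the system:

```python
def best_setting(measure, lo, hi, step):
    """Ternary search over an integer parameter range [lo, hi] with a given
    granularity `step`, assuming the performance curve is unimodal."""
    candidates = list(range(lo, hi + 1, step))
    left, right = 0, len(candidates) - 1
    while right - left > 2:
        m1 = left + (right - left) // 3
        m2 = right - (right - left) // 3
        if measure(candidates[m1]) < measure(candidates[m2]):
            left = m1 + 1        # optimum cannot be in the left third
        else:
            right = m2 - 1       # optimum cannot be in the right third
    return max(candidates[left:right + 1], key=measure)

# toy stand-in for a benchmark campaign: throughput peaks at a pool size of 48
fake_throughput = lambda pool_size: -(pool_size - 48) ** 2
print(best_setting(fake_throughput, lo=8, hi=128, step=8))
```

In a real campaign each `measure()` call is expensive, so results would be cached; measurement noise also weakens the unimodality assumption, which is exactly why the thesis frames it as an explicit hypothesis.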
4

Yilmaz, Eftun. "Benchmarking of Optimization Modules for Two Wind Farm Design Software Tools." Thesis, Högskolan på Gotland, Institutionen för kultur, energi och miljö, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-1946.

Full text
Abstract:
Optimization of wind farm layout is an expensive and complex task involving several engineering challenges. The layout of any wind farm directly impacts profitability and return on investment. Several optimization modules supplied with industrial wind farm design tools currently attempt to place the turbines in locations with good wind resources while adhering to the constraints of a defined objective function. A clear assessment of these tools is needed for evaluating them in the wind farm layout design process. However, there is still no clear demonstration of benchmarking and comparison of these software tools, even for simple test cases. This work compares the optimization modules of two commercial tools, openWind and WindPRO, against each other.
5

Li, Xi. "Benchmark generation in a new framework /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?IELM%202007%20LI.

Full text
6

Goldberg, Benjamin. "Benchmarking Traffic Control Algorithms on a Packet Switched Network." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1192.

Full text
Abstract:
Traffic congestion has tremendous economic and environmental costs. One way to reduce this congestion is to implement more intelligent traffic light systems. There is significant existing research into different algorithms for controlling traffic lights, but they all use separate systems for performance testing. This paper presents the Rush Hour system, which models a network of roadways and traffic lights as a network of connected routers and end nodes. Several traffic switching algorithms are then tested on the Rush Hour system. As expected, we found that the more intelligent systems were effective at reducing congestion at low and medium levels of traffic. However, they were comparable to more naive algorithms at higher levels of traffic.
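As a toy illustration of the router analogy (hypothetical classes and policies, not the Rush Hour implementation), a Python sketch that treats an intersection as a switch with one queue per incoming road and lets a pluggable policy choose which queue gets the green each tick:

```python
import collections
import random

class Intersection:
    """Toy model of a traffic light as a packet switch: cars queue on incoming
    roads (ports) and a control policy decides which queue gets the green."""
    def __init__(self, roads):
        self.queues = {road: collections.deque() for road in roads}

    def arrive(self, road, car):
        self.queues[road].append(car)

    def step(self, policy):
        green = policy(self.queues)              # pick the road served this tick
        return self.queues[green].popleft() if self.queues[green] else None

# two simple policies one might benchmark against each other
random_pick = lambda queues: random.choice(list(queues))
longest_queue_first = lambda queues: max(queues, key=lambda r: len(queues[r]))

crossing = Intersection(["north", "south", "east", "west"])
crossing.arrive("north", "car-1")
crossing.arrive("east", "car-2")
print(crossing.step(longest_queue_first))        # serves the longest queue first
```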
7

Randau, Simon [Verfasser]. "Benchmarking of SSB, reference cells and optimization of the cathode composite / Simon Randau." Gießen : Universitätsbibliothek, 2021. http://d-nb.info/1236385675/34.

Full text
8

Kumar, Vachan. "Modeling and optimization approaches for benchmarking emerging on-chip and off-chip interconnect technologies." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54280.

Full text
Abstract:
Modeling approaches are developed to optimize emerging on-chip and off-chip electrical interconnect technologies and benchmark them against conventional technologies. While transistor scaling results in an improvement in power and performance, interconnect scaling results in a degradation in performance and electromigration reliability. Although graphene potentially has superior transport properties compared to copper, it is shown that several technology improvements like smooth edges, edge doping, good contacts, and good substrates are essential for graphene to outperform copper in high performance on-chip interconnect applications. However, for low power applications, the low capacitance of graphene results in 31% energy savings compared to copper interconnects, for a fixed performance. Further, for characterization of the circuit parameters of multi-layer graphene, multi-conductor transmission line models that account for an alignment margin and finite width of the contact are developed. Although it is essential to push for an improvement in chip performance by improving on-chip interconnects, devices, and architectures, the system level performance can get severely limited by the bandwidth of off-chip interconnects. As a result, three-dimensional integration and airgap interconnects are studied as potential replacements for conventional off-chip interconnects. The key parameters that limit the performance of a 3D IC are identified as the Through Silicon Via (TSV) capacitance, driver resistance, and on-chip wire resistance on the driver side. Further, the impact of on-chip wires on the performance of 3D ICs is shown to be more pronounced at advanced technology nodes and when the TSV diameter is scaled down. Airgap interconnects are shown to improve aggregate bandwidth by 3x to 5x for backplane and Printed Circuit Board (PCB) links, and by 2x for silicon interposer links, at comparable energy consumption.
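The capacitance-versus-energy trade mentioned here can be made concrete with two first-order textbook relations (illustrative only; these are not the compact models developed in the dissertation):

```latex
% Dynamic energy per switching event of a wire of length L, and the
% distributed-RC (Elmore-type) delay that fixes the performance target.
\begin{align*}
  E_{\mathrm{switch}} &\approx \tfrac{1}{2}\, c\,L\, V_{dd}^{2}, \\
  \tau_{\mathrm{wire}} &\approx 0.38\, r\,c\, L^{2},
\end{align*}
% r, c: per-unit-length resistance and capacitance.  At iso-delay, a lower c
% (as claimed for graphene) can be traded against a higher r, while the
% switching energy drops roughly in proportion to c.
```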
9

Schütze, Lars, and Jeronimo Castrillon. "Analyzing State-of-the-Art Role-based Programming Languages." ACM, 2017. https://tud.qucosa.de/id/qucosa%3A73196.

Full text
Abstract:
With ubiquitous computing, autonomous cars, and cyber-physical systems (CPS), adaptive software becomes more and more important as computing is increasingly context-dependent. Role-based programming has been proposed to enable adaptive software design without the problem of scattering the context-dependent code. Adaptation is achieved by having objects play roles during runtime. With every role, the object's behavior is modified to adapt to the given context. In recent years, many role-based programming languages have been developed. While they greatly differ in the set of supported features, they all incur large runtime overheads, resulting in inferior performance. The increased variability and expressiveness of the programming languages have a direct impact on the runtime and memory consumption. In this paper, we provide a detailed analysis of state-of-the-art role-based programming languages, with emphasis on performance bottlenecks. We also provide insight on how to overcome these problems.
10

Bjäreholt, Johan. "RISC-V Compiler Performance: A Comparison between GCC and LLVM/clang." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14659.

Full text
Abstract:
RISC-V is a new open-source instruction set architecture (ISA) whose first mass-produced processors were manufactured in December 2016. It focuses on both efficiency and performance and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture.

The goal of this thesis is to evaluate the performance of the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance is evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They are run with both the GCC and LLVM/clang compilers at different optimization levels and compared, in performance per clock, to the ARM architecture, which is mature yet rather similar to RISC-V. The compiler support for the RISC-V target is still in development, and the focus of this thesis is the current performance differences between the GCC and LLVM compilers on this architecture. The platforms we execute the benchmarks on are the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience.

The results show that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference. At the lower -O1 optimization level, at -O0 (no optimizations) and at -Os (optimization for a smaller executable code size), GCC performs much worse than on ARM: 46% of the performance at -O1, 8.2% at -Os and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V reached 9.2% of the performance on ARM in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options made only a very minor impact on performance, making it 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3, so even with optimizations it was still slower than GCC without optimizations.

In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does, however, seem to be room for improvement at the lower optimization levels, which in turn could also possibly increase the performance of the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work needs to be done to make it competitive in both performance and stability with the GCC compiler and other architectures. Why -O0 is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.
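To illustrate the "performance per clock" normalization used in the comparison (the scores and clock speeds below are placeholders, not measurements from the thesis), a small Python sketch:

```python
# Normalize benchmark scores by core clock so boards with different clock
# speeds (e.g. HiFive1 vs. Teensy 3.6) can be compared directly.
results = {
    ("riscv-gcc",   "-O3"): {"coremark": 750.0, "mhz": 256.0},   # placeholder values
    ("arm-gcc",     "-O3"): {"coremark": 540.0, "mhz": 180.0},
    ("riscv-clang", "-O3"): {"coremark":  45.0, "mhz": 256.0},
}

def per_clock(entry):
    return entry["coremark"] / entry["mhz"]        # iterations/s per MHz

baseline = per_clock(results[("arm-gcc", "-O3")])
for (toolchain, level), entry in sorted(results.items()):
    rel = 100.0 * per_clock(entry) / baseline
    print(f"{toolchain:12s} {level}: {per_clock(entry):6.2f} CoreMark/MHz "
          f"({rel:5.1f}% of ARM)")
```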
11

Ramos, Calderón Antonio José. "Computational and accuracy benchmarking of simulation and system-theoretic models for production systems engineering." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19877.

Full text
Abstract:
The modern industry has an increasing demand for simulation software able to help workers and decision-makers visualize the outputs of a specific process in a fast, accurate way. In this report, a comparative study between FACTS (Factory Analyses in ConcepTual phase using Simulation), Plant Simulation, and the PSE (Production System Engineering) Toolbox is done regarding their capacity to simulate models with increasing complexity, how accurate their outputs are with different optimized buffer allocations, and how well they perform on the task of detecting the bottlenecks of a process. Benchmarking simulation software requires an experimental approach, and for gathering and organizing all the data generated, external programs like MATLAB, C, Excel, and R are used. A high level of automation is required, as otherwise the manual input of data would take too long to be effective. The results show broad agreement between FACTS and Plant Simulation, the most widely used commercial DES (Discrete Event Simulation) software, and the more mathematical-theoretical approach of the PSE Toolbox. The optimization done in the report links to sustainability, with an enhanced throughput (TH) improving the ecological, social and economic aspects, and to Lean philosophy, using lean buffers that smooth and improve the production flow.
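As a self-contained illustration of the kind of model being benchmarked (a deliberately simplified sketch, not FACTS, Plant Simulation or the PSE Toolbox, and not tied to the report's models), a slotted-time Python simulation of a small serial line that reports throughput and per-station utilisation, the latter serving as a crude bottleneck indicator:

```python
import random

def simulate_line(cycle_times, buffer_caps, minutes=10_000, avail=0.95, seed=1):
    """Toy slotted-time simulation of a serial line: station i pulls from
    buffer i-1, works for cycle_times[i] minutes (when not randomly down),
    then pushes to buffer i, blocking if that buffer is full.  The station
    with the highest utilisation is a candidate bottleneck."""
    random.seed(seed)
    n = len(cycle_times)
    buffers = [0] * (n - 1)             # buffer i sits between station i and i+1
    remaining = [0] * n                 # minutes left on the current part
    done_part = [False] * n             # finished part waiting to be pushed
    busy = [0] * n
    finished = 0
    for _ in range(minutes):
        for i in reversed(range(n)):    # serve downstream first to free space
            if done_part[i]:            # try to push a finished part
                if i == n - 1:
                    finished, done_part[i] = finished + 1, False
                elif buffers[i] < buffer_caps[i]:
                    buffers[i] += 1
                    done_part[i] = False
            if remaining[i] == 0 and not done_part[i]:
                if (i == 0 or buffers[i - 1] > 0) and random.random() < avail:
                    if i > 0:
                        buffers[i - 1] -= 1
                    remaining[i] = cycle_times[i]
            if remaining[i] > 0:
                busy[i] += 1
                remaining[i] -= 1
                if remaining[i] == 0:
                    done_part[i] = True
    return finished / minutes, [b / minutes for b in busy]

throughput, utilisation = simulate_line([4, 6, 5], buffer_caps=[3, 3])
print(f"throughput: {throughput:.3f} parts/min, utilisation: {utilisation}")
# expect the 6-minute station to show the highest utilisation (bottleneck)
```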
12

Conte, Francesco. "A general purpose algorithm for a class of vehicle routing problems: A benchmark analysis on literature instances." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The aim of this work is to analyse the computational performance of a general-purpose heuristic capable of solving a large class of Vehicle Routing Problems. The general Ruin and Recreate (R&R) framework of the algorithm is discussed, with particular attention to some of the removal and insertion operators used in the implementation. The benchmark analysis is then performed using standard benchmark instances of three different routing problems. Before analysing the algorithm, a definition of the problem is provided together with some mathematical formulations. Exact methods are briefly discussed, whereas an exhaustive presentation of the most famous (meta)heuristic approaches is given. The obtained results show that the algorithm returns good solutions for almost all the different problem variants with up to 500 customers.
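As a compact illustration of the Ruin and Recreate scheme described here (a generic skeleton with assumed operator choices, random removal and cheapest greedy insertion, not the specific operators from the thesis), in Python:

```python
import random

def ruin_and_recreate(routes, cost, ruin_fraction=0.2, iters=1000, seed=0):
    """Skeleton of a Ruin & Recreate loop for a VRP-like problem: repeatedly
    remove a fraction of customers (ruin) and greedily re-insert them at the
    cheapest position (recreate), keeping improving solutions."""
    rng = random.Random(seed)
    best = [r[:] for r in routes]
    for _ in range(iters):
        cand = [r[:] for r in best]
        customers = [c for r in cand for c in r]
        removed = rng.sample(customers, max(1, int(ruin_fraction * len(customers))))
        for r in cand:                                   # ruin: drop selected customers
            r[:] = [c for c in r if c not in removed]
        for c in removed:                                # recreate: cheapest insertion
            best_pos, best_delta = None, float("inf")
            for ri, r in enumerate(cand):
                for pos in range(len(r) + 1):
                    trial = cand[:ri] + [r[:pos] + [c] + r[pos:]] + cand[ri + 1:]
                    delta = cost(trial) - cost(cand)
                    if delta < best_delta:
                        best_pos, best_delta = (ri, pos), delta
            ri, pos = best_pos
            cand[ri].insert(pos, c)
        if cost(cand) < cost(best):
            best = [r[:] for r in cand]
    return best
```

Here `routes` is a list of routes (lists of customer ids) and `cost` is any caller-supplied function mapping a list of routes to a total distance; real implementations use several alternating removal/insertion operators and an acceptance criterion rather than pure improvement.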
13

Nam, Le Thanh. "Stochastic Optimization Methods for Infrastructure Management with Incomplete Monitoring Data." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/85384.

Full text
14

Rajendran, Ajith, and Gautham Asokan. "Real Time Monitoring of Machining Process and Data Gathering for Digital Twin Optimization." Thesis, KTH, Industriell produktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301594.

Full text
Abstract:
In the development stages of a digital twin of production assets, especially machine tools, real-time process monitoring and data gathering prove to be vital. Having a monitoring system that monitors and updates the operators or managers in real time helps improve productivity by reducing downtime through predictive/preventive analytics and by incorporating in-process quality assessment capabilities. When it comes to real-time monitoring of machine tools and processes, sensor technologies have proven to be the most effective and widely researched. Years of research and development have paved the way for many smart sensor technologies that come both built into the machine tools and as external applications. However, these technologies prove to be expensive and complicated to implement, especially for small and medium enterprises. This thesis focuses on evaluating and testing a simple, cost-efficient monitoring system using inexpensive sensor technologies that would help optimize an existing digital twin setup for machine tools in small and medium enterprises. Experiments were performed with a 5-axis CNC machine tool using different tools, materials and varying operating parameters, and the relevant sensor data were collected, mapped and analysed for accuracy and benchmarking. The thesis also evaluates the integration of this data with the information already collected from other sources to improve existing data reliability, and provides guidelines on how this could be usefully transformed to create more value for SMEs.
15

Nascimento, Andreas. "Mathematical modeling for drilling optimization in pre-salt sections : a focus on south Atlantic ocean operations /." Guaratinguetá, 2016. http://hdl.handle.net/11449/136182.

Full text
Abstract:
Advisor: Mauro Hugo Mathias
Co-advisor: Gerhard Thonhauser
Examining committee: Edson Cocchieri Botelho, João Andrade de Carvalho Junior, José Luis Gonçalves, Behzad Elahifar
Pre-salt basins and their exploration have become more and more frequently mentioned over the years, not just for their potential reserves, but also for the implicit operational challenges that must be faced in order to make these fields commercially viable. Several research efforts have aimed at addressing these barriers, among which drilling optimization and efficiency stand out as a considerably complex area. The problem is concentrated in the low drillability and in the high cost involved when drilling the pre-salt carbonates. The outcome of this research is based on studies performed on eight pre-salt wells, addressing drilling operational time savings referenced against benchmarks and the choice of drilling mechanics parameters. The studies were based on simulations performed with rate-of-penetration (ROP) modeling combined with specific energy (SE). The Bourgoyne Jr. and Young Jr. (1974) ROP model was used, given that the other models presented errors higher than 40%; in terms of SE, the formulations of Teale (1965) and Pessier et al. (1992) were used. All of this classic literature is still present in the industry, and the software Oracle Crystal Ball was used as a supporting tool for the simulations. This research yielded four important results: 1) the polycrystalline diamond compact (PDC) drill-bit is the most suitable choice for the pre-salt, presenting the lowest teeth/cutter wear rate, 0.28 [%/m]; 2) the possible savings in operational time found for the pre-salt operations represent approximately 13,747,550.00 [USD] for the analyzed pre-salt wells; 3) the final mathematical model developed, after the adjustments for the pre-salt, drops the relative error from 36.52% to 23.12% when comparing the calculated and modeled ROP with the field-measured ROP... (complete abstract available via electronic access)
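For reference, the specific-energy formulation attributed to Teale (1965) and mentioned above is commonly written as follows in customary field units (this is the textbook form, reproduced here for context; the exact variant and unit conventions used in the thesis may differ):

```latex
\[
  \mathrm{MSE} \;=\; \underbrace{\frac{\mathrm{WOB}}{A_b}}_{\text{thrust term}}
  \;+\; \underbrace{\frac{120\,\pi\, N\, T}{A_b\,\mathrm{ROP}}}_{\text{rotary term}}
\]
% WOB: weight on bit [lbf], A_b: bit area [in^2], N: rotary speed [rev/min],
% T: torque [ft-lbf], ROP: rate of penetration [ft/h]; MSE results in [psi].
```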
16

Dočekal, Petr. "Optimalizace procesu nákupu ve společnosti ŠKODA AUTO a.s." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201999.

Full text
Abstract:
This diploma thesis deals with the optimization of the procurement process at ŠKODA AUTO a.s., specifically with Forward Sourcing, which is responsible for the nomination of suppliers for new parts of starting projects. The optimization is carried out through a benchmark against the analogous process at Volkswagen AG. The aim is to create proposals for changes and recommendations for the effective functioning of Forward Sourcing. The theoretical part defines the terms of process management and procurement within the organization. The practical part focuses on the analysis, comparison and measurement of the process, from which the proposals for changes and recommendations are derived.
17

Lövgren, Sebastian, and Emil Norberg. "Topology Optimization of Vehicle Body Structure for Improved Ride & Handling." Thesis, Linköpings universitet, Maskinkonstruktion, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71009.

Full text
Abstract:
Ride and handling are important areas for safety and improved vehicle control during driving. To meet the demands on ride and handling, a number of measures can be taken. This master thesis work has focused on the early design phase. At the early phases of design, the level of detail is low and the design freedom is large. By introducing a tool to support the early vehicle body design, the potential of finding more efficient structures increases. In this study, topology optimization of a vehicle front structure has been performed using OptiStruct by Altair Engineering. The objective has been to find the optimal topology of beams and rods to achieve high stiffness of the front structure for improved ride and handling. Based on topology optimization, a proposal for a beam layout in the front structure area has been identified. A vital part of the project has been to describe how to use topology optimization as a tool in the design process. During the project, different approaches have been studied to go from a large design space to a low-weight architecture based on a beam-like structure. The different approaches are described, our experience and recommendations are presented, and the general result of a topology-optimized architecture for vehicle body stiffness is also presented.
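For context, stiffness-driven topology optimization of this kind is typically posed as a compliance-minimization problem; the density-based (SIMP) textbook formulation is sketched below (illustrative only; the exact responses, constraints and solver settings used with OptiStruct in the thesis may differ):

```latex
\[
\begin{aligned}
  \min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{U}^{\mathsf T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} \\
  \text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \\
  & \textstyle\sum_e \rho_e\, v_e \le f\, V_0, \qquad 0 < \rho_{\min} \le \rho_e \le 1, \\
  & E_e(\rho_e) = \rho_e^{\,p}\, E_0 \quad (p \approx 3),
\end{aligned}
\]
% rho_e: element densities (design variables), K: stiffness matrix, U, F: displacement
% and load vectors, f V_0: allowed volume fraction, p: SIMP penalization exponent.
```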
18

Ceyhan, Ahmet. "Interconnects for future technology generations - conventional CMOS with copper/low-k and beyond." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53080.

Full text
Abstract:
The limitations of the conventional Cu/low-k interconnect technology for use in future ultra-scaled integrated circuits down to 7 nm in the year 2020 are investigated from the power/performance point of view. Compact models are used to demonstrate the impacts of various interconnect process parameters, for instance, the interconnect barrier/liner bilayer thickness and aspect ratio, on the design and optimization of a multilevel interconnect network. A framework to perform a sensitivity analysis for the circuit behavior to interconnect process parameters is created for future FinFET CMOS technology nodes. Multiple predictive cell libraries down to the 7‒nm technology node are constructed to enable early investigation of the electronic chip performance using commercial electronic design automation (EDA) tools with real chip information. Findings indicated new opportunities that arise for emerging novel interconnect technologies from the materials and process perspectives. These opportunities are evaluated based on potential benefits that are quantified with rigorous circuit-level simulations and requirements for key parameters are underlined. The impacts of various emerging interconnect technologies on the performances of emerging devices are analyzed to quantify the realistic circuit- and system-level benefits that these new switches can offer.
19

Ait, Ouassarah Azhar. "ADI : A NoSQL system for bi-temporal databases." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI046/document.

Full text
Abstract:
Nowadays, every company operates in very dynamic and complex environments, which requires its managers to have a deep understanding of the business in order to take rapid and relevant decisions and thus maintain or improve the company's activities. They can rely on analyzing the data deluge generated by the company's activities. To meet this challenge, a new class of systems called "Operational Intelligence" (OI) has emerged in the decision-support-system galaxy. The objective is to enable operational managers to understand what happened in the past as well as what is currently happening in their business. In this context, the notions of time and traceability turn out to play a crucial role in understanding what happened and what is currently happening in the company. In this thesis, we present "Axway Decision Insight" (ADI), an Operational Intelligence solution developed by Axway. ADI's key component is a proprietary bi-temporal and column-oriented DBMS that has been specially designed to meet OI requirements. Its bi-temporal capabilities make it possible to capture data evolution both in the modeled reality (valid time) and in the database (transaction time). We first introduce ADI by focusing on two topics: 1) the GUI that makes the platform "code-free"; 2) the adopted bi-temporal modeling approaches. Then we propose a performance benchmark that meets ADI's requirements. Next, we present two bi-temporal query optimizations for ADI. The first one consists in redefining a complex bi-temporal query into: 1) a set of continuous queries in charge of computing aggregation operations as data is collected; 2) a bi-temporal query that accesses the continuous queries' results and feeds the GUI. The second one is a cost-based optimization that uses statistics on bi-temporal data to determine an "optimal" query plan. For these two optimizations, we conducted experiments using our benchmark, which demonstrate their benefits.
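To make the valid-time/transaction-time distinction concrete (a minimal, generic illustration; ADI's actual data model and API are not shown here), a small Python sketch of bi-temporal records and an "as of" point query:

```python
from dataclasses import dataclass
from datetime import datetime

MAX = datetime.max   # marks an open-ended interval

@dataclass
class BitemporalFact:
    """One version of a fact, stamped on two time axes: when it was true in
    the modeled reality (valid time) and when the database believed it
    (transaction time)."""
    key: str
    value: float
    valid_from: datetime
    valid_to: datetime
    tx_from: datetime
    tx_to: datetime

def as_of(facts, key, valid_at, tx_at):
    """Bi-temporal point query: what did the database believe at tx_at about
    the value of `key` at real-world instant valid_at?"""
    return [
        f for f in facts
        if f.key == key
        and f.valid_from <= valid_at < f.valid_to
        and f.tx_from <= tx_at < f.tx_to
    ]

now = datetime(2016, 6, 1)
facts = [BitemporalFact("price:SKU42", 9.99,
                        valid_from=datetime(2016, 1, 1), valid_to=MAX,
                        tx_from=datetime(2016, 1, 2), tx_to=MAX)]
print(as_of(facts, "price:SKU42", valid_at=now, tx_at=now))
```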
20

Mesmoudi, Amin. "Declarative parallel query processing on large scale astronomical databases." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10326.

Full text
Abstract:
This work is carried out in the framework of the PetaSky project. The objective of this project is to provide a set of tools for managing petabytes of data from astronomical observations. Our work is concerned with the design of a scalable approach. We started by analyzing the ability of MapReduce-based systems supporting SQL to manage the LSST data and to ensure optimization capabilities for certain types of queries. We analyzed the impact of data partitioning, indexing and compression on query performance. From our experiments, it follows that there is no "magic" technique to partition, store and index data; the efficiency of dedicated techniques depends mainly on the type of queries and the typology of the data considered. Based on our benchmarking work, we identified some techniques to be integrated into large-scale data management systems. We designed a new system that supports multiple partitioning mechanisms and several evaluation operators. We used the BSP (Bulk Synchronous Parallel) model as the parallel computation paradigm. Unlike the MapReduce model, we send intermediate results to workers that can continue their processing. Data is logically represented as a graph. The evaluation of queries is performed by exploring the data graph using forward and backward edges. We also offer a semi-automatic partitioning approach, i.e., we provide the system administrator with a set of tools for choosing how to partition the data using the database schema and domain knowledge. The first experiments show that our approach provides a significant performance improvement with respect to MapReduce systems.
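As a toy illustration of superstep-style evaluation over a data graph (a generic BSP sketch with made-up vertex names; not the system described in the thesis), in Python:

```python
from collections import defaultdict

def bsp_forward_traversal(edges, frontier, max_supersteps=10):
    """Toy BSP-style evaluation: in each superstep, every vertex in the current
    frontier sends its id along outgoing (forward) edges; vertices that receive
    a message join the next frontier.  A backward traversal would simply use
    the reversed adjacency lists."""
    out = defaultdict(list)
    for src, dst in edges:
        out[src].append(dst)
    visited = set(frontier)
    for _ in range(max_supersteps):
        messages = defaultdict(list)
        for v in frontier:                     # computation + communication phase
            for w in out[v]:
                messages[w].append(v)
        frontier = {w for w in messages if w not in visited}   # barrier, next superstep
        visited |= frontier
        if not frontier:
            break
    return visited

edges = [("obj1", "obs1"), ("obj1", "obs2"), ("obs2", "exp7")]   # made-up vertices
print(bsp_forward_traversal(edges, frontier={"obj1"}))
```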
21

Mainuš, Matěj. "Demonstrace a proměření "next-gen" grafických API." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255354.

Full text
Abstract:
The goal of this master's thesis was to demonstrate and benchmark the performance of the Mantle and Vulkan APIs with different optimization methods. The thesis proposes a rendering toolkit with optimization methods based on parallel command-buffer generation, persistent mapping of staging buffers, minimizing pipeline-configuration and descriptor-set changes, and pre-allocating device memory that is managed and shared between multiple resources. The result is a reference implementation that can render a dynamic scene with thousands of objects in real time.
22

Spejchal, Luděk. "Optimalizace sourcingu v konkrétní frimě." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-10357.

Full text
Abstract:
The aim of this diploma thesis is a detailed view of the process and the way all sourcing activities are managed in the company Thermo Fisher Scientific. The focus of my work is the Czech site of the company, situated in Mukařov, which, apart from production itself, is also engaged in sourcing both for its own needs (where in most cases it holds the decision power) and for the whole company on a global scale (where it operates only as an intermediary searching for possible business partners, without any decision power). The main contribution of my work lies in the comparison of both of these variants and in a detailed investigation and description of all the activities that could help optimize global sourcing. I found that the optimization rests largely on improving communication among the sourcing managers and technologists within the global corporation. On this basis I have designed templates for the documents the sourcing managers should work with, a supplier database and an information server. The result of my work is a process scheme which should contribute to the optimization of outsourced projects.
23

MALAGO', Anna. "A systematic approach for calibrating and validating the agro-hydrological SWAT model for policy support and decision making in large European River Basins." Doctoral thesis, Università degli studi di Ferrara, 2016. http://hdl.handle.net/11392/2403465.

Full text
Abstract:
This thesis describes the research I conducted during a three-year doctoral program (2013-2015) in Engineering Science, in the branch study of Civil and Environmental Engineering. During this period, I focused on the development of a systematic modeling approach for calibrating and validating the agro-hydrological SWAT model for realistically simulating all critical hydrological and water quantity processes in large River Basins in Europe (i.e. surface runoff, lateral flow, baseflow, erosion and sedimentation, plant growth, nutrients cycle/fate/transport, denitrification and karst phenomena). This research stems from the need to provide robust and suitable model assessment for making sound management, policy and regulatory decisions. Several innovations were introduced in the modeling approach aimed both to improve model structure and calibration procedure. First of all, modifications of SWAT model were applied to produce new useful outputs for calibration and interpretation of specific processes. New algorithms for the calculation of hillslope length parameter and LS factor were also proposed and tested, as well as a new MUSLE equation. Furthermore, karst processes were represented using the KSWAT model, a combination of SWAT with a karst-flow model. Concerning the calibration/validation, a process-based approach was developed involving both hard (i.e. long time series in multiple gauging stations) and soft data (i.e. literature information of a specific process within a water, sediment, or nutrient balance that may not be directly measured within the study area, e.g. average annual estimate of denitrification) for a threefold objective: to match well the observations, to understand the processes within a basin and to provide accurate cost-benefit scenarios analysis for achieving the goals of the main European Directives. The proposed systematic modeling approach consists on different aspects: the definition of a process-based calibration and validation (C/V) strategy for quantity (streamflow and its components) and quality aspects (sediment and nutrients); detailed study for representing hydrological processes at different climate regimes and in karst dominant morphologies; validation of water balance components using a Budyko framework approach; the inter-model-comparison of outputs 2 (Benchmarking approach); the definition of a suitable model setup based on a sensitivity analysis of derived topographic attributes from different Digital Elevation Model (DEM) pixel size; the definition of cost-effective measures for the Best Management Practices (BMPs) implementation. Five SWAT model case studies were used to illustrate these topics covering approximately 55% of Europe Union. The Iberian (556,000 km2) and the Scandinavia (1,000,000 km2) Peninsulas were selected to test the C/V strategy in different climate regimes, while the Danube River Basin (800,000 km2), as well as the Upper Danube (132,000 km2), were considered as strategic largesocioeconomic-heterogeneous areas for investigating the main key topics of the procedure through water quantity and quality assessment. The Crete Island (8,400 km2) was instead selected as representative for karst phenomena assessment, as it is covered more than 40% by karst features. 
The analysis of these SWAT model applications has shown that the process-based C/V strategy is able to obtain good performance statistics and to provide good knowledge of each hydrological process through the analysis of temporal and spatial variations of calibrated streamflow in large regions characterized by heterogeneous topography, land uses, soils and climate regimes. Furthermore, the analysis of the main components of the water balance (evapotranspiration and baseflow) via the Budyko framework highlighted the difficulty SWAT has in correctly predicting baseflow in regulated mountainous basins, and the dependence of the procedure on the number and spatial distribution of gauging stations, on the impact of anthropogenic water storage, and on water diversions. It was also observed that predicted streamflow at the large scale is not affected by DEM pixel size (tested with both 25 m and 100 m pixels) or by SWAT topographic attributes (e.g. slope and hillslope length). Conversely, the streamflow components were markedly affected by changes in the hillslope length parameter calculation driven by DEM pixel size, highlighting the need to improve the current SWAT algorithm for a better representation of the streamflow components, as well as of sediment yields via the Modified Universal Soil Loss Equation (MUSLE). This equation was modified to reduce the sensitivity of sediment yields to the Hydrologic Response Units (HRUs) and to the slope-length factor (LS), obtaining robust simulations of sediment concentrations and yields and suitable budgets in large river basins (the standard form of the MUSLE relation is recalled in the sketch after this abstract). Furthermore, it was demonstrated that SWAT is able to reproduce karst processes when suitably adapted to represent karst features and their intrinsic characteristics (such as fast infiltration into deep groundwater, movement of water in karst conduits across subbasins that are not hydrologically connected, and the return of water as spring discharges into the rivers), thus increasing the reliability of water balance predictions in the numerous river basins in Europe affected by karst water resources. As regards water quality (sediment and nutrients), it was observed that only a few watershed parameters were sensitive to calibration, increasing the difficulty of representing the spatial variation of some processes over large areas, such as denitrification and in-stream sediment transport. However, the monthly seasonal variations of total nitrogen and phosphorus concentrations were well reproduced at multiple gauging stations, providing a substantial control of pollution as directly requested by the European Directives (i.e. Drinking Water Directive, 98/83/EC). Furthermore, the inter-model comparisons of nutrient loads confirmed the ability of the SWAT model to predict comparable nutrient loads in large river basins, although the need to collect more environmental data emerged. Finally, the proposed multi-objective optimization tool for BMPs implementation in SWAT was recognized as very useful for identifying efficient scenarios, related to the reduction of mineral fertilization and the upgrading of Waste Water Treatment Plants (WWTPs), providing significant nutrient concentration reductions with the best cost-effectiveness. These findings can also be summarised as several useful recommendations for SWAT modellers.
In conclusion, the proposed systematic approach for the C/V procedure with SWAT has proven to be a pedagogical and powerful tool for scientists, policy makers and stakeholders alike, and could be extended to other hydrological and water quality models with a structure similar to that of SWAT.
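For context, the standard MUSLE relation used in SWAT, which the thesis modifies, is commonly written as below (symbols follow the SWAT documentation; the modified form proposed in the thesis is not reproduced here):

```latex
% Standard MUSLE as used in SWAT (Williams, 1975); the thesis proposes a modified form.
% sed        : sediment yield on a given day (metric tons)
% Q_surf     : surface runoff volume (mm H2O)
% q_peak     : peak runoff rate (m^3/s)
% A_hru      : area of the hydrologic response unit (ha)
% K, C, P, LS: USLE soil erodibility, cover, support practice and topographic factors
% CFRG       : coarse fragment factor
\begin{equation}
  \mathrm{sed} = 11.8\,\bigl(Q_{\mathrm{surf}}\; q_{\mathrm{peak}}\; A_{\mathrm{hru}}\bigr)^{0.56}
  \; K_{\mathrm{USLE}}\; C_{\mathrm{USLE}}\; P_{\mathrm{USLE}}\; LS_{\mathrm{USLE}}\; \mathrm{CFRG}
\end{equation}
```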
APA, Harvard, Vancouver, ISO, and other styles
24

Салавеліс, Д. Є., Д. Е. Салавелис, and D. E. Salavelis. "Формування складових конкурентоспроможності потенціалу підприємства." Diss., Одеський національний економічний університет, 2019. http://dspace.oneu.edu.ua/jspui/handle/123456789/11142.

Full text
Abstract:
The dissertation deals with the scientific problem of improving the theoretical and methodological bases for defining and realizing the formation of the components of competitiveness of enterprise potential. The object of the research is the processes of managing the competitiveness of the enterprise potential in order to increase it under market conditions. The subject of the research is a set of theoretical-methodological approaches and applied bases for analysing the competitiveness of enterprise potential with the purpose of increasing it and forming competitive advantages. The theoretical contributions are the following: the essence of the concept of competitiveness of enterprise potential is formulated, and a unified definition is proposed that characterizes its ability to withstand competition in development strategies. The methodical toolkit for the competitive analysis of enterprise potential is determined. Theoretical approaches to the analysis of the competitiveness of enterprise potential, consistent with the industry specificity of concrete producers, are formulated taking into account the range of components of their competitiveness. The importance of concrete producers in the economy of Ukraine is emphasized, and a conceptual scheme of competitive analysis of potential is developed within the framework of their functioning in the market, economic and institutional environment. The dissertation identifies domestic concrete producers as research respondents (LLC «Comfort-LV», LLC «Hi-Raise Constructions Holding», Odessa Branch of LLC with AI «Dyckerhoff (Ukraine)», PP «Construction industry», LLC «West», LLC «Element»), which hold a significant market share in Odessa and the Odessa region and controlled 92.8% of the concrete market segment according to 2015-2017 data. Both the conceptual approach to assessing the competitiveness of the respondents' potential and systematic analytical procedures within the concept of the enterprise ACP (Assessment of Competitive Potential) were applied. The «STATISTICA» package was used; its graphical capabilities allowed the author to determine basic, optimistic and pessimistic scenarios for finding competitiveness reserves through computer interpretation of an interval forecast of enterprise potential. The author systematized the forecasting mechanism and a time horizon that ensures forecast accuracy, analysed the scenarios for finding reserves to improve the level of competitiveness, and predicted indicators of competitiveness. For concrete manufacturers, a new object-oriented scheme of forecast analysis was chosen to increase competitiveness which, unlike existing schemes of analysis, allows investigating not only the dynamics of the indicators but also the forecast line and its boundary values, the optimistic (upper) and pessimistic (lower) forecasts, at a 90% level of reliability. In the dissertation, a mathematical solution to the problem of the optimal distribution of enterprise potential was formulated using an optimization process. The algorithm for solving the problem was applied to analyse the optimal values of the competitive indices of enterprise potential in the form of graphs.
A comparison of the competitiveness indices before and after the optimization process was performed, which showed the dynamics of enterprise potential development. The analysis of the competitiveness of these enterprises made it possible to determine the vector of development and the economic effect in terms of component potentials. The dissertation also forms the methodical basis for introducing consulting support of benchmarking into the management of enterprise potential. The result of research through consulting support of benchmarking is the formation, improvement or change of the strategy for increasing the competitiveness of the enterprise potential. This conceptual approach, which combines solving the problem of optimizing the enterprise potential with consulting support of benchmarking, was applied to formulate a business decision on the construction of a concrete production plant at the Odessa Branch of LLC with AI «Dyckerhoff (Ukraine)». Measures developed to validate the results of implementing the ACP concept at the enterprise made it possible to obtain an economic effect of UAH 3,173.31 thousand, that is, to increase the market share by 3% and to increase the competitive indices of the Odessa Branch of LLC with AI «Dyckerhoff (Ukraine)» through the realization of reserves, raising the potential competitiveness in marketing terms by 51.43% and in organizational terms by 37.29%. Thus, production capacity reserves decreased significantly. The obtained economic effect demonstrates the significance of the enterprise ACP concept.
APA, Harvard, Vancouver, ISO, and other styles
25

Saulich, Sven. "Generic design and investigation of solar cooling systems." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13627.

Full text
Abstract:
This thesis presents work on a holistic approach for improving the overall design of solar cooling systems driven by solar thermal collectors. Newly developed methods for the thermodynamic optimization of hydraulics and control were used to redesign an existing pilot plant. Measurements taken from the newly developed system show an 81% increase in the Solar Cooling Efficiency (SCEth) factor compared to the original pilot system. In addition to the improvements in system design, new efficiency factors for benchmarking solar cooling systems are presented. The Solar Supply Efficiency (SSEth) factor provides a means of quantifying the quality of solar thermal charging systems relative to the usable heat to drive the sorption process. The product of the SSEth and the already established COPth of the chiller leads to the SCEth factor which, for the first time, provides a clear and concise benchmarking method for the overall design of solar cooling systems. Furthermore, the definition of a coefficient of performance including irreversibilities from energy conversion (COPcon) enables a direct comparison of compression and sorption chiller technology. This new performance metric is applicable to all low-temperature heat-supply machines for direct comparison of different types or technologies. The findings of this work led to an optimized generic design for solar cooling systems, which was successfully transferred to the market.
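The product relation stated in the abstract can be written compactly as follows (notation as introduced above):

```latex
% Relationship between the benchmarking factors described in the abstract:
% SSE_th : Solar Supply Efficiency of the solar thermal charging system
% COP_th : thermal coefficient of performance of the sorption chiller
% SCE_th : resulting Solar Cooling Efficiency of the overall system
\begin{equation}
  \mathrm{SCE}_{th} = \mathrm{SSE}_{th} \times \mathrm{COP}_{th}
\end{equation}
```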
APA, Harvard, Vancouver, ISO, and other styles
26

"Benchmarking iterative optimization algorithms." Tulane University, 2020.

Find full text
Abstract:
Choosing which numerical optimization algorithm will perform best on a given problem is a task that researchers often face. Optimization benchmarking experiments allow researchers to compare the performance of different algorithms on various problems and thus provide insights into which algorithms should be used for a given problem. We benchmarked two prototypical iterative optimization algorithms, gradient descent and BFGS, on a suite of test problems using the COCO benchmarking software. Our results indicate that the performance of gradient descent and BFGS varies by dimension, problem class, and solution accuracy. We provide recommendations for improving algorithm accuracy while reducing computational cost based on the implications of our results.
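As a rough illustration of the kind of comparison described (the thesis itself used the COCO suite and its test problems, not this script), a minimal gradient-descent-versus-BFGS run on the Rosenbrock function might look as follows, using SciPy's BFGS implementation:

```python
"""Minimal sketch: plain gradient descent versus BFGS on the Rosenbrock function.
Illustrative only; the thesis used the COCO benchmarking suite, not this script."""
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

def gradient_descent(f, grad, x0, max_iters=5000, tol=1e-8):
    """Gradient descent with a simple backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for it in range(1, max_iters + 1):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step, fx = 1.0, f(x)
        # Halve the step until the Armijo sufficient-decrease condition holds.
        while step > 1e-12 and f(x - step * g) > fx - 1e-4 * step * (g @ g):
            step *= 0.5
        x = x - step * g
    return x, it

x0 = np.full(5, 3.0)  # 5-dimensional starting point

x_gd, gd_iters = gradient_descent(rosen, rosen_der, x0)
bfgs = minimize(rosen, x0, jac=rosen_der, method="BFGS")

print(f"GD   : f = {rosen(x_gd):.3e} after {gd_iters} iterations")
print(f"BFGS : f = {bfgs.fun:.3e} after {bfgs.nit} iterations")
```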
Elliot Hill
APA, Harvard, Vancouver, ISO, and other styles
27

Lalonde, Nicolas. "Multiobjective Optimization Algorithm Benchmarking and Design Under Parameter Uncertainty." Thesis, 2009. http://hdl.handle.net/1974/2586.

Full text
Abstract:
This research aims to improve our understanding of multiobjective optimization, by comparing the performance of five multiobjective optimization algorithms, and by proposing a new formulation to consider input uncertainty in multiobjective optimization problems. Four deterministic multiobjective optimization algorithms and one probabilistic algorithm were compared: the Weighted Sum, the Adaptive Weighted Sum, the Normal Constraint, the Normal Boundary Intersection methods, and the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The algorithms were compared using six test problems, which included a wide range of optimization problem types (bounded vs. unbounded, constrained vs. unconstrained). Performance metrics used for quantitative comparison were the total run (CPU) time, number of function evaluations, variance in solution distribution, and numbers of dominated and non-optimal solutions. Graphical representations of the resulting Pareto fronts were also presented. No single method outperformed the others for all performance metrics, and the two different classes of algorithms were effective for different types of problems. NSGA-II did not effectively solve problems involving unbounded design variables or equality constraints. On the other hand, the deterministic algorithms could not solve a problem with a non-continuous objective function. In the second phase of this research, design under uncertainty was considered in multiobjective optimization. The effects of input uncertainty on a Pareto front were quantitatively investigated by developing a multiobjective robust optimization framework. Two possible effects on a Pareto front were identified: a shift away from the Utopia point, and a shrinking of the Pareto curve. A set of Pareto fronts were obtained in which the optimum solutions have different levels of insensitivity or robustness. Four test problems were used to examine the Pareto front change. Increasing the insensitivity requirement of the objective function with regard to input variations moved the Pareto front away from the Utopia point or reduced the length of the Pareto front. These changes were quantified, and the effects of changing robustness requirements were discussed. The approach would provide designers with not only the choice of optimal solutions on a Pareto front in traditional multiobjective optimization, but also an additional choice of a suitable Pareto front according to the acceptable level of performance variation.
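A minimal sketch of the Weighted Sum method compared in the thesis, applied to a simple convex bi-objective problem (an illustrative stand-in, not one of the six test problems used in the study):

```python
"""Sweep the weight between two objectives and collect the resulting
Pareto-optimal points (Weighted Sum method on Schaffer's bi-objective problem).
Illustrative only; not the thesis's test suite or implementation."""
import numpy as np
from scipy.optimize import minimize_scalar

f1 = lambda x: x ** 2            # first objective
f2 = lambda x: (x - 2.0) ** 2    # second objective

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):
    # Scalarize: minimize w*f1 + (1-w)*f2 over the single design variable x.
    res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x),
                          bounds=(-5, 5), method="bounded")
    pareto_points.append((f1(res.x), f2(res.x)))

for p in pareto_points:
    print(f"f1 = {p[0]:6.3f}   f2 = {p[1]:6.3f}")
```

For this convex example the swept weights trace the whole Pareto front; as the abstract notes, scalarization-based approaches can fail on other problem classes, such as non-continuous objectives.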
Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2009-08-10 21:59:13.795
APA, Harvard, Vancouver, ISO, and other styles
28

Kandoor, Arun Kumar. "Algorithms and Benchmarking for Virtual Network Mapping." 2011. https://scholarworks.umass.edu/theses/560.

Full text
Abstract:
Network virtualization has become a primary enabler for solving the internet ossification problem. It allows multiple architectures or protocols to run on a shared physical infrastructure. One of the important aspects of network virtualization is having a virtual network (VN) mapping technique which uses the substrate resources efficiently. Currently, very few VN mapping techniques exist, and there is no common evaluation strategy which can test these algorithms effectively. In this thesis, we advocate the need for such a tool and develop it by considering a wide spectrum of parameters and simulation scenarios. We also provide various performance metrics and carry out a comparison study of the existing algorithms. Based on the comparative study, we point out the positives and negatives of the existing mapping algorithms and propose a new LP formulation based on a hub location approach that efficiently allocates substrate resources to virtual network requests. Our results show that our algorithm does better in terms of the number of successful network mappings and the average time to map, while balancing load on the network.
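As a toy illustration of an LP-based mapping step (a deliberately simplified node-assignment relaxation, not the hub-location formulation proposed in the thesis; all data below are made up):

```python
"""Toy LP sketch in the spirit of VN mapping formulations: assign virtual nodes
to substrate nodes at minimum cost without exceeding CPU capacity. Simplified
illustration only; not the thesis's hub-location LP, and all data are made up."""
from pulp import LpMinimize, LpProblem, LpStatus, LpVariable, lpSum

virtual = {"a": 10, "b": 20, "c": 15}        # virtual node -> CPU demand
substrate = {"s1": 30, "s2": 25, "s3": 40}   # substrate node -> CPU capacity
# Hypothetical mapping cost per unit of CPU placed on each substrate node.
unit_cost = {"s1": 1.0, "s2": 0.8, "s3": 1.5}

prob = LpProblem("toy_vn_node_mapping", LpMinimize)
# LP relaxation of the assignment variables: fraction of v placed on s.
x = {(v, s): LpVariable(f"x_{v}_{s}", lowBound=0, upBound=1)
     for v in virtual for s in substrate}

# Objective: total cost-weighted CPU placed on the substrate.
prob += lpSum(unit_cost[s] * virtual[v] * x[v, s] for v in virtual for s in substrate)
# Each virtual node must be fully placed.
for v in virtual:
    prob += lpSum(x[v, s] for s in substrate) == 1
# Substrate CPU capacities must be respected.
for s in substrate:
    prob += lpSum(virtual[v] * x[v, s] for v in virtual) <= substrate[s]

prob.solve()
print(LpStatus[prob.status])
for (v, s), var in x.items():
    if var.value() and var.value() > 1e-6:
        print(f"{v} -> {s}: {var.value():.2f}")
```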
APA, Harvard, Vancouver, ISO, and other styles
29

Sala, Ramses. "Towards efficient multidisciplinary design optimization for car body structures." Doctoral thesis, 2016. http://hdl.handle.net/2158/1042892.

Full text
Abstract:
The aim of the research activity presented here is to contribute to the identification and development of efficient strategies for multidisciplinary design optimization of vehicle structures involving crashworthiness, vibro-acoustic and lightweight design criteria. The literature survey at the start of this activity identified that, although a large variety of optimization strategies and methods are described in the literature, only few comparisons or guidelines are available for the selection of efficient optimization algorithms or methods for vehicle optimization problems involving the mentioned design criteria. In this work, several state-of-the-art optimization algorithms for multidisciplinary design optimization were selected and systematically compared for their efficiency on applications that typically occur within a car body design optimization context. Although these comparisons mainly involved existing methods, the resulting comparisons on the industrially relevant application of vehicle design related optimization problems extend the currently available literature. The results could serve as initial guidelines for practitioners in industry and as a starting point for further research. In the optimization literature, there are many test functions/problems available that are commonly used for comparative assessments of optimization algorithms. These test problems are, however, difficult to relate to industrially relevant problems and vice versa. A novel Representative Surrogate Problem approach is developed in the scope of this work, which can be summarized as a strategy to construct optimization test problems based on the response characteristics of real-world problems. The approach is presented and investigated for its application to car body design problems. Inspired by the response characterization strategies and results, a novel test function generation method based on the composition of random fields is presented. The resulting method is a contribution to the field of global optimization in general and is not restricted to automotive applications. This method enables the construction of synthetic optimization problems with various interesting function features. Due to the parameterized nature of the method, these test functions enable structured investigations of the influence of particular problem features on the performance of optimization algorithms. Compared to existing test functions, the method provides a further step towards systematic problem-feature-oriented performance analysis of meta-heuristic optimization methods, which contributes to the analysis, selection and development of optimization algorithms for non-convex optimization problems. The overall results of the performed comparisons and case studies with the developed methods showed that significant gains in optimization efficiency can be achieved by selecting suitable optimization algorithms and tuned parameter settings for optimization problem formulations relevant to car body design. The comparison results stressed the need to take optimization efficiency into account, whereas in many case studies in the literature optimization algorithms are selected without proper justification. The presented results and methods are relevant for practitioners in industry and for further research on the development of optimization algorithms for complex problems.
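The following sketch illustrates only the general idea of generating parameterized, multimodal synthetic test functions from random components (here, superposed Gaussian bumps); it is not the random-field composition method developed in the thesis:

```python
"""Superpose randomly drawn Gaussian bumps to obtain a parameterized, multimodal
test landscape. Illustration of the general concept only; not the random-field
composition method actually developed in the thesis."""
import numpy as np

def make_random_bump_function(dim, n_bumps=20, length_scale=0.2, seed=0):
    """Return f(x) built from `n_bumps` Gaussian bumps with random centres and weights."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(0.0, 1.0, size=(n_bumps, dim))
    weights = rng.normal(size=n_bumps)

    def f(x):
        x = np.asarray(x, dtype=float)
        sq_dists = np.sum((centres - x) ** 2, axis=1)
        # Negative sum of bumps -> a multimodal minimization landscape.
        return -np.sum(weights * np.exp(-sq_dists / (2.0 * length_scale ** 2)))

    return f

# Two instances with different seeds give structurally similar but distinct problems.
f_a = make_random_bump_function(dim=3, seed=1)
f_b = make_random_bump_function(dim=3, seed=2)
x = np.full(3, 0.5)
print(f_a(x), f_b(x))
```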
APA, Harvard, Vancouver, ISO, and other styles
30

Dymond, Antoine Smith Dryden. "Tuning optimization algorithms under multiple objective function evaluation budgets." Thesis, 2014. http://hdl.handle.net/2263/43554.

Full text
Abstract:
The performance of optimization algorithms is sensitive to both the optimization problem's numerical characteristics and the termination criteria of the algorithm. Given these considerations, two tuning algorithms named tMOPSO and MOTA are proposed to assist optimization practitioners in finding algorithm settings which are appropriate for the problem at hand. For a specified problem, tMOPSO aims to determine multiple groups of control parameter values, each of which results in optimal performance at a different objective function evaluation budget. To achieve this, the control parameter tuning problem is formulated as a multi-objective optimization problem. Furthermore, tMOPSO uses a noise-handling strategy and a control parameter value assessment procedure which are specialized for tuning stochastic optimization algorithms. The principles upon which tMOPSO was designed are expanded into the context of many-objective optimization to create the MOTA tuning algorithm. MOTA tunes an optimization algorithm to multiple problems over a range of objective function evaluation budgets. To optimize the resulting many-objective tuning problem, MOTA makes use of bi-objective decomposition. The last section of the work entails an application of the tMOPSO and MOTA algorithms to benchmark optimization algorithms according to their tunability. Benchmarking via tunability is shown to be an effective approach for comparing optimization algorithms, in which the various control parameter choices available to an optimization practitioner are included in the benchmarking process.
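A minimal sketch of the budget-aware performance data that such budget-dependent tuning works from, using a toy stochastic optimizer (this illustrates only the assessment idea, not the tMOPSO or MOTA algorithms themselves):

```python
"""For each candidate control-parameter setting, record the best objective value
reached at several evaluation budgets. Illustration of the assessment idea behind
budget-dependent tuning; not tMOPSO or MOTA."""
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def random_search(step_size, budgets, dim=10, seed=0):
    """Toy stochastic optimizer (random local steps); returns best-so-far at each budget."""
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(-5, 5, dim)
    f_best = sphere(x_best)
    record = {}
    for evals in range(1, max(budgets) + 1):
        cand = x_best + step_size * rng.normal(size=dim)
        f_cand = sphere(cand)
        if f_cand < f_best:
            x_best, f_best = cand, f_cand
        if evals in budgets:
            record[evals] = f_best
    return record

budgets = {100, 1000, 10000}
for step_size in (0.01, 0.1, 1.0):  # candidate control-parameter values
    perf = random_search(step_size, budgets)
    print(step_size, {b: round(perf[b], 4) for b in sorted(perf)})
```

Different step sizes typically perform best at different budgets, which is the reason settings are tuned per evaluation budget.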
Thesis (PhD)--University of Pretoria, 2014
gm2015
Mechanical and Aeronautical Engineering
PhD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
31

Oliveira, Pedro Miguel Martins de. "Benchmarking sobre técnicas de otimização para modelos de apoio à decisão na medicina intensiva." Master's thesis, 2015. http://hdl.handle.net/1822/39591.

Full text
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Decision support models in intensive care are developed to support medical staff in decision making about the treatments to be applied to a patient. Numerous decision support systems (DSS) have been developed in recent decades for a variety of environments. In many of these DSS, machine learning is used to address a specific problem. However, the optimization of these systems is particularly difficult due to their dynamic, complex and multidisciplinary nature. Thus, there is constant research into and development of new algorithms capable of extracting knowledge from large volumes of data, obtaining better predictive results than current algorithms. In fact, a large group of techniques and models is emerging that is better suited to the nature and complexity of the problem. This work fits into that context. This dissertation aims to identify these optimization techniques and to evaluate, compare and classify them in order to identify which ones best respond to the particularities of Critical Care Medicine. As examples, several models were analyzed: Evolutionary Fuzzy Rule Learning, Lazy Learning, Evolutionary Crisp Rule Learning, Prototype Generation, Fuzzy Instance Based Learning, Decision Trees, Crisp Rule Learning, Neural Networks and Evolutionary Prototype Selection. Afterwards, some developments/tests were carried out in order to apply the best technique to an intensive care problem, where the Decision Trees Genetic Algorithm, Supervised Classifier System and KNNAdaptive techniques achieved the highest accuracy rates, thus showing their feasibility and ability to operate in a real environment.
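As an illustration of this kind of model comparison (using scikit-learn stand-ins and a public toy dataset, not the intensive-care data or the KEEL-style techniques evaluated in the dissertation):

```python
"""Cross-validated accuracy of a few classifier families on a stand-in dataset.
The dissertation compared evolutionary rule learners, fuzzy/instance-based methods,
etc. on intensive-care data; here, scikit-learn models and the breast-cancer toy
dataset are used purely as placeholders."""
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "Neural network (MLP)": MLPClassifier(max_iter=2000, random_state=0),
}

for name, model in models.items():
    pipeline = make_pipeline(StandardScaler(), model)  # scale features, then classify
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"{name:22s} mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```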
APA, Harvard, Vancouver, ISO, and other styles
32

Pinheiro, Joana Marcelino. "Development of a zeolitic heat exchanger for heating applications." Doctoral thesis, 2019. http://hdl.handle.net/10773/25540.

Full text
Abstract:
The worldwide climate changes and the scarcity of natural resources have been driving measures to reinvent the energy system towards a low-carbon and sustainable model. Adsorption heat pumps (AHPs) are among the alternatives investigated for the creation of nearly zero energy buildings, as they may help to globally decarbonize the society. This work addresses various domains which are important for the research and development of AHPs, namely, experimental characterization of adsorbents, modeling and simulation of adsorption heating units, optimization of the AHPs design and operation, prototype design and benchmarking against more conventional solutions. The overall heating performance of several adsorbents - ETS-10, zeolites (13X, 4A and NaY), silica-gel, MOF CPO-27(Ni), and AQSOATM FAM-Z02 - for water AHPs was investigated under distinct geometric and operating conditions. Regarding ETS-10/water pair, adsorption equilibrium and kinetic properties were measured, along with the effective thermal conductivity and specific heat capacity of ETS-10. These results were used to model and simulate a tubular adsorbent heat exchanger (AHEx). The developed model contemplated material and energy balances, adsorption equilibrium, external heat transfer limitations, and intraparticle mass transport. Values of coefficient of performance (COP) and specific heating power (SHP) in the range 1.36-1.39 and 249-934 W kg−1 were obtained, respectively, for adsorbent bed thicknesses (δ) of 2-6 mm. Sensitivity studies showed that parameters δ and adsorbent regeneration temperature may influence considerably the cycle time (tcycle) and the cyclic adsorption loading swing (ΔWcycle). The ETS-10 was compared against well-known adsorbents like silica gel, zeolite 4A and zeolite 13X, for water AHPs, showing that it is outperformed by zeolite 13X, for bed regeneration, condensation and evaporation at 473 K, 333 K and 278 K, respectively. This was partly attributed to the higher amount of heat generated per cycle when using the pair zeolite 13X/water. For zeolite 13X particle diameters between 0.2 and 0.6 mm, values of COP = 1.48 and SHP = 1141–1254 W kg−1 were obtained. Aiming to reduce computational and numerical efforts in the simulations, the impact of considering some model simplifications while ensuring comparable predictions of the AHP performance for zeolite 13X/water pair was investigated. It was concluded that, e.g., the use of an average and fixed value of the intraparticle mass transfer coefficient is sufficient to predict reliable cycle performances. Since the presence of a binder in the formulation of the adsorbents may harm the adsorption loading and kinetics, the heating performance of commercial 13X and NaY zeolites, with and without binder, was compared for water AHPs, through modeling and simulations. The results unveiled that the performance of zeolite 13X is not significantly penalized by the presence of the binder. The binderless NaY surpassed zeolites 13X for regeneration, condensation, and evaporation temperatures of 398.15-448.15 K, 308.15-328.15 K and 278.15 K, respectively, achieving COP ≤ 1.53 and SHP ≤ 430 W kg-1, essentially due to its higher ΔWcycle. As boosting the market competitiveness of AHPs implies the development of optimized appliances, the potential of combining phenomenological modeling and statistical tools like design of experiments and response surface methodology (DoE/RSM) to aid efficient optimization of AHPs was demonstrated for the pair binderless zeolite NaY/water. 
A Box-Behnken design with four factors - time of adsorption and desorption, condensation temperature, heat source temperature, bed thickness - and three levels was considered, taking COP and SHP as response variables. The statistical outcomes from DoE/RSM included: (i) Pareto charts displaying the impact ranking of the factors upon COP and SHP, and (ii) polynomial equations to efficiently estimate both performance indicators as functions of the factors and vice-versa. These models made it possible to map the system performances in a broad range of conditions with a low number of simulations, and to select optimal combinations of geometric and operating parameters to meet pre-established performance requisites. Overall, these results provided insights into the great potential of DoE/RSM for building up optimized AHExs and advanced control strategies of AHPs. Given the myriad of potential applications claimed for metal-organic frameworks (MOFs), for which massive scientific investigation is ongoing, the potential of MOF CPO-27(Ni) for water adsorption heating was investigated in this work, with the aid of modeling and computational simulations. A customized solver and methodology for simulating adsorption heating cycles was developed in OpenFOAM, and validated using data from the literature. An improved AHEx design was considered, consisting of a tube surrounded by a coating composite of CPO-27(Ni)/copper foam. The COPs and SHPs obtained were in the ranges 1.16-1.39 and 1922-5130 W kg-1, respectively, for evaporation, condensation and bed regeneration temperatures of 278.15 K, 308.15 K and 368.15 K. Under these working conditions, the CPO-27(Ni) was surpassed by the benchmark adsorbent AQSOATM FAM-Z02, which was essentially attributed to the lower ΔWcycle and slower intraparticle mass transfer kinetics of the MOF. An experimental installation combining an AHP and a gas water heater (GWH) that may be assembled to test the performance of several adsorbents was designed, and an experimental protocol prepared. Technical specifications of assorted components were defined and suppliers' proposals analyzed, in order to estimate the budget for such a prototype. Finally, a potential concept of an adsorption appliance for domestic hot water production (DHW) was presented and compared against the current Bosch heat pump water heater (HPWH Supraeco W). Despite the eco-friendliness of AHPs, these systems still raise considerable techno-economic challenges, since they require significant dimensions, as well as high complexity and price. On the whole, one concludes that the competitiveness of adsorption technology for DHW production strongly depends on the development of water adsorbents with a better performance/price ratio, and on improved formulations like coatings, instead of beds with random particles of adsorbent.
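A sketch of the response-surface step described above, fitting a quadratic model of a performance indicator as a function of coded design factors (the factor names, toy response and data are hypothetical; the thesis fitted such surfaces to detailed AHP simulations run on a Box-Behnken design):

```python
"""Fit a quadratic polynomial response surface predicting a performance indicator
(a made-up stand-in for COP) from coded design factors, as DoE/RSM does.
Hypothetical data and toy 'simulator'; not the thesis's models or results."""
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Coded factor levels in [-1, 1]: cycle time, condenser temperature, source temperature, bed thickness.
X = rng.uniform(-1, 1, size=(60, 4))

def toy_cop(x):
    """Hypothetical smooth response standing in for the simulated COP."""
    t_cycle, t_cond, t_source, thickness = x
    return 1.4 + 0.05 * t_source - 0.04 * t_cond - 0.03 * thickness ** 2 + 0.02 * t_cycle * t_source

y = np.array([toy_cop(x) for x in X]) + rng.normal(scale=0.005, size=len(X))

# Full quadratic response surface (intercept, linear, interaction and squared terms).
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)

x_new = np.array([[0.2, -0.5, 0.8, 0.0]])
print("Predicted response at a new design point:", surface.predict(x_new)[0])
```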
As alterações climáticas e a escassez de recursos naturais têm motivado a criação de medidas para reinventar o sistema energético, rumo a uma economia mais sustentável. As bombas de calor por adsorção (AHP) fazem parte das alternativas investigadas para a criação de edifícios com necessidades energéticas quase nulas. Este trabalho abrange vários domínios com relevância para a investigação e desenvolvimento das AHP, nomeadamente, caraterização de adsorventes, modelação e simulação de unidades de aquecimento por adsorção, otimização do design e operação de AHP, prototipagem e comparação com tecnologias convencionais. Foram investigados os desempenhos de diversos adsorventes em AHP, considerando a água como adsorvato e diferentes condições de operação e de geometria. Os adsorventes selecionados foram o titanossilicato número 10 (ETS-10), três zeólitos (13X, 4A e NaY), a rede metalo-orgânica cristalina (MOF) CPO-27(Ni) e o fosfato de sílica-alumina AQSOATM FAM-Z02. No tocante ao par ETS-10/água, foram medidas isotérmicas de adsorção e propriedades cinéticas, assim como condutividades térmicas e capacidades caloríficas específicas do adsorvente. Estes resultados foram utilizados para modelar e simular um permutador de calor tubular contendo ETS-10. O modelo desenvolvido contemplou balanços de massa e energia, equilíbrio de adsorção, resistência externa à transferência de calor e transporte intraparticular de massa. Para espessuras de leito (δ) de ETS-10 entre 2 e 6 mm, obtiveram-se valores de coeficiente de performance (COP) e de potência específica de aquecimento (SHP) nos intervalos 1.36-1.39 e 249-934 W kg−1, respetivamente. Estudos de sensibilidade mostraram que parâmetros como o δ e a temperatura de regeneração do adsorvente podem influenciar consideravelmente o tempo de ciclo (tciclo) e a capacidade cíclica de adsorção (ΔWciclo) do sistema. O ETS-10 foi comparado com adsorventes bastante conhecidos, tais como, sílica gel e os zeólitos 4A e 13X, tendo-se concluído que o seu desempenho para fins de aquecimento é ultrapassado pelo do zeólito 13X, para regeneração de leito realizada a 473 K, e condensação e evaporação do refrigerante a 333 K e 278 K, respetivamente. Estes resultados foram, em parte, atribuídos a uma maior libertação de calor por ciclo, quando se usa o par 13X/água. Para tamanhos de partícula entre 0.2 e 0.6 mm, este par apresentou COP = 1.48 e SHP no intervalo 1141-1254 W kg−1. Com o objetivo de reduzir o esforço numérico e computacional em simulações, foi estudado o impacto de se introduzirem algumas simplificações no modelo, sem deixar de garantir as previsões razoáveis de desempenho das AHP. Por exemplo, a utilização de um valor médio fixo para o coeficiente intraparticular de transferência de massa é razoável na avaliação dos desempenhos nos ciclos de aquecimento. Uma vez que a presença de agentes ligantes na formulação de adsorventes pode diminuir a capacidade de adsorção e afetar a cinética, foram estudados os desempenhos de aquecimento de adsorventes zeolíticos comerciais (13X e NaY) com e sem ligantes. Os resultados, considerando água como adsorvato, indicaram que a existência de um ligante na formulação do zeólito 13X não afetava consideravelmente o seu desempenho. 
No âmbito deste estudo, verificou-se ainda que o zeólito NaY sem ligante é o adsorvente mais promissor para temperaturas de regeneração do leito, condensação e evaporação de 398.15-448.15 K, 308.15-328.15 K e 278.15 K, respetivamente, atingindo COP ≤ 1.53 e SHP ≤ 430 W kg-1, essencialmente devido a ΔWciclo mais elevado do que o dos zeólitos 13X. Dado que a otimização das AHP é importante para aumentar a sua competitividade, o potencial de combinar modelação fenomenológica com ferramentas estatísticas, tais como o desenho fatorial de experiências e a metodologia da superfície de resposta (DoE/RSM), foi estudado na otimização de AHP com o par zeólito NaY/água. Para tal, foi considerado o desenho de experiências de Box-Behnken com quatro fatores – tempo de adsorção e dessorção, temperatura de condensação, temperatura da fonte de aquecimento e espessura de leito – e três níveis, sendo COP e SHP as variáveis de resposta. Deste estudo obtiveram-se gráficos de Pareto, mostrando a importância dos diversos fatores no COP e no SHP, e equações polinomiais para estimar de forma expedita o COP e o SHP em função dos fatores e vice-versa. Estas equações permitiram mapear o desempenho da AHP numa ampla gama de condições com um número pequeno de simulações, e ainda identificar combinações ótimas de parâmetros geométricos e de operação para cumprir pré-requisitos de desempenho. Em suma, este estudo mostrou o grande potencial de DoE/RSM para desenvolver componentes mais otimizados e estratégias de controlo avançadas de AHP. Tendo em conta a miríade de potenciais aplicações que tem sido reivindicada para redes metalo-orgânicas cristalinas (MOFs), sobre os quais existe um grande foco da investigação científica, o potencial do MOF CPO-27(Ni) para aplicações de aquecimento por adsorção de água foi investigado usando ferramentas de modelação e simulação computacional. Para este efeito, foi desenvolvido em OpenFOAM um solver customizado e uma metodologia para simular ciclos de aquecimento por adsorção, que foram validados com dados da literatura. Neste estudo, considerou-se uma geometria de leito de adsorvente mais avançada, consistindo num tubo metálico revestido com um filme de um compósito de CPO-27(Ni)/espuma de cobre. Os COP e SHP foram, respetivamente, 1.16-1.39 e 1922-5130 W kg-1, para temperatura de evaporação, condensação e regeneração de leito de 278.15 K, 308.15 K e 368.15 K. Uma comparação deste MOF com o adsorvente de referência para AHP, nomeadamente AQSOATM FAM-Z02, permitiu concluir que o desempenho do CPO-27(Ni) é ultrapassado pelo do segundo, essencialmente devido ao ΔWciclo inferior e à transferência intraparticular de massa mais lenta do CPO-27(Ni). No contexto desta dissertação, foi ainda desenhada uma instalação experimental combinando uma AHP com um esquentador, que poderá ser montada proximamente para testar o desempenho de diversos adsorventes, tendo sido elaborado o respetivo protocolo. As especificações técnicas de diversos componentes para o protótipo foram definidas e foram analisadas propostas de vários fornecedores, a partir das quais se estimou o custo da instalação. Finalmente, foi desenhado um possível conceito de uma AHP para aquecimento de água doméstica, o qual foi comparado com a atual bomba de calor da Bosch para este fim (Supraeco W). Apesar dos benefícios ambientais das AHP, concluiu-se que estes sistemas suscitam ainda grandes desafios técnico-económicos, uma vez que exigem dimensões significativas, bem como complexidade e preço elevados. 
No cômputo geral, conclui-se que a competitividade da tecnologia de aquecimento de água doméstica por adsorção depende largamente do desenvolvimento de adsorventes de água com melhor rácio desempenho/preço e da aposta em formulações mais eficientes como, por exemplo, na preparação de filmes ao invés de enchimentos aleatórios de partículas de adsorvente.
Doctoral Programme in Refining, Petrochemical and Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
