Academic literature on the topic 'Optimization Benchmarking'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Optimization Benchmarking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF, or read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Optimization Benchmarking"

1

Rojas-Labanda, Susana, and Mathias Stolpe. "Benchmarking optimization solvers for structural topology optimization." Structural and Multidisciplinary Optimization 52, no. 3 (May 17, 2015): 527–47. http://dx.doi.org/10.1007/s00158-015-1250-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tedford, Nathan P., and Joaquim R. R. A. Martins. "Benchmarking multidisciplinary design optimization algorithms." Optimization and Engineering 11, no. 1 (March 20, 2009): 159–83. http://dx.doi.org/10.1007/s11081-009-9082-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Moré, Jorge J., and Stefan M. Wild. "Benchmarking Derivative-Free Optimization Algorithms." SIAM Journal on Optimization 20, no. 1 (January 2009): 172–91. http://dx.doi.org/10.1137/080724083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ajani, Oladayo S., Abhishek Kumar, Rammohan Mallipeddi, Swagatam Das, and Ponnuthurai Nagaratnam Suganthan. "Benchmarking Optimization-Based Energy Disaggregation Algorithms." Energies 15, no. 5 (February 22, 2022): 1600. http://dx.doi.org/10.3390/en15051600.

Full text
Abstract:
Energy disaggregation (ED), with minimal infrastructure, can create energy awareness and thus promote energy efficiency by providing appliance-level consumption information. However, ED is highly ill-posed and gets complicated with increase in number and type of devices, similarity between devices, measurement errors, etc. To design, test, and benchmark ED algorithms, the availability of open-access energy consumption datasets is crucial. Most datasets in the literature suit data-intensive pattern-based ED algorithms. Recently, optimization-based ED algorithms that only require information regarding the operational states of the devices are being developed. However, the lack of standard datasets and appropriate evaluation metrics is hindering the development of reproducible state-of-the-art optimization-based ED algorithms. Therefore, in this paper, we propose a dataset with multiple instances that are representative of the different challenges posed by ED in practice. Performance indicators to empirically evaluate different optimization-based ED algorithms are summarized. In addition, baseline simulation results of the state-of-the-art optimization-based ED algorithms are presented. The developed dataset, summarization of different metrics, and baseline results are expected to provide a platform for researchers to develop novel optimization-based frameworks, in general, and evolutionary computation-based frameworks in particular to solve ED.
APA, Harvard, Vancouver, ISO, and other styles
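The abstract above frames energy disaggregation as an optimization problem: recover appliance-level on/off states whose rated powers best reconstruct the aggregate meter reading. A minimal brute-force sketch of that framing (the appliance ratings and readings are invented for illustration; real ED algorithms must handle noise, multi-state devices, and far larger search spaces):

```python
from itertools import product

def disaggregate(aggregate, ratings):
    """For each aggregate power reading, enumerate every on/off
    combination of appliances and keep the one whose summed rated
    power best reconstructs the reading."""
    states = []
    for reading in aggregate:
        best = min(
            product((0, 1), repeat=len(ratings)),
            key=lambda s: abs(reading - sum(r * x for r, x in zip(ratings, s))),
        )
        states.append(best)
    return states

# Invented appliance ratings in watts (heater, fridge, lamp) and meter readings.
ratings = [1500, 700, 60]
readings = [2260, 760, 1500, 0]
print(disaggregate(readings, ratings))
# → [(1, 1, 1), (0, 1, 1), (1, 0, 0), (0, 0, 0)]
```

Enumerating all 2^n state combinations is only feasible for a handful of appliances; the paper studies evolutionary and other optimization frameworks precisely because this search space explodes with the number and similarity of devices.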
5

Hendrix, Eligius M. T., and Algirdas Lančinskas. "On Benchmarking Stochastic Global Optimization Algorithms." Informatica 26, no. 4 (January 1, 2015): 649–62. http://dx.doi.org/10.15388/informatica.2015.69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Korošec, Peter, and Tome Eftimov. "Multi-Objective Optimization Benchmarking Using DSCTool." Mathematics 8, no. 5 (May 22, 2020): 839. http://dx.doi.org/10.3390/math8050839.

Full text
Abstract:
By performing data analysis, statistical approaches are highly welcome to explore the data. Nowadays, with the increases in computational power and the availability of big data in different domains, it is not enough to perform exploratory data analysis (descriptive statistics) to obtain some prior insights from the data; it is a requirement to apply higher-level statistics, which also require much greater knowledge from the user to apply properly. One research area where proper usage of statistics is important is multi-objective optimization, where the performance of a newly developed algorithm should be compared with the performances of state-of-the-art algorithms. In multi-objective optimization, we are dealing with two or more usually conflicting objectives, which result in high-dimensional data that needs to be analyzed. In this paper, we present a web-service-based e-Learning tool called DSCTool that can be used for performing a proper statistical analysis for multi-objective optimization. The tool does not require any special statistics knowledge from the user. Its usage and the influence of a proper statistical analysis are shown using data taken from a benchmarking study performed at the 2018 IEEE CEC (Congress on Evolutionary Computation) Competition on Evolutionary Many-Objective Optimization.
APA, Harvard, Vancouver, ISO, and other styles
7

Doerr, Carola, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, and Thomas Bäck. "Benchmarking discrete optimization heuristics with IOHprofiler." Applied Soft Computing 88 (March 2020): 106027. http://dx.doi.org/10.1016/j.asoc.2019.106027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dolan, Elizabeth D., and Jorge J. Moré. "Benchmarking optimization software with performance profiles." Mathematical Programming 91, no. 2 (January 1, 2002): 201–13. http://dx.doi.org/10.1007/s101070100263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liao, Yu-Ching, Chenyun Pan, and Azad Naeemi. "Benchmarking and Optimization of Spintronic Memory Arrays." IEEE Journal on Exploratory Solid-State Computational Devices and Circuits 6, no. 1 (June 2020): 9–17. http://dx.doi.org/10.1109/jxcdc.2020.2999270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Auger, Anne, Nikolaus Hansen, and Marc Schoenauer. "Benchmarking of Continuous Black Box Optimization Algorithms." Evolutionary Computation 20, no. 4 (December 2012): 481. http://dx.doi.org/10.1162/evco_e_00091.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Optimization Benchmarking"

1

Samuelsson, Oscar. "Benchmarking Global Optimization Algorithms for Core Prediction Identification." Thesis, Linköpings universitet, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-61253.

Full text
Abstract:
Mathematical modeling has evolved from being a rare event to becoming a standard approach for investigating complex biological interactions. However, variations and uncertainties in experimental data usually result in uncertain estimates of the parameters of the model. It is possible to draw conclusions from the model despite uncertain parameters by using core predictions. A core prediction is a model property which is valid for all parameter vectors that fit data at an acceptable cost. By validating the core prediction with additional experimental measurements one can draw conclusions about the overall model despite uncertain parameter values. A prerequisite for identifying a core prediction is a global search for all acceptable parameter vectors. Global optimization methods are normally constructed to search for a single optimal parameter vector, but methods searching for several acceptable parameter vectors are required here. In this thesis, two metaheuristic optimization algorithms have been evaluated, namely Simulated annealing and Scatter search. In order to compare their differences, a set of functions has been implemented in Matlab. The Matlab functions include a statistical framework which is used to discard poorly tuned optimization algorithms, five performance measures reflecting the different objectives of locating one or several acceptable parameter vectors, and a number of test functions meant to reflect high-dimensional, multimodal problems. In addition to the test functions, a biological benchmark model is included. The statistical framework has been used to evaluate the performance of the two algorithms with the objective of locating one and several acceptable parameter vectors. For the objective of locating one acceptable parameter vector, the results indicate that Scatter search performed better than Simulated annealing. The results also indicate that different search objectives require differently tuned algorithms. Furthermore, the results show that test functions with a suitable degree of difficulty are not a trivial task to obtain. A verification of the tuned optimization algorithms has been conducted on the benchmark model. The results are somewhat contradictory, and in this specific case it is not possible to claim that good configurations on test functions remain good in real applications.
APA, Harvard, Vancouver, ISO, and other styles
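The thesis abstract above defines a core prediction as a model property that holds for every parameter vector fitting the data at an acceptable cost, which is why a global search for all acceptable vectors is needed rather than a single optimum. A toy sketch of that idea (the quadratic cost model, the candidate grid standing in for a real global search, the threshold, and the tested properties are all invented for illustration):

```python
def acceptable_vectors(cost, candidates, threshold):
    """Global-search stand-in: keep every parameter vector whose
    cost (misfit to data) is within the acceptance threshold."""
    return [p for p in candidates if cost(p) <= threshold]

def is_core_prediction(prop, vectors):
    """A property is a core prediction only if it holds for every
    acceptable parameter vector, not just the single best one."""
    return all(prop(p) for p in vectors)

# Invented toy model: cost is the squared distance to the "data" point (3, 4);
# candidates come from a deterministic grid rather than a real optimizer.
cost = lambda p: (p[0] - 3) ** 2 + (p[1] - 4) ** 2
candidates = [(x * 0.5, y * 0.5) for x in range(13) for y in range(17)]
accepted = acceptable_vectors(cost, candidates, threshold=1.0)

print(is_core_prediction(lambda p: p[0] > 1.5, accepted))  # → True
print(is_core_prediction(lambda p: p[0] > 3.5, accepted))  # → False
```

The first property holds for every acceptable vector and so would be worth validating experimentally; the second fails for some acceptable vectors, so the model cannot support it despite fitting the data.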
2

Ait, Elhara Ouassim. "Stochastic Black-Box Optimization and Benchmarking in Large Dimensions." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS211/document.

Full text
Abstract:
Because of the generally high computational costs that come with large-scale problems, more so on real world problems, the use of benchmarks is a common practice in algorithm design, algorithm tuning or algorithm choice/evaluation. The question is then the forms in which these real-world problems come. Answering this question is generally hard due to the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the commonly encountered difficulties when solving continuous optimization problems. Once the difficulties identified, one can construct relevant benchmark functions that reproduce these difficulties and allow assessing the ability of algorithms to solve them. In the case of large-scale benchmarking, it would be natural and convenient to build on the work that was already done on smaller dimensions, and be able to extend it to larger ones. When doing so, we must take into account the added constraints that come with a large-scale scenario. We need to be able to reproduce, as much as possible, the effects and properties of any part of the benchmark that needs to be replaced or adapted for large-scales. This is done in order for the new benchmarks to remain relevant. It is common to classify the problems, and thus the benchmarks, according to the difficulties they present and properties they possess. It is true that in a black-box scenario, such information (difficulties, properties...) is supposed unknown to the algorithm. However, in a benchmarking setting, this classification becomes important and allows to better identify and understand the shortcomings of a method, and thus make it easier to improve it or alternatively to switch to a more efficient one (one needs to make sure the algorithms are exploiting this knowledge when solving the problems). Thus the importance of identifying the difficulties and properties of the problems of a benchmarking suite and, in our case, preserving them. 
One other question that rises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small dimension problem, it is common to have all variable contribute a fair amount to the fitness value of the solution or, at least, to be in a scenario where all variables need to be optimized in order to reach high quality solutions. This is however not always the case in large-scales; with the increasing number of variables, some of them become redundant or groups of variables can be replaced with smaller groups since it is then increasingly difficult to find a minimalistic representation of a problem. This minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in number of variables is not favorable, or larger numbers of variables and different representations of the same features within a same problem allow a better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially if such algorithms can take advantage of the low effective dimensionality of the problems, or, in a complete black-box scenario, cost little to test for it (low effective dimension) and optimize assuming a small effective dimension. In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions: 1. How to design cheap and yet efficient step-size adaptation mechanism for evolution strategies? 2. How to construct and generalize low effective dimension problems? 3. How to extend a low/medium dimension benchmark to large dimensions while remaining computationally reasonable, non-trivial and preserving the properties of the original problem?
APA, Harvard, Vancouver, ISO, and other styles
3

Bendahmane, El Hachemi. "Introduction de fonctionnalités d'auto-optimisation dans une architecture de selfbenchmarking." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00782233.

Full text
Abstract:
Benchmarking client-server systems involves complex distributed technical infrastructures whose management calls for an autonomic approach. This management relies on a sequence of steps, observation, analysis, and feedback, corresponding to the principle of an autonomic control loop. Earlier work in the field of performance testing has shown how to introduce autonomous testing functionality through self-regulated load injection. The goal of this thesis is to follow this autonomic computing approach by introducing self-optimization functionality, so that reliable and comparable benchmark results can be obtained automatically, implementing all the steps of self-benchmarking. Our contribution is twofold. First, we propose an original algorithm for optimization in a performance-testing context, which aims to reduce the number of candidate solutions to test, given a hypothesis about the shape of the function linking parameter values to measured performance. This algorithm is independent of the system being optimized. It manipulates integer parameters whose values lie within a given interval, with a given value granularity. Second, we present a component-based architectural approach and an organization of the automatic benchmark into several autonomic control loops (saturation detection, load injection, optimization computation), loosely coupled and coordinated through asynchronous publish-subscribe communication. Extending a component-based software framework for self-regulated load injection, we add components to automatically reconfigure and restart the system being optimized. Two series of experiments were carried out to validate our self-optimization setup. The first series concerns an online-shopping web application deployed on a JavaEE application server. The second series concerns clusterSample, an application with three effective tiers (web, business (EJB JOnAS), and database). The three tiers run on separate physical machines.
APA, Harvard, Vancouver, ISO, and other styles
4

Yilmaz, Eftun. "Benchmarking of Optimization Modules for Two Wind Farm Design Software Tools." Thesis, Högskolan på Gotland, Institutionen för kultur, energi och miljö, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-1946.

Full text
Abstract:
Optimization of wind farm layout is an expensive and complex task involving several engineering challenges. The layout of any wind farm directly impacts profitability and return on investment. Several optimization modules integrated with industry wind farm design tools currently attempt to place turbines in locations with good wind resources while adhering to the constraints of a defined objective function. A clear assessment is needed to compare these tools in the wind farm layout design process. However, there is still no clear demonstration of benchmarking and comparison of these software tools, even for simple test cases. This work compares the optimization modules of two commercial software tools, openWind and WindPRO.
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Xi. "Benchmark generation in a new framework /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?IELM%202007%20LI.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Goldberg, Benjamin. "Benchmarking Traffic Control Algorithms on a Packet Switched Network." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1192.

Full text
Abstract:
Traffic congestion has tremendous economic and environmental costs. One way to reduce this congestion is to implement more intelligent traffic light systems. There is significant existing research into different algorithms for controlling traffic lights, but they all use separate systems for performance testing. This paper presents the Rush Hour system, which models a network of roadways and traffic lights as a network of connected routers and endnodes. Several traffic switching algorithms are then tested on the Rush Hour system. As expected, we found that the more intelligent systems were effective at reducing congestion at low and medium levels of traffic. However, they were comparable to more naive algorithms at higher levels of traffic.
APA, Harvard, Vancouver, ISO, and other styles
7

Randau, Simon [Verfasser]. "Benchmarking of SSB, reference cells and optimization of the cathode composite / Simon Randau." Gießen : Universitätsbibliothek, 2021. http://d-nb.info/1236385675/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kumar, Vachan. "Modeling and optimization approaches for benchmarking emerging on-chip and off-chip interconnect technologies." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54280.

Full text
Abstract:
Modeling approaches are developed to optimize emerging on-chip and off-chip electrical interconnect technologies and benchmark them against conventional technologies. While transistor scaling results in an improvement in power and performance, interconnect scaling results in a degradation in performance and electromigration reliability. Although graphene potentially has superior transport properties compared to copper, it is shown that several technology improvements like smooth edges, edge doping, good contacts, and good substrates are essential for graphene to outperform copper in high performance on-chip interconnect applications. However, for low power applications, the low capacitance of graphene results in 31% energy savings compared to copper interconnects, for a fixed performance. Further, for characterization of the circuit parameters of multi-layer graphene, multi-conductor transmission line models that account for an alignment margin and finite width of the contact are developed. Although it is essential to push for an improvement in chip performance by improving on-chip interconnects, devices, and architectures, the system level performance can get severely limited by the bandwidth of off-chip interconnects. As a result, three dimensional integration and airgap interconnects are studied as potential replacements for conventional off-chip interconnects. The key parameters that limit the performance of a 3D IC are identified as the Through Silicon Via (TSV) capacitance, driver resistance, and on-chip wire resistance on the driver side. Further, the impact of on-chip wires on the performance of 3D ICs is shown to be more pronounced at advanced technology nodes and when the TSV diameter is scaled down. Airgap interconnects are shown to improve aggregate bandwidth by 3x to 5x for backplane and Printed Circuit Board (PCB) links, and by 2x for silicon interposer links, at comparable energy consumption.
APA, Harvard, Vancouver, ISO, and other styles
9

Schütze, Lars, and Jeronimo Castrillon. "Analyzing State-of-the-Art Role-based Programming Languages." ACM, 2017. https://tud.qucosa.de/id/qucosa%3A73196.

Full text
Abstract:
With ubiquitous computing, autonomous cars, and cyber-physical systems (CPS), adaptive software becomes more and more important as computing is increasingly context-dependent. Role-based programming has been proposed to enable adaptive software design without the problem of scattering the context-dependent code. Adaptation is achieved by having objects play roles during runtime. With every role, the object's behavior is modified to adapt to the given context. In recent years, many role-based programming languages have been developed. While they greatly differ in the set of supported features, they all incur in large runtime overheads, resulting in inferior performance. The increased variability and expressiveness of the programming languages have a direct impact on the run-time and memory consumption. In this paper we provide a detailed analysis of state-of-the-art role-based programming languages, with emphasis on performance bottlenecks. We also provide insight on how to overcome these problems.
APA, Harvard, Vancouver, ISO, and other styles
10

Bjäreholt, Johan. "RISC-V Compiler Performance:A Comparison between GCC and LLVM/clang." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14659.

Full text
Abstract:
RISC-V is a new open-source instruction set architecture (ISA) that in December 2016 manufactured its first mass-produced processors. It focuses on both efficiency and performance and differs from other open-source architectures by not having a copyleft license, permitting vendors to freely design, manufacture, and sell RISC-V chips without any fees and without having to share their modifications to the reference implementations of the architecture. The goal of this thesis is to evaluate the performance of the GCC and LLVM/clang compilers' support for the RISC-V target and their ability to optimize for the architecture. The performance will be evaluated by executing the CoreMark and Dhrystone benchmarks, both popular industry-standard programs for evaluating performance on embedded processors. They will be run with both the GCC and LLVM/clang compilers at different optimization levels and compared in performance per clock to the ARM architecture, which is mature yet rather similar to RISC-V. The compiler support for the RISC-V target is still in development, and the focus of this thesis will be the current performance differences between the GCC and LLVM compilers on this architecture. The platform we will execute the benchmarks on will be the Freedom E310 processor on the SiFive HiFive1 board for RISC-V and an ARM Cortex-M4 processor by Freescale on the Teensy 3.6 board. The Freedom E310 is almost identical to the reference Berkeley Rocket RISC-V design, and the ARM Cortex-M4 processor has a similar clock speed and is aimed at a similar target audience. The results showed that the -O2 and -O3 optimization levels on GCC for RISC-V performed very well in comparison to our ARM reference.
At the lower -O1 optimization level, at -O0 (which is no optimizations), and at -Os (which is -O0 with optimizations for generating a smaller executable code size), GCC performs much worse than ARM, at 46% of the performance at -O1, 8.2% at -Os, and 9.3% at -O0 on the CoreMark benchmark, with similar results in Dhrystone except at -O1, where it performed as well as ARM. When turning off optimizations (-O0), GCC for RISC-V was 9.2% of the performance of ARM in CoreMark and 11% in Dhrystone, which was unexpected and needs further investigation. LLVM/clang, on the other hand, crashed when trying to compile our CoreMark benchmark, and on Dhrystone the optimization options made a very minor impact on performance, making it 6.0% of the performance of GCC at -O3 and 5.6% of the performance of ARM at -O3, so even with optimizations it was still slower than GCC without optimizations. In conclusion, the performance of RISC-V with the GCC compiler at the higher optimization levels is very good considering how young the RISC-V architecture is. There does seem to be room for improvement at the lower optimization levels, however, which in turn could also possibly increase the performance of the higher optimization levels. With the LLVM/clang compiler, on the other hand, a lot of work needs to be done to make it competitive in both performance and stability with the GCC compiler and other architectures. Why the -O0 optimization level is so considerably slower on RISC-V than on ARM was also very unexpected and needs further investigation.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Optimization Benchmarking"

1

Energy and process optimization and benchmarking of army industrial processes. Champaign, IL: [US Army Corps of Engineers, Engineer Research and Development Center], Construction Engineering Research Laboratory, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Financial and Economic Optimization of Water Main Replacement Programs. American Water Works Association, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ordys, Andrzej W., Damien Uduehi, and Michael A. Johnson, eds. Process Control Performance Assessment: From Theory to Implementation (Advances in Industrial Control). Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Optimization Benchmarking"

1

Young, Jeffrey S. "Benchmarking and Optimization." In Trauma Centers, 177–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-34607-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guihot, Hervé. "Benchmarking and Profiling." In Pro Android Apps Performance Optimization, 163–76. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4000-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Liutao, Wanling Gao, and Yi Jin. "Revisiting Benchmarking Principles and Methodologies for Big Data Benchmarking." In Big Data Benchmarks, Performance Optimization, and Emerging Hardware, 3–9. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29006-5_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Jun. "Case Study: Benchmarking Tools." In Block Trace Analysis and Storage System Optimization, 115–42. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3928-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Han, Rui, Xiaoyi Lu, and Jiangtao Xu. "On Big Data Benchmarking." In Big Data Benchmarks, Performance Optimization, and Emerging Hardware, 3–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13021-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Onodera, Akito, Kazuhiko Komatsu, Soya Fujimoto, Yoko Isobe, Masayuki Sato, and Hiroaki Kobayashi. "Optimization of the Himeno Benchmark for SX-Aurora TSUBASA." In Benchmarking, Measuring, and Optimizing, 127–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71058-3_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pham, Trong-Ton, and Dennis Mintah Djan. "Deep Reinforcement Learning for Auto-optimization of I/O Accelerator Parameters." In Benchmarking, Measuring, and Optimizing, 187–203. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49556-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shcherbina, Oleg, Arnold Neumaier, Djamila Sam-Haroud, Xuan-Ha Vu, and Tuan-Viet Nguyen. "Benchmarking Global Optimization and Constraint Satisfaction Codes." In Global Optimization and Constraint Satisfaction, 211–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39901-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hao, Tianshu, and Ziping Zheng. "The Implementation and Optimization of Matrix Decomposition Based Collaborative Filtering Task on X86 Platform." In Benchmarking, Measuring, and Optimizing, 110–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49556-5_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Köppen, Mario. "On the Benchmarking of Multiobjective Optimization Algorithm." In Lecture Notes in Computer Science, 379–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45224-9_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Optimization Benchmarking"

1

Gallagher, Marcus R. "Black-box optimization benchmarking." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gallagher, Marcus R. "Black-box optimization benchmarking." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mersmann, Olaf, Heike Trautmann, Boris Naujoks, and Claus Weihs. "Benchmarking evolutionary multiobjective optimization algorithms." In 2010 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2010. http://dx.doi.org/10.1109/cec.2010.5586241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stefek, A. "Benchmarking of heuristic optimization methods." In 2011 14th International Conference on Mechatronics. IEEE, 2011. http://dx.doi.org/10.1109/mechatron.2011.5961068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

XIE, A. S., and D. X. LIU. "Inspirations for Optimization based on Benchmarking." In 2017 International Seminar on Artificial Intelligence, Networking and Information Technology (ANIT 2017). Paris, France: Atlantis Press, 2018. http://dx.doi.org/10.2991/anit-17.2018.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Valin, Pierre, and David Boily. "Truncated Dempster-Shafer optimization and benchmarking." In AeroSense 2000, edited by Belur V. Dasarathy. SPIE, 2000. http://dx.doi.org/10.1117/12.381636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Doerr, Carola, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, and Thomas Bäck. "Benchmarking discrete optimization heuristics with IOHprofiler." In GECCO '19: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3319619.3326810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Eftimov, Tome, and Peter Korošec. "Robust benchmarking for multi-objective optimization." In GECCO '21: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3449726.3463299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Itoko, Toshinari, and Rudy Raymond. "Sampling Strategy Optimization for Randomized Benchmarking." In 2021 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE, 2021. http://dx.doi.org/10.1109/qce52317.2021.00036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pošik, Petr. "BBOB-benchmarking the DIRECT global optimization algorithm." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570323.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Optimization Benchmarking"

1

Dolan, E. D., and J. J. Moré. Benchmarking optimization software with COPS. Office of Scientific and Technical Information (OSTI), January 2001. http://dx.doi.org/10.2172/775270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dolan, E. D., J. J. Moré, and T. S. Munson. Benchmarking optimization software with COPS 3.0. Office of Scientific and Technical Information (OSTI), May 2004. http://dx.doi.org/10.2172/834714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Parekh, Ojas D., Jeremy D. Wendt, Luke Shulenburger, Andrew J. Landahl, Jonathan Edward Moussa, and John B. Aidun. Benchmarking Adiabatic Quantum Optimization for Complex Network Analysis. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1459086.

Full text
APA, Harvard, Vancouver, ISO, and other styles