Academic literature on the topic 'Optimization Benchmarking'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Optimization Benchmarking.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Optimization Benchmarking"
Rojas-Labanda, Susana, and Mathias Stolpe. "Benchmarking optimization solvers for structural topology optimization." Structural and Multidisciplinary Optimization 52, no. 3 (May 17, 2015): 527–47. http://dx.doi.org/10.1007/s00158-015-1250-z.
Tedford, Nathan P., and Joaquim R. R. A. Martins. "Benchmarking multidisciplinary design optimization algorithms." Optimization and Engineering 11, no. 1 (March 20, 2009): 159–83. http://dx.doi.org/10.1007/s11081-009-9082-6.
Moré, Jorge J., and Stefan M. Wild. "Benchmarking Derivative-Free Optimization Algorithms." SIAM Journal on Optimization 20, no. 1 (January 2009): 172–91. http://dx.doi.org/10.1137/080724083.
Ajani, Oladayo S., Abhishek Kumar, Rammohan Mallipeddi, Swagatam Das, and Ponnuthurai Nagaratnam Suganthan. "Benchmarking Optimization-Based Energy Disaggregation Algorithms." Energies 15, no. 5 (February 22, 2022): 1600. http://dx.doi.org/10.3390/en15051600.
Hendrix, Eligius M. T., and Algirdas Lančinskas. "On Benchmarking Stochastic Global Optimization Algorithms." Informatica 26, no. 4 (January 1, 2015): 649–62. http://dx.doi.org/10.15388/informatica.2015.69.
Korošec, Peter, and Tome Eftimov. "Multi-Objective Optimization Benchmarking Using DSCTool." Mathematics 8, no. 5 (May 22, 2020): 839. http://dx.doi.org/10.3390/math8050839.
Doerr, Carola, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, and Thomas Bäck. "Benchmarking discrete optimization heuristics with IOHprofiler." Applied Soft Computing 88 (March 2020): 106027. http://dx.doi.org/10.1016/j.asoc.2019.106027.
Dolan, Elizabeth D., and Jorge J. Moré. "Benchmarking optimization software with performance profiles." Mathematical Programming 91, no. 2 (January 1, 2002): 201–13. http://dx.doi.org/10.1007/s101070100263.
Liao, Yu-Ching, Chenyun Pan, and Azad Naeemi. "Benchmarking and Optimization of Spintronic Memory Arrays." IEEE Journal on Exploratory Solid-State Computational Devices and Circuits 6, no. 1 (June 2020): 9–17. http://dx.doi.org/10.1109/jxcdc.2020.2999270.
Auger, Anne, Nikolaus Hansen, and Marc Schoenauer. "Benchmarking of Continuous Black Box Optimization Algorithms." Evolutionary Computation 20, no. 4 (December 2012): 481. http://dx.doi.org/10.1162/evco_e_00091.
Full textDissertations / Theses on the topic "Optimization Benchmarking"
Samuelsson, Oscar. "Benchmarking Global Optimization Algorithms for Core Prediction Identification." Thesis, Linköpings universitet, Reglerteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-61253.
Ait Elhara, Ouassim. "Stochastic Black-Box Optimization and Benchmarking in Large Dimensions." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS211/document.
Full textBecause of the generally high computational costs that come with large-scale problems, more so on real world problems, the use of benchmarks is a common practice in algorithm design, algorithm tuning or algorithm choice/evaluation. The question is then the forms in which these real-world problems come. Answering this question is generally hard due to the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the commonly encountered difficulties when solving continuous optimization problems. Once the difficulties identified, one can construct relevant benchmark functions that reproduce these difficulties and allow assessing the ability of algorithms to solve them. In the case of large-scale benchmarking, it would be natural and convenient to build on the work that was already done on smaller dimensions, and be able to extend it to larger ones. When doing so, we must take into account the added constraints that come with a large-scale scenario. We need to be able to reproduce, as much as possible, the effects and properties of any part of the benchmark that needs to be replaced or adapted for large-scales. This is done in order for the new benchmarks to remain relevant. It is common to classify the problems, and thus the benchmarks, according to the difficulties they present and properties they possess. It is true that in a black-box scenario, such information (difficulties, properties...) is supposed unknown to the algorithm. However, in a benchmarking setting, this classification becomes important and allows to better identify and understand the shortcomings of a method, and thus make it easier to improve it or alternatively to switch to a more efficient one (one needs to make sure the algorithms are exploiting this knowledge when solving the problems). Thus the importance of identifying the difficulties and properties of the problems of a benchmarking suite and, in our case, preserving them. 
One other question that rises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small dimension problem, it is common to have all variable contribute a fair amount to the fitness value of the solution or, at least, to be in a scenario where all variables need to be optimized in order to reach high quality solutions. This is however not always the case in large-scales; with the increasing number of variables, some of them become redundant or groups of variables can be replaced with smaller groups since it is then increasingly difficult to find a minimalistic representation of a problem. This minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in number of variables is not favorable, or larger numbers of variables and different representations of the same features within a same problem allow a better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially if such algorithms can take advantage of the low effective dimensionality of the problems, or, in a complete black-box scenario, cost little to test for it (low effective dimension) and optimize assuming a small effective dimension. In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions: 1. How to design cheap and yet efficient step-size adaptation mechanism for evolution strategies? 2. How to construct and generalize low effective dimension problems? 3. How to extend a low/medium dimension benchmark to large dimensions while remaining computationally reasonable, non-trivial and preserving the properties of the original problem?
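The notion of a low effective dimension problem discussed in this abstract can be illustrated with a minimal sketch, not taken from the thesis itself: a sphere function embedded in a high-dimensional space so that only a random subset of coordinates influences the fitness (the helper name `make_low_effective_dim_problem` is hypothetical, chosen here for illustration):

```python
import random

def make_low_effective_dim_problem(full_dim, effective_dim, seed=0):
    """Sphere function in `full_dim` variables whose fitness depends
    only on `effective_dim` randomly chosen coordinates; the remaining
    variables are redundant, mimicking low effective dimensionality."""
    rng = random.Random(seed)
    active = rng.sample(range(full_dim), effective_dim)  # effective variables

    def f(x):
        assert len(x) == full_dim
        # Redundant coordinates contribute nothing to the fitness.
        return sum(x[i] ** 2 for i in active)

    return f

f = make_low_effective_dim_problem(full_dim=1000, effective_dim=10, seed=42)
print(f([0.0] * 1000))  # 0.0: the optimum
print(f([1.0] * 1000))  # 10.0: only the 10 effective coordinates contribute
```

A benchmark built this way keeps the base function's properties on the effective subspace while letting one study whether an algorithm can detect and exploit the redundant variables; rotating the subspace (so it is not axis-aligned) would make the construction harder to exploit trivially.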
Bendahmane, El Hachemi. "Introduction de fonctionnalités d'auto-optimisation dans une architecture de selfbenchmarking." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00782233.
Yilmaz, Eftun. "Benchmarking of Optimization Modules for Two Wind Farm Design Software Tools." Thesis, Högskolan på Gotland, Institutionen för kultur, energi och miljö, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-1946.
Li, Xi. "Benchmark generation in a new framework." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?IELM%202007%20LI.
Goldberg, Benjamin. "Benchmarking Traffic Control Algorithms on a Packet Switched Network." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1192.
Randau, Simon. "Benchmarking of SSB, reference cells and optimization of the cathode composite." Gießen: Universitätsbibliothek, 2021. http://d-nb.info/1236385675/34.
Full textKumar, Vachan. "Modeling and optimization approaches for benchmarking emerging on-chip and off-chip interconnect technologies." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54280.
Full textSchütze, Lars, and Jeronimo Castrillon. "Analyzing State-of-the-Art Role-based Programming Languages." ACM, 2017. https://tud.qucosa.de/id/qucosa%3A73196.
Full textBjäreholt, Johan. "RISC-V Compiler Performance:A Comparison between GCC and LLVM/clang." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14659.
Full textBooks on the topic "Optimization Benchmarking"
Energy and process optimization and benchmarking of army industrial processes. Champaign, IL: [US Army Corps of Engineers, Engineer Research and Development Center], Construction Engineering Research Laboratory, 2006.
Financial and Economic Optimization of Water Main Replacement Programs. American Water Works Association, 2001.
Ordys, Andrzej W., Damien Uduehi, and Michael A. Johnson, eds. Process Control Performance Assessment: From Theory to Implementation (Advances in Industrial Control). Springer, 2007.
Book chapters on the topic "Optimization Benchmarking"
Young, Jeffrey S. "Benchmarking and Optimization." In Trauma Centers, 177–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-34607-2_15.
Guihot, Hervé. "Benchmarking and Profiling." In Pro Android Apps Performance Optimization, 163–76. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4000-6_6.
Zhao, Liutao, Wanling Gao, and Yi Jin. "Revisiting Benchmarking Principles and Methodologies for Big Data Benchmarking." In Big Data Benchmarks, Performance Optimization, and Emerging Hardware, 3–9. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29006-5_1.
Xu, Jun. "Case Study: Benchmarking Tools." In Block Trace Analysis and Storage System Optimization, 115–42. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3928-5_5.
Han, Rui, Xiaoyi Lu, and Jiangtao Xu. "On Big Data Benchmarking." In Big Data Benchmarks, Performance Optimization, and Emerging Hardware, 3–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13021-7_1.
Onodera, Akito, Kazuhiko Komatsu, Soya Fujimoto, Yoko Isobe, Masayuki Sato, and Hiroaki Kobayashi. "Optimization of the Himeno Benchmark for SX-Aurora TSUBASA." In Benchmarking, Measuring, and Optimizing, 127–43. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71058-3_8.
Pham, Trong-Ton, and Dennis Mintah Djan. "Deep Reinforcement Learning for Auto-optimization of I/O Accelerator Parameters." In Benchmarking, Measuring, and Optimizing, 187–203. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49556-5_19.
Shcherbina, Oleg, Arnold Neumaier, Djamila Sam-Haroud, Xuan-Ha Vu, and Tuan-Viet Nguyen. "Benchmarking Global Optimization and Constraint Satisfaction Codes." In Global Optimization and Constraint Satisfaction, 211–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39901-8_16.
Hao, Tianshu, and Ziping Zheng. "The Implementation and Optimization of Matrix Decomposition Based Collaborative Filtering Task on X86 Platform." In Benchmarking, Measuring, and Optimizing, 110–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49556-5_11.
Köppen, Mario. "On the Benchmarking of Multiobjective Optimization Algorithm." In Lecture Notes in Computer Science, 379–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45224-9_53.
Full textConference papers on the topic "Optimization Benchmarking"
Gallagher, Marcus R. "Black-box optimization benchmarking." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570318.
Gallagher, Marcus R. "Black-box optimization benchmarking." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570332.
Mersmann, Olaf, Heike Trautmann, Boris Naujoks, and Claus Weihs. "Benchmarking evolutionary multiobjective optimization algorithms." In 2010 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2010. http://dx.doi.org/10.1109/cec.2010.5586241.
Stefek, A. "Benchmarking of heuristic optimization methods." In 2011 14th International Conference on Mechatronics. IEEE, 2011. http://dx.doi.org/10.1109/mechatron.2011.5961068.
Xie, A. S., and D. X. Liu. "Inspirations for Optimization based on Benchmarking." In 2017 International Seminar on Artificial Intelligence, Networking and Information Technology (ANIT 2017). Paris, France: Atlantis Press, 2018. http://dx.doi.org/10.2991/anit-17.2018.30.
Valin, Pierre, and David Boily. "Truncated Dempster-Shafer optimization and benchmarking." In AeroSense 2000, edited by Belur V. Dasarathy. SPIE, 2000. http://dx.doi.org/10.1117/12.381636.
Doerr, Carola, Furong Ye, Naama Horesh, Hao Wang, Ofer M. Shir, and Thomas Bäck. "Benchmarking discrete optimization heuristics with IOHprofiler." In GECCO '19: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3319619.3326810.
Eftimov, Tome, and Peter Korošec. "Robust benchmarking for multi-objective optimization." In GECCO '21: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3449726.3463299.
Itoko, Toshinari, and Rudy Raymond. "Sampling Strategy Optimization for Randomized Benchmarking." In 2021 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE, 2021. http://dx.doi.org/10.1109/qce52317.2021.00036.
Pošík, Petr. "BBOB-benchmarking the DIRECT global optimization algorithm." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570323.
Full textReports on the topic "Optimization Benchmarking"
Dolan, E. D., and J. J. Moré. Benchmarking optimization software with COPS. Office of Scientific and Technical Information (OSTI), January 2001. http://dx.doi.org/10.2172/775270.
Dolan, E. D., J. J. Moré, and T. S. Munson. Benchmarking optimization software with COPS 3.0. Office of Scientific and Technical Information (OSTI), May 2004. http://dx.doi.org/10.2172/834714.
Parekh, Ojas D., Jeremy D. Wendt, Luke Shulenburger, Andrew J. Landahl, Jonathan Edward Moussa, and John B. Aidun. Benchmarking Adiabatic Quantum Optimization for Complex Network Analysis. Office of Scientific and Technical Information (OSTI), April 2015. http://dx.doi.org/10.2172/1459086.