Journal articles on the topic 'Optimization algorithms'

Consult the top 50 journal articles for your research on the topic 'Optimization algorithms.'

1

Celik, Yuksel, and Erkan Ulker. "An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization." Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/370172.

Abstract:
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a type of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding a Levy flight to the queen's mating flight and a neighborhood search to improve the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
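The Levy flight the abstract mentions produces heavy-tailed step lengths, mixing many small moves with occasional long jumps. A minimal sketch using Mantegna's algorithm follows; it is an illustrative, generic Levy operator, and `levy_move` with its `scale` parameter is a hypothetical helper, not the paper's exact update rule.

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-flight step length via Mantegna's algorithm."""
    # Mantegna's sigma for the numerator Gaussian
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(position, best, scale=0.01, beta=1.5):
    """Perturb a candidate relative to the best solution with Levy steps
    (hypothetical helper illustrating a Levy-based mating-flight move)."""
    return [x + scale * levy_step(beta) * (x - b) for x, b in zip(position, best)]
```

Because the step distribution is heavy-tailed, most moves stay local while rare large jumps help the queen's flight escape poor regions.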
2

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning." International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract:
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential requirement in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms due to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm uses uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during the evolutionary stages. The superiority of our algorithm in uniformity and objective-function balance was verified through comparative analysis with 13 other algorithms: eight traditional multiobjective optimization algorithms, three machine-learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. These comprehensive experiments show that our algorithm outperforms the existing algorithms in these areas.
3

Wen, Xiaodong, Xiangdong Liu, Cunhui Yu, Haoning Gao, Jing Wang, Yongji Liang, Jiangli Yu, and Yan Bai. "IOOA: A multi-strategy fusion improved Osprey Optimization Algorithm for global optimization." Electronic Research Archive 32, no. 3 (2024): 2033–74. http://dx.doi.org/10.3934/era.2024093.

Abstract:
With the widespread application of metaheuristic algorithms in engineering and scientific research, finding algorithms with efficient global search capabilities and precise local search performance has become a hot research topic. The osprey optimization algorithm (OOA), first proposed in 2023, is characterized by its simple structure and strong optimization capability. However, practical tests have revealed that OOA encounters issues common to metaheuristic algorithms, such as a tendency to fall into local optima and reduced population diversity in the later iterations. To address these issues, a multi-strategy fusion improved osprey optimization algorithm (IOOA) is proposed. First, the characteristics of various chaotic mappings were explored, and Circle chaotic mapping was adopted in place of pseudo-random numbers for population initialization, increasing initial population diversity and improving the quality of initial solutions. Second, a dynamically adjustable elite guidance mechanism was proposed to adjust the position-updating method according to the stage of the iteration, ensuring the algorithm maintains good global search capability while significantly increasing convergence speed. Lastly, a dynamic chaotic weight factor was designed and applied in the exploitation stage of the original algorithm to enhance local search capability and improve convergence accuracy. To verify the effectiveness and practical engineering applicability of IOOA, simulation experiments were conducted using 21 benchmark test functions and the CEC-2022 benchmark functions, and IOOA was applied to an LSTM power-load forecasting problem as well as two engineering design problems. The experimental results show that IOOA possesses outstanding global optimization performance on complex optimization problems and broad applicability in practical engineering.
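Chaotic-map initialization of the kind this abstract describes can be sketched as follows. The Circle map recurrence and its constants (`a`, `b`, seed `x0`) are common choices from the chaotic-maps literature, assumed here rather than taken from the paper.

```python
import math

def circle_map_population(pop_size, dim, lower, upper, x0=0.7, a=0.5, b=0.2):
    """Initialize a population with the Circle chaotic map instead of
    pseudo-random numbers: x_{k+1} = (x_k + b - (a/(2*pi)) * sin(2*pi*x_k)) mod 1.
    The chaotic sequence tends to cover [0, 1) more evenly than a PRNG."""
    x = x0
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
            # map the chaotic value in [0, 1) onto the search bounds
            individual.append(lower + x * (upper - lower))
        population.append(individual)
    return population
```

A more evenly spread initial population raises the chance that at least some individuals start near the basin of the global optimum.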
4

Kim, Minsu, Areum Han, Jaewon Lee, Sunghyun Cho, Il Moon, and Jonggeol Na. "Comparison of Derivative-Free Optimization: Energy Optimization of Steam Methane Reforming Process." International Journal of Energy Research 2023 (June 3, 2023): 1–20. http://dx.doi.org/10.1155/2023/8868540.

Abstract:
In modern chemical engineering, various derivative-free optimization (DFO) studies have been conducted to identify operating conditions that maximize energy efficiency for efficient operation of processes. Although DFO algorithm selection is an essential task that leads to successful designs, it is a nonintuitive task because of the uncertain performance of the algorithms. In particular, when the system evaluation cost or computational load is high (e.g., density functional theory and computational fluid dynamics), selecting an algorithm that quickly converges to the near-global optimum at the early stage of optimization is more important. In this study, we compare the early-stage optimization performance of 12 algorithms. The performance of deterministic global search algorithms, global model-based search algorithms, metaheuristic algorithms, and Bayesian optimization is compared on benchmark problems and analyzed by problem type and number of variables. Furthermore, we apply all algorithms to the energy process optimization that maximizes the thermal efficiency of the steam methane reforming (SMR) process for hydrogen production. In this application, we identified a hidden constraint based on real-world operations and addressed it using a penalty function. Bayesian optimization explores the design space most efficiently by learning from the infeasible regions. As a result, we observed a 12.9% improvement in thermal efficiency compared to the base case and a 7% improvement compared to the lowest-performing algorithm.
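Penalizing a hidden constraint, as this abstract describes, usually means wrapping the objective so that evaluations that fail (for example, a simulation that does not converge) return a large penalty instead of crashing the optimizer. A minimal sketch, with the constant penalty value as an assumption:

```python
def penalized(objective, feasible, penalty=1e6):
    """Wrap an objective so infeasible points return a large penalty,
    steering derivative-free optimizers back toward the feasible region."""
    def wrapped(x):
        if not feasible(x):
            return penalty          # constant penalty for a hidden constraint
        return objective(x)
    return wrapped

# toy example: minimize x^2, but points with x < 0 are "infeasible"
f = penalized(lambda x: x * x, lambda x: x >= 0)
```

Model-based methods such as Bayesian optimization then learn the penalized region's shape from these evaluations and avoid re-sampling it.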
5

Priyadarshini, Ishaani. "Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems." Biomimetics 9, no. 3 (February 21, 2024): 130. http://dx.doi.org/10.3390/biomimetics9030130.

Abstract:
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested against a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets such as diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its mechanism and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems are highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving.
6

Rao, Xiong, Run Du, Wenming Cheng, and Yi Yang. "Modified proportional topology optimization algorithm for multiple optimization problems." Mechanics 30, no. 1 (February 23, 2024): 36–45. http://dx.doi.org/10.5755/j02.mech.34367.

Abstract:
Three modified proportional topology optimization (MPTO) algorithms, named MPTOc, MPTOs, and MPTOm, are presented in this paper. MPTOc addresses the minimum compliance problem with a volume constraint, MPTOs solves the minimum volume fraction problem under a stress constraint, and MPTOm tackles the minimum volume fraction problem under compliance and stress constraints. To overcome the shortcomings of the original proportional topology optimization (PTO) algorithm and improve its overall performance, the proposed algorithms modify the material interpolation scheme and introduce a Heaviside threshold function into PTO. To confirm the effectiveness and superiority of the presented algorithms, multiple optimization problems for the classical MBB beam are solved, and the original PTO algorithm is compared with the new algorithms. Numerical examples show that MPTOc, MPTOs, and MPTOm have distinct advantages over PTO in convergence efficiency and in obtaining a distinct topology without redundancy. Moreover, MPTOc has the fastest convergence among these algorithms and attains the smallest (best) compliance value. The new algorithms are also superior to PTO in suppressing gray-scale elements.
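Heaviside thresholding in topology optimization commonly means a smoothed tanh projection that pushes intermediate (gray) densities toward 0 or 1; the paper's exact function is not given here, so the following sketch assumes that standard form, with illustrative sharpness `beta` and threshold `eta`:

```python
import math

def heaviside_project(rho, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection of an element density rho in [0, 1]:

        rho_bar = (tanh(beta*eta) + tanh(beta*(rho - eta)))
                  / (tanh(beta*eta) + tanh(beta*(1 - eta)))

    Larger beta sharpens the threshold, suppressing gray-scale elements."""
    num = math.tanh(beta * eta) + math.tanh(beta * (rho - eta))
    den = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return num / den
```

With `eta = 0.5` the projection fixes 0, 0.5, and 1, while densities above the threshold (e.g. 0.9) are pushed very close to solid material.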
7

Gireesha, B. "A Literature Survey on Artificial Swarm Intelligence based Optimization Techniques." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 455. http://dx.doi.org/10.14419/ijet.v7i4.5.20205.

Abstract:
For the past few decades, optimization techniques have played a key role in engineering and technological applications and are known for solving modern engineering problems. Among the various optimization techniques, heuristic and meta-heuristic algorithms have proved to be efficient. In this paper, an effort is made to survey techniques that are commonly used in engineering applications. A basic overview of four such optimization algorithms, the Artificial Bee Colony (ABC) algorithm, Ant Colony Optimization (ACO), the Firefly Algorithm (FFA), and Particle Swarm Optimization (PSO), is presented, and the most suitable fitness functions and their numerical expressions are discussed.
8

Acherjee, Bappa, Debanjan Maity, and Arunanshu S. Kuar. "Ultrasonic Machining Process Optimization by Cuckoo Search and Chicken Swarm Optimization Algorithms." International Journal of Applied Metaheuristic Computing 11, no. 2 (April 2020): 1–26. http://dx.doi.org/10.4018/ijamc.2020040101.

Abstract:
The ultrasonic machining (USM) process has been analyzed in the present study to obtain the desired process responses by optimizing machining parameters using cuckoo search (CS) and chicken swarm optimization (CSO), two powerful nature-inspired, population- and swarm-intelligence-based metaheuristic algorithms. The CS and CSO algorithms have been compared with other non-conventional optimization techniques in terms of optimal results, convergence, accuracy, and computational time. It is found that the CS and CSO algorithms predict superior single- and multi-objective optimization results compared with the gravitational search algorithm (GSA), genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC) algorithms, and give exactly the same results as predicted by the fireworks algorithm (FWA). The CS algorithm outperforms all the other algorithms, namely the CSO, FWA, GSA, GA, PSO, ACO, and ABC algorithms, in terms of mean computational time, whereas the CSO algorithm outperforms all other algorithms except CS and GSA.
9

Alfarhisi, Zikrie Pramudia, Hadi Suyono, and Fakhriy Hario Partiansyah. "4G LTE Network Coverage Optimization Using Metaheuristic Approach." International Journal of Computer Applications Technology and Research 10, no. 01 (January 1, 2021): 010–13. http://dx.doi.org/10.7753/ijcatr1001.1003.

Abstract:
The main focus of this paper is to optimize the coverage of each 4G LTE network cell within the service area. Many algorithms can be applied to determine the optimal 4G LTE coverage area, including deterministic and heuristic approaches. The deterministic approach can solve the optimization problem accurately but requires more resources and is time-consuming in determining the convergence parameters. Heuristic approaches were therefore introduced to overcome this drawback. The methods used here are the Differential Evolution Algorithm (DEA) and the Adaptive Mutation Genetic Algorithm (AMGA), both categorized as metaheuristic approaches. The DEA and AMGA algorithms have been widely used to solve combinatorial problems, including network optimization. In network optimization, coverage is strongly related to two objectives: reducing the black-spot area and decreasing the overlapping coverage areas. Coverage overlap is a condition in which some cell sites in an area overlap, leading to handoffs and inefficient network management. This research aims to obtain optimal 4G LTE network coverage and reduce the overlapping coverage areas through effective e-Node B arrangements using the DEA and AMGA algorithms. The simulation results showed that the DEA algorithm's coverage effectiveness was 23.4%, and the AMGA algorithm's was 16.32%.
10

Si, Binghui, Feng Liu, and Yanxia Li. "Metamodel-Based Hyperparameter Optimization of Optimization Algorithms in Building Energy Optimization." Buildings 13, no. 1 (January 9, 2023): 167. http://dx.doi.org/10.3390/buildings13010167.

Abstract:
Building energy optimization (BEO) is a promising technique to achieve energy efficient designs. The efficacy of optimization algorithms is imperative for the BEO technique and is significantly dependent on the algorithm hyperparameters. Currently, studies focusing on algorithm hyperparameters are scarce, and common agreement on how to set their values, especially for BEO problems, is still lacking. This study proposes a metamodel-based methodology for hyperparameter optimization of optimization algorithms applied in BEO. The aim is to maximize the algorithmic efficacy and avoid the failure of the BEO technique because of improper algorithm hyperparameter settings. The method consists of three consecutive steps: constructing the specific BEO problem, developing an ANN-trained metamodel of the problem, and optimizing algorithm hyperparameters with nondominated sorting genetic algorithm II (NSGA-II). To verify the validity, 15 benchmark BEO problems with different properties, i.e., five building models and three design variable categories, were constructed for numerical experiments. For each problem, the hyperparameters of four commonly used algorithms, i.e., the genetic algorithm (GA), the particle swarm optimization (PSO) algorithm, simulated annealing (SA), and the multi-objective genetic algorithm (MOGA), were optimized. Results demonstrated that the MOGA benefited the most from hyperparameter optimization in terms of the quality of the obtained optimum, while PSO benefited the most in terms of the computing time.
11

Arıcı, Ferda Nur, and Ersin Kaya. "Comparison of Meta-heuristic Algorithms on Benchmark Functions." Academic Perspective Procedia 2, no. 3 (November 22, 2019): 508–17. http://dx.doi.org/10.33793/acperpro.02.03.41.

Abstract:
Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval. The algorithms that solve optimization problems are called optimization algorithms. In the literature, there are many optimization algorithms with different characteristics, and they can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA, and particle swarm optimization - PSO) were used. These six algorithms were run on the CEC'17 test functions. According to the experimental results, the algorithms were compared and their performances evaluated.
12

Cai, Cui-Cui, Mao-Sheng Fu, Xian-Meng Meng, Qi-Jian Wang, and Yue-Qin Wang. "Modified Harris Hawks Optimization Algorithm with Multi-strategy for Global Optimization Problem." 電腦學刊 34, no. 6 (December 2023): 091–105. http://dx.doi.org/10.53106/199115992023123406007.

Abstract:
As a novel metaheuristic algorithm, the Harris Hawks Optimization (HHO) algorithm has excellent search capability. Like other metaheuristic algorithms, however, HHO has low convergence accuracy and easily becomes trapped in local optima when dealing with complex optimization problems. A modified Harris Hawks optimization (MHHO) algorithm with multiple strategies is presented to overcome this defect. First, chaotic mapping is used for population initialization to select appropriate initial positions. Then, a novel nonlinear escape-energy update strategy is presented to control the transition between the algorithm's phases. Finally, a nonlinear control strategy is implemented to further improve the algorithm's efficiency. Experimental results on benchmark functions indicate that the MHHO algorithm outperforms other algorithms. In addition, to validate its performance on engineering problems, the proposed algorithm is applied to an indoor visible-light positioning system, where it achieves high-precision positioning.
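In HHO, the escape energy E decides the phase: |E| ≥ 1 triggers exploration and |E| < 1 exploitation. The sketch below contrasts standard HHO's linear decay with one plausible nonlinear schedule; the cosine decay is an illustrative assumption, not MHHO's exact formula.

```python
import math
import random

def escape_energy_linear(t, T):
    """Standard HHO escape energy: |E| decays linearly from up to 2 down to 0."""
    e0 = random.uniform(-1.0, 1.0)          # initial energy in [-1, 1]
    return 2.0 * e0 * (1.0 - t / T)

def escape_energy_nonlinear(t, T):
    """A nonlinear schedule of the kind MHHO-style variants use: decay
    slowly early (favoring exploration), faster late (exploitation).
    This cosine form is an assumed example, not the paper's formula."""
    e0 = random.uniform(-1.0, 1.0)
    return 2.0 * e0 * math.cos((math.pi / 2) * (t / T))

# |E| >= 1 -> exploration phase; |E| < 1 -> exploitation phase
```

Keeping |E| large for longer in early iterations gives the hawks more time to explore before the exploitation phase narrows the search.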
13

Zou, Tingting, and Changyu Wang. "Adaptive Relative Reflection Harris Hawks Optimization for Global Optimization." Mathematics 10, no. 7 (April 2, 2022): 1145. http://dx.doi.org/10.3390/math10071145.

Abstract:
The Harris Hawks optimization (HHO) is a population-based metaheuristic algorithm; however, it suffers from low diversity and premature convergence on certain problems. This paper proposes an adaptive relative reflection HHO (ARHHO), which increases the diversity of standard HHO, alleviates stagnation in local optima, and improves the search accuracy of the algorithm. The main features of the algorithm are a nonlinear escape energy, adaptive weights, and the combination of adaptive relative reflection with the HHO algorithm. Furthermore, we derive the computational complexity of the ARHHO algorithm. Finally, the performance of our algorithm is evaluated by comparison with other well-known metaheuristic algorithms on 23 benchmark problems. Experimental results show that our algorithm performs better than the compared algorithms on most of the benchmark functions.
14

Hao, Shangqing, Xuewen Wang, Jiacheng Xie, and Zhaojian Yang. "Rigid framework section parameter optimization and optimization algorithm research." Transactions of the Canadian Society for Mechanical Engineering 43, no. 3 (September 1, 2019): 398–404. http://dx.doi.org/10.1139/tcsme-2018-0085.

Abstract:
This article compares the optimization algorithms included with the ANSYS software for optimizing the dimensions of a large steel framework to minimize weight while maintaining stiffness. A finite element model of the structure was prepared, and the section parameters were optimized using the sub-problem and first-order algorithms, reducing the weight of the structure by 33.8%. The sub-problem and first-order algorithms are explained in terms of their rationale, iteration method, and convergence criterion, and the two are compared on the optimized result. The results show that the sub-problem algorithm is faster and can control the overall design space, while the first-order algorithm is more precise.
15

Liang, Jianhui, Lifang Wang, and Miao Ma. "An Adaptive Dual-Population Collaborative Chicken Swarm Optimization Algorithm for High-Dimensional Optimization." Biomimetics 8, no. 2 (May 19, 2023): 210. http://dx.doi.org/10.3390/biomimetics8020210.

Abstract:
With the development of science and technology, many optimization problems in real life have developed into high-dimensional optimization problems. The meta-heuristic optimization algorithm is regarded as an effective method to solve high-dimensional optimization problems. However, considering that traditional meta-heuristic optimization algorithms generally have problems such as low solution accuracy and slow convergence speed when solving high-dimensional optimization problems, an adaptive dual-population collaborative chicken swarm optimization (ADPCCSO) algorithm is proposed in this paper, which provides a new idea for solving high-dimensional optimization problems. First, in order to balance the algorithm's search abilities in terms of breadth and depth, the value of parameter G is given by an adaptive dynamic adjustment method. Second, a foraging-behavior-improvement strategy is utilized to improve the algorithm's solution accuracy and depth-optimization ability. Third, the artificial fish swarm algorithm (AFSA) is introduced to construct a dual-population collaborative optimization strategy based on chicken swarms and artificial fish swarms, so as to improve the algorithm's ability to jump out of local extrema. Simulation experiments on 17 benchmark functions preliminarily show that the ADPCCSO algorithm is superior to swarm-intelligence algorithms such as the artificial fish swarm algorithm (AFSA), the artificial bee colony (ABC) algorithm, and the particle swarm optimization (PSO) algorithm in terms of solution accuracy and convergence performance. In addition, the ADPCCSO algorithm is also applied to the parameter estimation problem of the Richards model to further verify its performance.
16

Manikandan, K., and B. Sudhakar. "Hybrid Optimization Algorithm for Multi-level Image Thresholding Using Salp Swarm Optimization Algorithm and Ant Colony Optimization." International Journal of Electrical and Electronic Engineering & Telecommunications 12, no. 6 (2023): 411–23. http://dx.doi.org/10.18178/ijeetc.12.6.411-423.

Abstract:
Identifying optimal thresholds for multi-level thresholding in image segmentation is a challenging process. An efficient optimization algorithm is required to find the optimal thresholds, and various nature-inspired, evolutionary optimization algorithms have been presented by the research community. To improve performance in finding the optimal threshold value, minimize the error, and reduce the search time, a hybrid optimization algorithm is presented in this work using salp swarm optimization and the ant colony optimization algorithm. The ant colony optimization algorithm is used to enhance the exploration and exploitation characteristics of salp swarm optimization in finding the optimal thresholds for a given image. Experiments on standard images validate the proposed model's performance in comparison with traditional optimization algorithms such as moth flame optimization, the whale optimization algorithm, grey wolf optimization, artificial bee colony, and bee foraging optimization. The proposed hybrid optimization outperformed the traditional optimization algorithms on all parameters and provides better optimal thresholds for the given input image.
17

Okindo, Geoffrey, George Kamucha, and Nicholas Oyie. "Dynamic Optimization in 5G Network Slices: A Comparative Study of Whale Optimization, Particle Swarm Optimization, and Genetic Algorithm." International Journal of Electrical and Electronics Research 12, no. 3 (July 30, 2024): 849–62. http://dx.doi.org/10.37391/ijeer.120316.

Abstract:
This study presents a comprehensive framework for optimizing 5G network slices using metaheuristic algorithms, focusing on Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and massive Machine Type Communications (mMTC) scenarios. The initial setup involves a MATLAB-based 5G New Radio (NR) Physical Downlink Shared Channel (PDSCH) simulation and OpenAir-Interface (OAI) 5G network testbed, utilizing Ubuntu 22.04 Long Term Support (LTS), MicroStack, Open-Source MANO (OSM), and k3OS to create a versatile testing environment. Key network parameters are identified for optimization, including power control settings, signal-to-noise ratio targets, and resource block allocation, to address the unique requirements of different 5G use cases. Metaheuristic algorithms, specifically the Whale Optimization Algorithm (WOA), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA), are employed to optimize these parameters. The algorithms are assessed based on their ability to enhance throughput, reduce latency, and minimize jitter within the network slices and the MATLAB simulation model. Each algorithm’s performance is evaluated through iterative testing, with improvements measured against established pre-optimization benchmarks. The results demonstrate significant enhancements in network performance post-optimization. For eMBB, the GA shows the most substantial increase in throughput, while PSO is most effective in reducing latency for URLLC applications. In mMTC scenarios, GA achieves the most notable reduction in jitter, illustrating the potential of metaheuristic algorithms in fine-tuning 5G networks to meet diverse service requirements. The study concludes that the strategic application of these algorithms can significantly improve the efficiency and reliability of 5G network slices, offering a scalable approach to managing the complex dynamics of next-generation wireless networks.
18

Belazi, Akram, Héctor Migallón, Daniel Gónzalez-Sánchez, Jorge Gónzalez-García, Antonio Jimeno-Morenilla, and José-Luis Sánchez-Romero. "Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization." Mathematics 10, no. 7 (April 3, 2022): 1166. http://dx.doi.org/10.3390/math10071166.

Abstract:
The main idea of the sine cosine algorithm (SCA) is a sine- and cosine-driven oscillation outward from or toward the best solution. The first main contribution of this paper is an enhanced version of the SCA, called the ESCA algorithm. The superiority of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed is demonstrated by experimental tests. When these algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process. If these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms that efficiently exploit current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two advantages: they retain the proposed algorithm's behavior and provide excellent parallel performance by combining coarse-grained with fine-grained parallelism. Moreover, the parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. Indeed, the experimental results suggest that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores, while the two-level parallel algorithms provide extra reductions of 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. Comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation, local-optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
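The sine/cosine oscillation the abstract refers to is the standard SCA position update, which can be sketched as follows (this shows the base SCA rule, not the paper's ESCA enhancements):

```python
import math
import random

def sca_update(x, best, t, T, a=2.0):
    """One sine cosine algorithm (SCA) position update: each coordinate
    oscillates outward from or toward the best-so-far solution `best`."""
    r1 = a - t * (a / T)                     # amplitude shrinks over iterations
    new_x = []
    for xi, pi in zip(x, best):
        r2 = random.uniform(0, 2 * math.pi)  # oscillation angle
        r3 = random.uniform(0, 2)            # weight on the best solution
        r4 = random.random()                 # sine/cosine switch
        if r4 < 0.5:
            new_x.append(xi + r1 * math.sin(r2) * abs(r3 * pi - xi))
        else:
            new_x.append(xi + r1 * math.cos(r2) * abs(r3 * pi - xi))
    return new_x
```

Because `r1` decays to zero, early iterations explore widely while late iterations settle onto the best solution; by the final iteration the position no longer moves.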
19

Zhang, Chuang, Yue-Han Pei, Xiao-Xue Wang, Hong-Yu Hou, and Li-Hua Fu. "Symmetric cross-entropy multi-threshold color image segmentation based on improved pelican optimization algorithm." PLOS ONE 18, no. 6 (June 29, 2023): e0287573. http://dx.doi.org/10.1371/journal.pone.0287573.

Abstract:
To address the low accuracy and slow convergence of traditional multilevel image segmentation methods, a symmetric cross-entropy multilevel thresholding image segmentation method based on a multi-strategy improved pelican optimization algorithm (MSIPOA) is proposed for global optimization and image segmentation tasks. First, Sine chaotic mapping is used to improve the quality and distribution uniformity of the initial population. A spiral search mechanism incorporating the sine cosine optimization algorithm improves the algorithm's search diversity, local exploitation ability, and convergence accuracy. A Levy flight strategy further improves the algorithm's ability to escape local minima. In this paper, 12 benchmark test functions and 8 other newer swarm intelligence algorithms are used to compare convergence speed and convergence accuracy and so evaluate the performance of the MSIPOA algorithm. Non-parametric statistical analysis shows that MSIPOA is clearly superior to the other optimization algorithms. The MSIPOA algorithm is then applied to symmetric cross-entropy multilevel threshold image segmentation, with eight images from BSDS300 selected as the test set. According to the different performance metrics and the Friedman test, the MSIPOA algorithm outperforms similar algorithms in both global optimization and image segmentation, and the symmetric cross-entropy multilevel thresholding method based on MSIPOA can be effectively applied to multilevel thresholding image segmentation tasks.
20

López, Luis Fernando de Mingo, Francisco Serradilla García, José Eugenio Naranjo Hernández, and Nuria Gómez Blas. "Speed Proportional Integrative Derivative Controller: Optimization Functions in Metaheuristic Algorithms." Journal of Advanced Transportation 2021 (November 3, 2021): 1–12. http://dx.doi.org/10.1155/2021/5538296.

Full text
Abstract:
Recent advancements in computer science include optimization models that have been developed and used in real applications. Several metaheuristic search/optimization algorithms have been tested to obtain optimal solutions for speed controller applications in self-driving cars. Some metaheuristic algorithms are based on social behaviour, resulting in different search models, functions, and parameters, and thus algorithm-specific strengths and weaknesses. The present paper proposes a fitness function based on the mathematical description of proportional integrative derivative controllers, showing that mean square error is not always the best measure when looking for a solution to the problem. The fitness function developed in this paper draws on features and equations from the mathematical background of proportional integrative derivative controllers to calculate the best performance of the system. Such results are applied to quantitatively evaluate the performance of twenty-one optimization algorithms. Furthermore, improved versions of the fitness function are considered in order to investigate which aspects are enhanced by applying the optimization algorithms. Results show that the right fitness function is a key point for good performance, regardless of the chosen algorithm. The aim of this paper is to present a novel objective function for optimizing the gains of a PID controller, using several computational intelligence techniques to perform the optimizations. The results of these optimizations demonstrate the improved efficiency of the selected control schema.
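To illustrate the abstract's point that mean square error is not always the best measure, here is a hedged sketch of one common alternative PID fitness: the integral of time-weighted absolute error (ITAE) for a unit-step response. The first-order plant, Euler integration, and gain values are illustrative assumptions, not the paper's actual fitness function:

```python
def itae_fitness(kp, ki, kd, tau=1.0, dt=0.01, t_end=5.0):
    """ITAE for a PID controlling an assumed first-order plant
    y' = (-y + u) / tau, tracking a unit step reference.
    Late-lingering error is penalized more heavily than early error,
    which MSE treats the same."""
    n = int(t_end / dt)
    y, integ, prev_e = 0.0, 0.0, 1.0
    itae = 0.0
    for k in range(n):
        t = k * dt
        e = 1.0 - y                        # step reference = 1
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau           # explicit Euler step of the plant
        itae += t * abs(e) * dt
        prev_e = e
    return itae
```

A metaheuristic would then minimize `itae_fitness` over the gain vector (kp, ki, kd); well-tuned gains yield a much smaller ITAE than gains that leave a steady-state error.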
21

Pan, Jeng-Shyang, Zhen Zhang, Shu-Chuan Chu, Zne-Jung Lee, and Wei Li. "Application of Diversity-Maintaining Adaptive Rafflesia Optimization Algorithm to Engineering Optimisation Problems." Symmetry 15, no. 11 (November 16, 2023): 2077. http://dx.doi.org/10.3390/sym15112077.

Full text
Abstract:
The Diversity-Maintained Adaptive Rafflesia Optimization Algorithm represents an enhanced version of the original Rafflesia Optimization Algorithm. The latter draws inspiration from the unique characteristics displayed by the Rafflesia during its growth, simulating the entire lifecycle from blooming to seed dispersion. The incorporation of the Adaptive Weight Adjustment Strategy and the Diversity Maintenance Strategy assists the algorithm in averting premature convergence to local optima, subsequently bolstering its global search capabilities. When tested on the CEC2013 benchmark functions under a dimension of 30, the new algorithm was compared with ten optimization algorithms, including commonly used classical algorithms, such as PSO, DE, CSO, SCA, and the newly introduced ROA. Evaluation metrics included mean and variance, and the new algorithm outperformed on a majority of the test functions. Concurrently, the new algorithm was applied to six real-world engineering problems: tensile/compressive spring design, pressure vessel design, three-bar truss design, welded beam design, reducer design, and gear system design. In these comparative optimizations against other mainstream algorithms, the objective function’s mean value optimized by the new algorithm consistently surpassed that of other algorithms across all six engineering challenges. Such experimental outcomes validate the efficiency and reliability of the Diversity-Maintained Adaptive Rafflesia Optimization Algorithm in tackling optimization challenges. The Diversity-Maintained Adaptive Rafflesia Optimization Algorithm is capable of tuning the parameter values for the optimization of symmetry and asymmetry functions. As part of our future research endeavors, we aim to deploy this algorithm on an even broader array of diverse and distinct optimization problems, such as the arrangement of wireless sensor nodes, further solidifying its widespread applicability and efficacy.
22

ARICI, Ferda Nur, and Ersin KAYA. "Hierarchical Approaches to Solve Optimization Problems." Academic Platform Journal of Engineering and Smart Systems 10, no. 3 (September 30, 2022): 124–39. http://dx.doi.org/10.21541/apjess.1065912.

Full text
Abstract:
Optimization is the operation of finding the most appropriate solution for a particular problem or set of problems. In the literature, there are many population-based optimization algorithms for solving optimization problems, each with different characteristics. Although optimization algorithms give optimum results on some problems, they become insufficient as the problem gets harder and more complex. Many studies have been carried out in recent years to improve optimization algorithms to overcome these difficulties. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA, and particle swarm optimization - PSO) were used. Each of these algorithms has its own advantages and disadvantages. These six population-based algorithms were tested on the CEC’17 test functions and their performances were examined, and so the characteristics of the algorithms were determined. Based on these results, hierarchical approaches have been proposed in order to combine the advantages of the algorithms and achieve better results. The hierarchical approach refers to the successive operation of the algorithms. In this study, eight approaches were proposed, and performance evaluations of these structures were made on the CEC’17 test functions. When the experimental results are examined, it is concluded that some hierarchical approaches are applicable and that some of them surpass the base versions of the algorithms.
23

Elhossini, Ahmed, Shawki Areibi, and Robert Dony. "Strength Pareto Particle Swarm Optimization and Hybrid EA-PSO for Multi-Objective Optimization." Evolutionary Computation 18, no. 1 (March 2010): 127–56. http://dx.doi.org/10.1162/evco.2010.18.1.18105.

Full text
Abstract:
This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.
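Strength Pareto approaches rest on the Pareto dominance relation between objective vectors; a minimal sketch of the dominance test and non-dominated filtering (minimization assumed; this is the textbook relation, not SPEA2's full strength and density assignment):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In SPEA2-style algorithms, each individual's "strength" is derived from how many solutions it dominates; the archive that guides the swarm is essentially the running `pareto_front` of all evaluated solutions.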
24

Kaya, Ebubekir, Ceren Baştemur Kaya, Emre Bendeş, Sema Atasever, Başak Öztürk, and Bilgin Yazlık. "Training of Feed-Forward Neural Networks by Using Optimization Algorithms Based on Swarm-Intelligent for Maximum Power Point Tracking." Biomimetics 8, no. 5 (September 1, 2023): 402. http://dx.doi.org/10.3390/biomimetics8050402.

Full text
Abstract:
One of the most used artificial intelligence techniques for maximum power point tracking is artificial neural networks. In order to achieve successful results in maximum power point tracking, the training process of artificial neural networks is important. Metaheuristic algorithms are used extensively in the literature for neural network training. An important group of metaheuristic algorithms is swarm-intelligent-based optimization algorithms. In this study, feed-forward neural network training is carried out for maximum power point tracking by using 13 swarm-intelligent-based optimization algorithms. These algorithms are artificial bee colony, butterfly optimization, cuckoo search, chicken swarm optimization, dragonfly algorithm, firefly algorithm, grasshopper optimization algorithm, krill herd algorithm, particle swarm optimization, salp swarm algorithm, selfish herd optimizer, tunicate swarm algorithm, and tuna swarm optimization. Mean squared error is used as the error metric, and the performances of the algorithms in different network structures are evaluated. Considering the results, a success ranking score is obtained for each algorithm. The three most successful algorithms in both training and testing processes are the firefly algorithm, selfish herd optimizer, and grasshopper optimization algorithm, respectively. The training error values obtained with these algorithms are 4.5 × 10−4, 1.6 × 10−3, and 2.3 × 10−3, respectively. The test error values are 4.6 × 10−4, 1.6 × 10−3, and 2.4 × 10−3, respectively. With these algorithms, effective results have been achieved in a low number of evaluations. In addition to these three algorithms, the other algorithms have also achieved mostly acceptable results. This shows that the related algorithms are generally successful feed-forward neural network training algorithms for maximum power point tracking.
25

Pan, Anqi, Hongjun Tian, Lei Wang, and Qidi Wu. "A Decomposition-Based Unified Evolutionary Algorithm for Many-Objective Problems Using Particle Swarm Optimization." Mathematical Problems in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/6761545.

Full text
Abstract:
Evolutionary algorithms have proved to be efficient approaches for finding optimum solutions of multiobjective optimization problems with three or fewer objectives. However, search performance degrades on problems with high-dimensional objective spaces. In this paper we propose an algorithm for many-objective optimization with particle swarm optimization as the underlying metaheuristic technique. In the proposed algorithm, the objectives are decomposed and reconstructed using a discrete decoupling strategy, and the subgroup procedures are integrated into a unified coevolution strategy. The proposed algorithm consists of inner and outer evolutionary processes, together with an adaptive factor μ, to maintain convergence and diversity. At the same time, a designed repository reconstruction strategy and improved leader selection techniques of MOPSO are introduced. The comparative experimental results show that the proposed UMOPSO-D outperforms the other six algorithms, including four commonly used algorithms and two newly proposed decomposition-based algorithms, especially on high-dimensional objective problems.
26

Coufal, Petr, Štěpán Hubálovský, Marie Hubálovská, and Zoltan Balogh. "Snow Leopard Optimization Algorithm: A New Nature-Based Optimization Algorithm for Solving Optimization Problems." Mathematics 9, no. 21 (November 8, 2021): 2832. http://dx.doi.org/10.3390/math9212832.

Full text
Abstract:
Numerous optimization problems have been defined in different disciplines of science that must be optimized using effective techniques. Optimization algorithms are an effective and widely used method of solving optimization problems that are able to provide suitable solutions. In this paper, a new nature-based optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) is designed that mimics the natural behaviors of snow leopards. SLOA is simulated in four phases: travel routes, hunting, reproduction, and mortality. The different phases of the proposed algorithm are described, and then the mathematical modeling of the SLOA is presented in order to implement it on different optimization problems. A standard set of objective functions, comprising twenty-three functions, is used to evaluate the ability of the proposed algorithm to optimize and provide appropriate solutions for optimization problems. The optimization results obtained from the proposed SLOA are also compared with eight other well-known optimization algorithms. The optimization results show that the proposed SLOA has a high ability to solve various optimization problems. The analysis and comparison of the optimization results obtained from the SLOA with the other eight algorithms show that the SLOA is able to provide quasi-optimal solutions that are more appropriate and closer to the global optimum, and that it is much more competitive than similar algorithms.
27

Kardash, D., and O. Kollarov. "Solving optimization problems in energy with genetic algorithm." Journal of Electrical and power engineering 28, no. 1 (June 29, 2023): 37–41. http://dx.doi.org/10.31474/2074-2630-2023-1-37-41.

Full text
Abstract:
The article discusses the application of genetic algorithms in the field of energy optimization. Linear programming is commonly used for optimization problems in energy systems: it is a mathematical optimization method that seeks the optimal solution under constraints, where all constraints and the objective function are linear functions. In the realm of artificial intelligence, genetic algorithms are employed for optimization tasks. Genetic algorithms mimic natural evolutionary processes, including selection, crossover, mutation, and adaptation, to solve optimization and search problems. The article outlines the process of a genetic algorithm, starting with the formation of an initial population and proceeding through selection, crossover, mutation, and evaluation. This cycle repeats until an optimal solution is achieved. Advantages of genetic algorithms include their ability to handle complex solution spaces, find global optima, adapt to changing conditions, optimize multi-objective functions, and work with non-linear and non-differentiable objective functions. However, they may require significant computational resources and parameter tuning. The article then presents a case study of applying a genetic algorithm to optimize the allocation of a power load in an energy system. The mathematical model is developed and initially solved with the simplex method. Subsequently, a Python program implementing the genetic algorithm is provided. The algorithm's efficiency and convergence are demonstrated through a graphical representation of the optimization process. In conclusion, the article highlights the effectiveness of genetic algorithms in energy optimization, showcasing their rapid convergence and ability to find near-optimal solutions in complex scenarios.
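The selection, crossover, mutation, and evaluation cycle outlined in the abstract can be sketched as a minimal real-coded GA. This is a generic illustration minimizing an assumed quadratic cost, not the article's power-load-allocation model; the tournament size, mutation rate, and mutation scale are assumptions:

```python
import random

def genetic_algorithm(fitness, dim, bounds, pop_size=30, gens=100,
                      mut_rate=0.1, seed=42):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation. `fitness` is minimized."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents (best of 3 random picks)
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            # uniform crossover: each gene comes from either parent
            child = [rng.choice(pair) for pair in zip(p1, p2)]
            # Gaussian mutation, clipped to the bounds
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1)))
                     if rng.random() < mut_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# usage: minimize a sphere cost in 3 dimensions
best = genetic_algorithm(lambda x: sum(g * g for g in x), dim=3, bounds=(-5, 5))
```

A load-allocation variant would replace the sphere cost with generation cost plus penalty terms for violated demand and capacity constraints.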
28

Sabery, Ghulam Ali, Ghulam Hassan Danishyar, and Ghulam Sarwar Mubarez. "A Comparative Study of Metaheuristic Optimization Algorithms for Solving Engineering Design Problems." Journal of Mathematics and Statistics Studies 4, no. 4 (November 4, 2023): 56–69. http://dx.doi.org/10.32996/jmss.2023.4.4.6.

Full text
Abstract:
Metaheuristic optimization algorithms (Nature-Inspired Optimization Algorithms) are a class of algorithms that mimic the behavior of natural systems such as evolutionary processes, swarm intelligence, human activity, and physical phenomena to find the optimal solution. Since the introduction of metaheuristic optimization algorithms, they have shown a profound impact in solving high-scale and non-differentiable engineering problems. This paper presents a comparative study of the most widely used nature-inspired optimization algorithms for solving classical engineering design problems, which are considered challenging. The ten metaheuristic algorithms employed in this study are, namely, Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), Biogeography Based Optimization Algorithm (BBO), Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES), Cuckoo Search algorithm (CS), Differential Evolution (DE), Genetic Algorithm (GA), Grey Wolf Optimizer (GWO), Gravitational Search Algorithm (GSA), and Particle Swarm Optimization (PSO). The efficiency of these algorithms is evaluated on ten popular classical engineering design problems using solution quality and convergence analysis, which verify the applicability of these algorithms to classical constrained engineering design problems. The experimental results demonstrated that all the algorithms provide competitive solutions.
29

Aleardi, Mattia, Silvio Pierini, and Angelo Sajeva. "Assessing the performances of recent global search algorithms using analytic objective functions and seismic optimization problems." GEOPHYSICS 84, no. 5 (September 1, 2019): R767—R781. http://dx.doi.org/10.1190/geo2019-0111.1.

Full text
Abstract:
We have compared the performances of six recently developed global optimization algorithms: imperialist competitive algorithm, firefly algorithm (FA), water cycle algorithm (WCA), whale optimization algorithm (WOA), fireworks algorithm (FWA), and quantum particle swarm optimization (QPSO). These methods have been introduced in the past few years and have found very limited or no applications to geophysical exploration problems thus far. We benchmark the algorithms’ results against the particle swarm optimization (PSO), which is a popular and well-established global search method. In particular, we are interested in assessing the exploration and exploitation capabilities of each method as the dimension of the model space increases. First, we test the different algorithms on two multiminima and two convex analytic objective functions. Then, we compare them using the residual statics corrections and 1D elastic full-waveform inversion, which are highly nonlinear geophysical optimization problems. Our results demonstrate that FA, FWA, and WOA are characterized by optimal exploration capabilities because they outperform the other approaches in the case of optimization problems with multiminima objective functions. Differently, QPSO and PSO have good exploitation capabilities because they easily solve ill-conditioned optimizations characterized by a nearly flat valley in the objective function. QPSO, PSO, and WCA offer a good compromise between exploitation and exploration.
30

Abdulrahman, Abdulrahman, Ziad Mohammed, Ahmed Mohamed Zaki, Faris H. H. Rizk, Marwa M. Eid, and El-Sayed M. El-Kenawy. "Exploring Optimization Algorithms: A Review of Methods and Applications." Journal of Artificial Intelligence and Metaheuristics 7, no. 2 (2024): 08–17. http://dx.doi.org/10.54216/jaim.070201.

Full text
Abstract:
This review focuses on feature selection as a main parameter that plays a major role in tuning machine learning models. Several optimization algorithms, such as MFO (Moth-Flame Optimization), the hybrid GA-GSA algorithm, SOA (Seagull Optimization Algorithm), WOA (Whale Optimization Algorithm), GOA (Grasshopper Optimization Algorithm), HGSO (Henry Gas Solubility Optimization), and SafeOpt, are widely used in engineering design and power systems scheduling. The paper stresses the importance of optimization in improving efficiency, reducing errors, and increasing the understandability of machine learning models. The literature addresses the main directions in the usage of optimization in fields of science such as structural engineering, additive manufacturing, and landslide susceptibility mapping. A comprehensive summary table is provided, giving an overview of each study's algorithm, focus, methodology, and key findings. The conclusions reveal the adaptiveness, competitiveness, and composability of the optimization algorithms applied across a wide range of domains. The summary shows how optimization has the potential to change decision-making processes and activities by being a decisive factor in the future of various industries. The main objective of this work is to guide researchers and practitioners by providing innovative ideas and approaches, offering insights on existing cutting-edge approaches, and laying the groundwork for future innovations in optimization.
31

Peng, Qiang, Renjun Zhan, Husheng Wu, and Meimei Shi. "Comparative Study of Wolf Pack Algorithm and Artificial Bee Colony Algorithm." International Journal of Swarm Intelligence Research 15, no. 1 (August 16, 2024): 1–24. http://dx.doi.org/10.4018/ijsir.352061.

Full text
Abstract:
Swarm intelligence optimization algorithms have been widely used in the fields of machine learning, process control, and engineering prediction; common examples include the ant colony optimization (ACO), artificial bee colony (ABC), and particle swarm optimization (PSO) algorithms. The wolf pack algorithm (WPA), as a newer swarm intelligence optimization algorithm, has many similarities with ABC. In this paper, the basic principles, implementation processes, and related improvement strategies of these two algorithms are described in detail. A comparative analysis of their performance in solving standard CEC test functions with different features was conducted, with a focus on optimization ability and convergence speed, re-validating the unique search characteristics of these two algorithms. Finally, the future development trends and prospects of intelligent optimization algorithms are discussed, which is of great reference significance for the research and application of swarm intelligence optimization algorithms.
32

HOSSAIN, Md Al Amin, and Züleyha YILMAZ ACAR. "Comparison of New and Old Optimization Algorithms for Traveling Salesman Problem on Small, Medium, and Large-scale Benchmark Instances." Bitlis Eren Üniversitesi Fen Bilimleri Dergisi 13, no. 1 (December 29, 2023): 216–31. http://dx.doi.org/10.17798/bitlisfen.1380086.

Full text
Abstract:
The Traveling Salesman Problem (TSP), a prominent combinatorial optimization problem, is the subject of this study's evaluation of the performance of new and old optimization techniques. This paper seeks to expand knowledge of optimization techniques and how they might be applied to TSP challenges. The goal of the research is to compare the algorithms' scalability, convergence, and computation times on benchmark instances of several sizes. To this end, extensive testing was carried out using the Artificial Bee Colony (ABC), Grey Wolf Optimization (GWO), and Salp Swarm Algorithm (SSA) as new optimization algorithms and the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Simulated Annealing (SA) as old optimization algorithms. These algorithms were examined on small, medium, and large-scale benchmark instances. The findings show that the new optimization techniques converge faster and scale better than the old ones, especially in medium-scale scenarios, and achieve better solution quality in terms of objective function values. The new methods also exhibit improved scalability, successfully adjusting to medium-scale instances; however, no discernible differences were observed on the small and large-scale instances. This study contributes insightful information about how well optimization methods perform when solving the TSP. Each algorithm's strengths and weaknesses are reported, offering useful guidance for choosing an algorithm for a given scenario. The results also show the practical ramifications of applying novel optimization techniques, especially in medium-scale instances.
33

Shi, Yuhui, Jingqian Xue, and Yali Wu. "Multi-Objective Optimization Based on Brain Storm Optimization Algorithm." International Journal of Swarm Intelligence Research 4, no. 3 (July 2013): 1–21. http://dx.doi.org/10.4018/ijsir.2013070101.

Full text
Abstract:
In recent years, many evolutionary algorithms and population-based algorithms have been developed for solving multi-objective optimization problems. In this paper, the authors propose a new multi-objective brain storm optimization algorithm in which the clustering strategy is applied in the objective space instead of in the solution space in the original brain storm optimization algorithm for solving single objective optimization problems. Two versions of multi-objective brain storm optimization algorithm with different characteristics of diverging operation were tested to validate the usefulness and effectiveness of the proposed algorithm. Experimental results show that the proposed multi-objective brain storm optimization algorithm is a very promising algorithm, at least for solving these tested multi-objective optimization problems.
34

Che, Yanhui, and Dengxu He. "A Hybrid Whale Optimization with Seagull Algorithm for Global Optimization Problems." Mathematical Problems in Engineering 2021 (January 28, 2021): 1–31. http://dx.doi.org/10.1155/2021/6639671.

Full text
Abstract:
The seagull optimization algorithm (SOA), inspired by the migration and attack behavior of seagulls in nature, is used to solve global optimization problems. However, like other well-known metaheuristic algorithms, SOA suffers from low computational accuracy and premature convergence. In the current work, these problems are addressed by proposing a modified version of SOA. This paper proposes a novel hybrid algorithm, called whale optimization with seagull algorithm (WSOA), for solving global optimization problems. The main motivation is that the seagull's spiral attack on prey is very similar to the bubble-net predation behavior of whales, and the whale optimization algorithm (WOA) has strong global search ability. Therefore, firstly, this paper combines WOA's contracting encircling mechanism with SOA's spiral attack behavior to improve the calculation accuracy of SOA. Secondly, the Lévy flight strategy is introduced into the search formula of SOA, which can effectively avoid premature convergence and balance exploration and exploitation more effectively. In order to evaluate the effectiveness of the proposal on global optimization problems, 25 benchmark test functions are tested, and WSOA is compared with seven famous metaheuristic algorithms. Statistical analysis and results comparison show that WSOA has obvious advantages compared with other algorithms. Finally, four engineering examples are tested with the proposed algorithm, and the effectiveness and feasibility of WSOA are verified.
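The spiral attack that WSOA borrows from both parent algorithms is typically written as x' = |x* - x| * e^(b*l) * cos(2*pi*l) + x*, where x* is the current best solution. A hedged sketch follows; the constant b and the uniform draw of the spiral parameter l are common choices from the WOA literature, not necessarily the paper's exact hybrid rule:

```python
import math
import random

def spiral_update(x, best, b=1.0, rng=random.Random(0)):
    """One spiral position update around the current best solution,
    in the style of WOA's bubble-net attack / SOA's spiral attack.
    Coordinates spiral in toward `best` along a logarithmic curve."""
    l = rng.uniform(-1.0, 1.0)          # random spiral parameter per update
    return [bx + abs(bx - xi) * math.exp(b * l) * math.cos(2 * math.pi * l)
            for xi, bx in zip(x, best)]
```

Note that when a candidate already coincides with the best solution, the distance term is zero and the update leaves it in place.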
35

Akyol, Sinem, Muhammed Yildirim, and Bilal Alatas. "CIDO: Chaotically Initialized Dandelion Optimization for Global Optimization." International Journal of Advanced Networking and Applications 14, no. 06 (2023): 5696–704. http://dx.doi.org/10.35444/ijana.2023.14606.

Full text
Abstract:
Metaheuristic algorithms are widely used for problems in many fields such as security, health, and engineering. No metaheuristic algorithm can achieve the optimum solution for all optimization problems; for this reason, new metaheuristic methods are constantly being proposed and existing ones are being developed. The Dandelion Optimizer, one of the most recent metaheuristic algorithms, is biology-based, inspired by the wind-dependent long-distance flight of the ripening seeds of the dandelion plant. It consists of three phases: an ascending phase, a descending phase, and a landing phase. In this study, a chaos-based version, the Chaotically Initialized Dandelion Optimizer, is proposed for the first time in order to prevent the Dandelion Optimizer from getting stuck in local solutions and to increase its success in global search. In this way, the aim is to increase global convergence and to prevent sticking to a local solution. While creating the initial population of the algorithm, six different Chaotically Initialized Dandelion Optimizer algorithms were presented using the Circle, Singer, Chebyshev, Gauss/Mouse, Iterative, and Logistic chaotic maps. Two unimodal (Sphere and Schwefel 2.22), two multimodal (Schwefel and Rastrigin), and two fixed-dimension multimodal (Foxholes and Kowalik) benchmark test functions were used to compare the performances of the algorithms. When the experimental results were analyzed, it was seen that the Chaotically Initialized Dandelion Optimizer algorithms gave successful results compared to the classical Dandelion Optimizer.
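One of the six maps listed, the Logistic map, makes a compact example of chaotic population initialization. The seed, the control parameter r = 4, and the scaling onto the search bounds are illustrative assumptions, not the study's exact settings:

```python
def logistic_map_sequence(n, x0=0.31, r=4.0):
    """Logistic chaotic map x_{k+1} = r * x_k * (1 - x_k).
    With r = 4 and a non-degenerate seed, the orbit is chaotic in (0, 1)."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_initial_population(pop_size, dim, lb, ub):
    """Scale one long chaotic sequence onto the search bounds [lb, ub]
    to form the initial population."""
    seq = logistic_map_sequence(pop_size * dim)
    return [[lb + seq[i * dim + j] * (ub - lb) for j in range(dim)]
            for i in range(pop_size)]
```

The other five maps (Circle, Singer, Chebyshev, Gauss/Mouse, Iterative) would slot in by replacing only the iteration rule inside `logistic_map_sequence`.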
36

Yao, Jinyan, Yongbai Sha, Yanli Chen, and Xiaoying Zhao. "A Novel Ensemble of Arithmetic Optimization Algorithm and Harris Hawks Optimization for Solving Industrial Engineering Optimization Problems." Machines 10, no. 8 (July 24, 2022): 602. http://dx.doi.org/10.3390/machines10080602.

Full text
Abstract:
Recently, numerous new meta-heuristic algorithms have been proposed for solving optimization problems. According to the No Free Lunch theorem, no single algorithm can solve all optimization problems. In order to solve industrial engineering design problems more efficiently, inspired by the algorithm frameworks of the Arithmetic Optimization Algorithm (AOA) and Harris Hawks Optimization (HHO), we propose a novel hybrid algorithm based on these two algorithms, named EAOAHHO in this paper. Pinhole imaging opposition-based learning is introduced into the proposed algorithm to increase the original population diversity and the capability to escape from local optima. Furthermore, the introduction of a composite mutation strategy enhances the exploitation and exploration of the proposed EAOAHHO to obtain better convergence accuracy. The performance of EAOAHHO is verified on 23 benchmark functions and the IEEE CEC2017 test suite. Finally, we verify the superiority of the proposed EAOAHHO over other advanced meta-heuristic algorithms for solving four industrial engineering design problems.
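For context, basic opposition-based learning mirrors a candidate inside the search bounds; the pinhole-imaging variant used in EAOAHHO generalizes this with a scaling factor. A sketch of the plain relation only (not the paper's exact pinhole-imaging formula):

```python
def opposition(x, lb, ub):
    """Basic opposition-based learning point: the mirror of x inside
    the per-dimension bounds [lb_j, ub_j], i.e. x_opp_j = lb_j + ub_j - x_j.
    Evaluating both x and its opposite roughly doubles the chance that
    one of them lies near the optimum."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]
```

In practice an optimizer keeps whichever of `x` and `opposition(x, lb, ub)` has the better fitness.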
37

Deepak, Malini, and Rabee Rustum. "Review of Latest Advances in Nature-Inspired Algorithms for Optimization of Activated Sludge Processes." Processes 11, no. 1 (December 28, 2022): 77. http://dx.doi.org/10.3390/pr11010077.

Full text
Abstract:
The activated sludge process (ASP) is the most widely used biological wastewater treatment system. Advances in research have led to the adoption of Artificial Intelligence (AI), in particular Nature-Inspired Algorithm (NIA) techniques such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO), to optimize treatment systems. This has aided in reducing the complexity and computational time of ASP modelling. This paper covers the latest NIAs used in ASP and discusses the advantages and limitations of each algorithm compared to more traditional algorithms that have been utilized over the last few decades. Algorithms were assessed based on whether they used real or idealized wastewater treatment plant (WWTP) data (and efficiency) and whether they outperformed the traditional algorithms in optimizing the ASP. While conventional algorithms such as Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO) were found to be successfully employed in optimization, newer algorithms such as the Whale Optimization Algorithm (WOA), Bat Algorithm (BA), and Invasive Weed Optimization (IWO) achieved similar results in the optimization of the ASP, while also having certain unique advantages.
38

Wang, Yi, and Kangshun Li. "A Lévy Flight-Inspired Random Walk Algorithm for Continuous Fitness Landscape Analysis." International Journal of Cognitive Informatics and Natural Intelligence 17, no. 1 (September 21, 2023): 1–18. http://dx.doi.org/10.4018/ijcini.330535.

Full text
Abstract:
Heuristic algorithms are effective methods for solving complex optimization problems. The optimal algorithm selection for a specific optimization problem is a challenging task. Fitness landscape analysis (FLA) is used to understand the optimization problem's characteristics and help select the optimal algorithm. A random walk algorithm is an essential technique for FLA in continuous search space. However, most currently proposed random walk algorithms suffer from unbalanced sampling points. This article proposes a Lévy flight-based random walk (LRW) algorithm to address this problem. The Lévy flight is used to generate the proposed random walk algorithm's variable step size and direction. Some tests show that the proposed LRW algorithm performs better in the uniformity of sampling points. Besides, the authors analyze the fitness landscape of the CEC2017 benchmark functions using the proposed LRW algorithm. The experimental results indicate that the proposed LRW algorithm can better obtain the structural features of the landscape and has better stability than several other RW algorithms.
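A common way to generate the heavy-tailed, variable step sizes of a Lévy flight is Mantegna's algorithm; the sketch below shows that general technique, not the paper's exact LRW sampler.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Lévy-flight step vector via Mantegna's algorithm.

    beta is the Lévy index (1 < beta <= 2); smaller beta gives heavier
    tails, i.e. occasional very long jumps between many short ones.
    """
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # numerator sample with tuned variance
    v = rng.normal(0.0, 1.0, dim)     # denominator sample
    return u / np.abs(v) ** (1 / beta)
```

A random walk built from such steps mixes dense local sampling with occasional long jumps, which is the property the abstract credits for more uniform coverage of the search space.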
39

Kota, László, and Károly Jármai. "Improving optimization using adaptive algorithms." Pollack Periodica 16, no. 1 (March 25, 2021): 14–18. http://dx.doi.org/10.1556/606.2020.00180.

Full text
Abstract:
In research projects and industrial projects, severe optimization problems can be met in which the number of variables is high, there are many constraints, and the problems are highly nonlinear and mostly discrete, so with the usual optimization methods on an average computer the running time can sometimes be measured in weeks. In most cases in the logistics industry, the most restrictive constraint is time. The optimizations run on a typical office configuration, and the company accepts the suboptimal solution that the optimization method gives within the appropriate time limit. That is why adaptivity is needed. The adaptivity of the optimization technique includes fine-tuning of parameters; in this way, the most sensitive setting can be found. In this article, some additional adaptive methods for logistics problems have been investigated to increase effectiveness and improve the solution under strict time constraints.
40

Kaya, Ersin, Alper Kılıç, İsmail Babaoğlu, and Ahmet Babalık. "FUZZY ADAPTIVE WHALE OPTIMIZATION ALGORITHM FOR NUMERIC OPTIMIZATION." Malaysian Journal of Computer Science 34, no. 2 (April 30, 2021): 184–98. http://dx.doi.org/10.22452/mjcs.vol34no2.4.

Full text
Abstract:
Meta-heuristic approaches are a powerful tool for solving numeric optimization problems. Since these problems have diverse characteristics, investigating how algorithms are best utilized is significant for researchers. The whale optimization algorithm (WOA) is one of the novel meta-heuristic algorithms employed for solving numeric optimization problems. WOA handles exploitation and exploration of the search space in three stages, and in every stage, all dimensions of the candidate solutions are updated. The drawback of this update scheme is that it can cause the convergence of the algorithm to get stuck. Some known meta-heuristic approaches treat this issue by updating one or a predetermined number of dimensions in their update scheme. To improve the exploitation behavior of WOA, a fuzzy logic controller (FLC) based adaptive WOA (FAWOA) is suggested in this study. The FLC realizes the update scheme of WOA by determining the rate of change in terms of dimensions. The suggested FAWOA is evaluated using 23 well-known benchmark problems and compared with some other meta-heuristic approaches. Considering the benchmark problems, FAWOA achieves the best results on 11 problems, and only the differential evolution algorithm achieves the best results on as many as 10 problems; the rest of the algorithms achieve the best results on no more than 5 problems. Besides, according to the Friedman and average ranking tests, FAWOA is the first-ranked algorithm for solving the benchmark problems. Evaluation results show that the suggested FAWOA approach outperforms the other algorithms, as well as the original WOA, on most of the benchmark problems.
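For context, the standard three-branch WOA position update that FAWOA builds on can be sketched as follows. This is a generic sketch of the original WOA equations; FAWOA's contribution, per the abstract, is to let a fuzzy logic controller decide how many dimensions of this update are actually applied per iteration, which is omitted here.

```python
import numpy as np

def woa_update(x, best, pop, a, b=1.0, rng=None):
    """One whale optimization algorithm (WOA) position update for agent x.

    a decreases from 2 to 0 over iterations; p picks between the
    encircling/exploration branch and the spiral bubble-net branch.
    """
    rng = rng or np.random.default_rng()
    r1, r2, p = rng.random(), rng.random(), rng.random()
    A, C = 2 * a * r1 - a, 2 * r2
    if p < 0.5:
        if abs(A) < 1:                        # exploitation: encircle the best
            return best - A * np.abs(C * best - x)
        x_rand = pop[rng.integers(len(pop))]  # exploration: follow a random agent
        return x_rand - A * np.abs(C * x_rand - x)
    l = rng.uniform(-1, 1)                    # spiral bubble-net attack
    return np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
```

Note that every branch rewrites all dimensions of x at once, which is exactly the behavior the abstract identifies as the cause of premature convergence.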
41

Shu-Chuan Chu, Shu-Chuan Chu, Qing Feng Shu-Chuan Chu, Jia Zhao Qing Feng, and Jeng-Shyang Pan Jia Zhao. "BFGO: Bamboo Forest Growth Optimization Algorithm." 網際網路技術學刊 24, no. 1 (January 2023): 001–10. http://dx.doi.org/10.53106/160792642023012401001.

Full text
Abstract:
Heuristic optimization algorithms are popular methods for solving optimization problems. However, the main disadvantage of this type of algorithm is unstable performance: performance depends on the specific problem and the designer’s experience, so inappropriate algorithms or parameters will lead to poor results. In this paper, a new meta-heuristic algorithm, the bamboo forest growth optimization (BFGO) algorithm, is proposed. The BFGO algorithm follows the growth law of bamboo: first take root, and then grow wildly. The growth characteristics of bamboo forests are integrated into the optimization process of the algorithm, the algorithm is tested on 30 functions of the CEC2017 test set, and it is applied to four classical engineering optimization problems. Compared with well-known heuristic algorithms, the BFGO algorithm has better performance.
42

Soghrati, F., and R. Moeini. "Deriving optimal operation of reservoir proposing improved artificial bee colony algorithm: standard and constrained versions." Journal of Hydroinformatics 22, no. 2 (December 24, 2019): 263–80. http://dx.doi.org/10.2166/hydro.2019.125.

Full text
Abstract:
In this paper, one of the newest meta-heuristic algorithms, the artificial bee colony (ABC) algorithm, is used to solve the single-reservoir operation optimization problem. The simple and hydropower reservoir operation optimization problems of the Dez reservoir, in southern Iran, have been solved here over 60, 240, and 480 monthly operation time periods considering two different decision variables. In addition, to improve the performance of this algorithm, two improved artificial bee colony algorithms have been proposed, and these problems have been solved using them. Furthermore, in order to improve the performance of the proposed algorithms on large-scale problems, two constrained versions of these algorithms have been proposed, in which the problem constraints are explicitly satisfied. Comparison of the results shows that using the proposed algorithm leads to better results with low computational costs in comparison with other available methods such as the genetic algorithm (GA), standard and improved particle swarm optimization (IPSO) algorithms, the honey-bees mating optimization (HBMO) algorithm, the ant colony optimization algorithm (ACOA), and the gravitational search algorithm (GSA). Therefore, the proposed algorithms are capable of solving large reservoir operation optimization problems.
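The employed-bee phase at the core of the standard ABC algorithm can be sketched as follows; this is a generic textbook sketch, not the paper's improved or constrained variants.

```python
import numpy as np

def employed_bee_step(f, foods, i, lb, ub, rng=None):
    """One employed-bee move of the artificial bee colony (ABC) algorithm.

    A single random dimension of food source i is perturbed toward or away
    from a randomly chosen neighbour k; greedy selection keeps the better
    source (here f is minimized).
    """
    rng = rng or np.random.default_rng()
    k = rng.choice([j for j in range(len(foods)) if j != i])
    j = rng.integers(foods.shape[1])
    phi = rng.uniform(-1.0, 1.0)
    candidate = foods[i].copy()
    candidate[j] = np.clip(foods[i, j] + phi * (foods[i, j] - foods[k, j]), lb, ub)
    return candidate if f(candidate) < f(foods[i]) else foods[i].copy()
```

The constrained versions described in the abstract would additionally repair or reject candidates that violate reservoir constraints instead of relying on clipping alone.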
43

Brooks, Stephen P. "Algorithms AS 298: A Hybrid Optimization Algorithm." Applied Statistics 44, no. 4 (1995): 530. http://dx.doi.org/10.2307/2986143.

Full text
44

Chatain, Peter, Rocky Garg, and Lauren Tompkins. "Evolutionary Algorithms for Tracking Algorithm Parameter Optimization." EPJ Web of Conferences 251 (2021): 03071. http://dx.doi.org/10.1051/epjconf/202125103071.

Full text
Abstract:
The reconstruction of charged particle trajectories, known as tracking, is one of the most complex and CPU consuming parts of event processing in high energy particle physics experiments. The most widely used and best performing tracking algorithms require significant geometry-specific tuning of the algorithm parameters to achieve best results. In this paper, we demonstrate the usage of machine learning techniques, particularly evolutionary algorithms, to find high performing configurations for the first step of tracking, called track seeding. We use a track seeding algorithm from the software framework A Common Tracking Software (ACTS). ACTS aims to provide an experiment-independent and framework-independent tracking software designed for modern computing architectures. We show that our optimization algorithms find highly performing configurations in ACTS without hand-tuning. These techniques can be applied to other reconstruction tasks, improving performance and reducing the need for laborious hand-tuning of parameters.
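As a generic illustration of evolutionary parameter tuning of the kind described above (not the actual ACTS/track-seeding optimizer, whose objective and parameter set are specific to the experiment), a minimal truncation-selection sketch:

```python
import numpy as np

def evolve_params(score, bounds, pop_size=20, generations=30, rng=None):
    """Generic (mu + lambda)-style evolutionary search over a parameter vector.

    score: callable returning a figure of merit to MAXIMIZE (e.g. seeding
    efficiency); bounds: (low, high) arrays, one entry per parameter.
    """
    rng = rng or np.random.default_rng()
    low, high = map(np.asarray, bounds)
    pop = rng.uniform(low, high, (pop_size, len(low)))
    for _ in range(generations):
        scores = np.array([score(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # keep the best half
        children = parents + rng.normal(0, 0.1 * (high - low),  # Gaussian mutation
                                        parents.shape)
        pop = np.vstack([parents, np.clip(children, low, high)])
    return pop[np.argmax([score(p) for p in pop])]
```

Because parents survive each generation, the search is elitist: the best configuration found so far is never lost, which matters when each evaluation (a full tracking run) is expensive.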
45

Behera, Mandakini, Archana Sarangi, Debahuti Mishra, Pradeep Kumar Mallick, Jana Shafi, Parvathaneni Naga Srinivasu, and Muhammad Fazal Ijaz. "Automatic Data Clustering by Hybrid Enhanced Firefly and Particle Swarm Optimization Algorithms." Mathematics 10, no. 19 (September 28, 2022): 3532. http://dx.doi.org/10.3390/math10193532.

Full text
Abstract:
Data clustering is a process of arranging similar data in different groups based on certain characteristics and properties, and each group is considered a cluster. In recent decades, several nature-inspired optimization algorithms have proved efficient for many computing problems. The firefly algorithm is one of the nature-inspired metaheuristic optimization algorithms regarded as an optimization tool for many problems in different areas, such as clustering. To overcome the issues of velocity, the firefly algorithm can be integrated with the popular particle swarm optimization algorithm. In this paper, two modified firefly algorithms, namely the crazy firefly algorithm and the variable step size firefly algorithm, are hybridized individually with a standard particle swarm optimization algorithm and applied in the domain of clustering. The results obtained by the two proposed hybrid algorithms have been compared with the existing hybridized firefly particle swarm optimization algorithm utilizing ten UCI Machine Learning Repository datasets and eight Shape sets for performance evaluation. In addition to this, two clustering validity measures, Compact-separated and Davies–Bouldin, have been used for analyzing the efficiency of these algorithms. The experimental results show that the two proposed hybrid algorithms outperform the existing hybrid firefly particle swarm optimization algorithm.
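The standard firefly attraction step that such hybrids build on can be sketched as follows; this is the generic FA update, without the crazy and variable-step-size modifications or the PSO velocity coupling described in the abstract.

```python
import numpy as np

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Move firefly i toward a brighter firefly j (standard FA step).

    Attractiveness beta decays with the squared distance between the two
    fireflies; alpha scales an additional random-walk term.
    """
    rng = rng or np.random.default_rng()
    r2 = np.sum((xi - xj) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    return xi + beta * (xj - xi) + alpha * (rng.random(len(xi)) - 0.5)
```

In a clustering application, each firefly encodes a set of cluster centroids and brightness is derived from a validity measure over the resulting partition.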
46

Dehghani, Mohammad, Zeinab Montazeri, Gaurav Dhiman, O. P. Malik, Ruben Morales-Menendez, Ricardo A. Ramirez-Mendoza, Ali Dehghani, Josep M. Guerrero, and Lizeth Parra-Arroyo. "A Spring Search Algorithm Applied to Engineering Optimization Problems." Applied Sciences 10, no. 18 (September 4, 2020): 6173. http://dx.doi.org/10.3390/app10186173.

Full text
Abstract:
At present, optimization algorithms are used extensively. One particular type of such algorithms includes random-based heuristic population optimization algorithms, which may be created by modeling scientific phenomena, like, for example, physical processes. The present article proposes a novel optimization algorithm based on Hooke’s law, called the spring search algorithm (SSA), which aims to solve single-objective constrained optimization problems. In the SSA, search agents are weights joined through springs, which, as Hooke’s law states, exert a force proportional to their length. The mathematics behind the algorithm are presented in the text. In order to test its functionality, it is executed on 38 established benchmark test functions and weighed against eight other optimization algorithms: a genetic algorithm (GA), a gravitational search algorithm (GSA), a grasshopper optimization algorithm (GOA), particle swarm optimization (PSO), teaching–learning-based optimization (TLBO), a grey wolf optimizer (GWO), a spotted hyena optimizer (SHO), as well as an emperor penguin optimizer (EPO). To test the SSA’s usability, it is employed on five engineering optimization problems. The SSA delivered better-fitting results than the other algorithms on unimodal objective functions, multimodal objective functions, and CEC 2015, in addition to the engineering optimization problems.
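Hooke's law gives a force proportional to spring length, pulling connected agents together; a minimal sketch of the pairwise spring forces follows. The stiffness matrix and the simple force aggregation here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spring_forces(positions, stiffness):
    """Net Hooke's-law force on each agent from pairwise 'springs'.

    positions: (n, d) agent positions; stiffness: symmetric (n, n) spring
    constants. The force on i from j is k_ij * (x_j - x_i), i.e.
    proportional to the spring length and directed toward agent j.
    """
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i != j:
                forces[i] += stiffness[i, j] * (positions[j] - positions[i])
    return forces
```

In an SSA-style algorithm, stiffer springs would connect agents to fitter neighbours, so the net force drags the population toward promising regions.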
47

Schutte, Jaco F., Byung-Il Koh, Jeffrey A. Reinbolt, Raphael T. Haftka, Alan D. George, and Benjamin J. Fregly. "Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization." Journal of Biomechanical Engineering 127, no. 3 (January 31, 2005): 465–74. http://dx.doi.org/10.1115/1.1894388.

Full text
Abstract:
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units.
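For reference, the canonical PSO velocity and position update that such studies evaluate can be sketched as follows; this is standard global-best PSO, not the paper's specific variant, and its scale-independence argument is not reproduced here.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One particle swarm update (velocity then position) for one particle.

    w is inertia; c1 and c2 weight the pull toward the particle's personal
    best (pbest) and the swarm's global best (gbest), respectively.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(len(x)), rng.random(len(x))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Because every term in the velocity is a difference of positions in the same coordinate, rescaling a design variable rescales that coordinate of the trajectory uniformly, which is the intuition behind PSO's insensitivity to design variable scaling.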
48

Vanitha, K., B. Jyothi, R. Seshu Kumar, V. S. Chandrika, Arvind R. Singh, and R. M. Naidoo. "A Quantum-Inspired Optimization Strategy for Optimal Dispatch to Increase Heat and Power Efficiency." Journal of Engineering 2024 (May 30, 2024): 1–14. http://dx.doi.org/10.1155/2024/6665062.

Full text
Abstract:
Combined heat and power (CHP) systems are widely used in industries for their high energy efficiency and reduced carbon emissions. The optimal dispatch of CHP systems involves scheduling the operation of various equipment to minimize the total operational cost while meeting the heat and power demand of the facility. In this research work, a novel quantum-inspired optimization algorithm is proposed for the first time to solve the optimal dispatch problem of CHP systems. The proposed algorithm combines the principles of quantum mechanics with classical optimization algorithms to achieve a better solution. The algorithm uses quantum gates to perform quantum operations on the optimization variables, which allows for the exploration of a larger search space and potentially better solutions than classical algorithms. The proposed algorithm also incorporates a classical optimizer to refine the numerical evaluations acquired from the quantum operations. The performance of the adopted optimization technique was demonstrated by comparing it with various other optimization techniques based on factors such as the speed of convergence, computational time, and the quality of the solution. The comparison is made on two standard CHP systems subjected to various equality and inequality constraints. The simulation results indicate that the quantum-inspired optimization technique surpassed the other algorithms in both solution quality and computational efficiency. The implemented algorithm provides a promising solution to the optimal dispatch problem of CHP systems. Future research can further explore the application of quantum-inspired optimization algorithms in other energy systems and optimize the algorithm’s parameters to improve its performance.
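A building block of many quantum-inspired evolutionary algorithms is the rotation gate applied to Q-bit amplitudes; a minimal sketch of that generic technique follows (it illustrates the gate itself, not the paper's specific dispatch algorithm).

```python
import numpy as np

def rotate_qubit(alpha, beta, theta):
    """Apply a rotation gate to a Q-bit's (alpha, beta) amplitudes.

    In quantum-inspired evolutionary algorithms, theta is chosen each
    generation to steer the probability beta**2 of observing a '1' toward
    the bit values of the best solution found so far; the rotation
    preserves alpha**2 + beta**2 = 1.
    """
    a = np.cos(theta) * alpha - np.sin(theta) * beta
    b = np.sin(theta) * alpha + np.cos(theta) * beta
    return a, b
```

Observing each Q-bit (sampling '1' with probability beta squared) yields a classical candidate, which a classical optimizer can then refine, matching the hybrid structure the abstract describes.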
49

S, Gajawada. "Lord Rama Devotees Algorithm: A New Human-Inspired Metaheuristic Optimization Algorithm." Advances in Robotic Technology 1, no. 1 (October 2, 2023): 1–2. http://dx.doi.org/10.23880/art-16000105.

Full text
Abstract:
Several Human-Inspired Metaheuristic Optimization Algorithms have been proposed in the literature, but the concept of Devotees-Inspired Metaheuristic Optimization Algorithms is not yet explored. In this article, the Lord Rama Devotees Algorithm (LRDA) is proposed, which is a new Devotees-Inspired Metaheuristic Optimization Algorithm.
50

Almotairi, Sultan, Elsayed Badr, Mustafa Abdul Salam, and Alshimaa Dawood. "Three Chaotic Strategies for Enhancing the Self-Adaptive Harris Hawk Optimization Algorithm for Global Optimization." Mathematics 11, no. 19 (October 6, 2023): 4181. http://dx.doi.org/10.3390/math11194181.

Full text
Abstract:
Harris Hawk Optimization (HHO) is a well-known nature-inspired metaheuristic model inspired by the distinctive foraging strategy and cooperative behavior of Harris Hawks. As with numerous other algorithms, HHO is susceptible to getting stuck in local optima and has a sluggish convergence rate. Several techniques have been proposed in the literature to improve the performance of metaheuristic algorithms (MAs) and to tackle their limitations. Chaos optimization strategies have been proposed for many years to enhance MAs. There are four distinct categories of Chaos strategies, including chaotic mapped initialization, randomness, iterations, and controlled parameters. This paper introduces SHHOIRC, a novel hybrid algorithm designed to enhance the efficiency of HHO. Self-adaptive Harris Hawk Optimization using three chaotic optimization methods (SHHOIRC) is the proposed algorithm. On 16 well-known benchmark functions, the proposed hybrid algorithm, authentic HHO, and five HHO variants are evaluated. The computational results and statistical analysis demonstrate that SHHOIRC exhibits notable similarities to other previously published algorithms. The proposed algorithm outperformed the other algorithms by 81.25%, compared to 18.75% for the prior algorithms, by obtaining the best average solutions for 13 benchmark functions. Furthermore, the proposed algorithm is tested on a real-life problem, which is the maximum coverage problem of Wireless Sensor Networks (WSNs), and compared with pure HHO, and two well-known algorithms, Grey Wolf Optimization (GWO) and Whale Optimization Algorithm (WOA). For the maximum coverage experiments, the proposed algorithm demonstrated superior performance, surpassing other algorithms by obtaining the best coverage rates of 95.4375% and 97.125% for experiments 1 and 2, respectively.
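Chaotic mapped initialization, one of the four chaos strategy categories mentioned above, is often implemented with the logistic map; a minimal sketch follows (the choice of map and seed value is an illustrative assumption, not the paper's exact configuration).

```python
import numpy as np

def logistic_chaotic_init(n_agents, dim, lb, ub, x0=0.7):
    """Initialize a population with the logistic chaotic map x' = 4x(1-x).

    The chaotic sequence (seeded by x0 in (0, 1), avoiding fixed points
    such as 0.25, 0.5, 0.75) is scaled into [lb, ub]; such sequences are
    used to spread initial agents over the search space.
    """
    seq = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)   # logistic map at full chaos (r = 4)
            seq[i, j] = x
    return lb + (ub - lb) * seq
```

The other chaos categories in the abstract (randomness, iterations, controlled parameters) substitute such sequences for the random numbers or control parameters inside the optimizer's update equations instead of the initial population.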