To see the other types of publications on this topic, follow the link: Particle Swarm algorithms.

Journal articles on the topic 'Particle Swarm algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Particle Swarm algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kong, Fanrong, Jianhui Jiang, and Yan Huang. "An Adaptive Multi-Swarm Competition Particle Swarm Optimizer for Large-Scale Optimization." Mathematics 7, no. 6 (June 6, 2019): 521. http://dx.doi.org/10.3390/math7060521.

Full text
Abstract:
As a powerful tool in optimization, particle swarm optimizers have been widely applied to many different optimization areas and have drawn much attention. However, on large-scale optimization problems these algorithms struggle to reach satisfactory results because they lack the ability to maintain diversity. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed that adaptively divides the swarm into several sub-swarms and employs a competition mechanism to select exemplars. In this way, on the one hand, the diversity of exemplars increases, which helps the swarm preserve its exploitation ability. On the other hand, the number of sub-swarms adaptively changes from a large value to a small value, which helps the algorithm strike a suitable balance between exploitation and exploration. We validated the proposed algorithm against several peer algorithms on the CEC 2013 large-scale optimization benchmark suite. The experimental results demonstrate that the proposed algorithm is effective and competitive for large-scale optimization problems.
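For context, here is a minimal sketch of the canonical global-best PSO update that multi-swarm variants such as this one build on; the sphere objective, the bounds, and all parameter values below are illustrative assumptions rather than settings from the paper.

```python
import numpy as np

def pso(fitness, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for minimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))        # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)           # personal best values
    gbest = pbest[np.argmin(pbest_f)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda z: float(np.sum(z ** 2)))      # sphere function as a demo
print(best_f)
```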
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Hong Ying. "Utilize Improved Particle Swarm to Predict Traffic Flow." Advanced Materials Research 756-759 (September 2013): 3744–48. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.3744.

Full text
Abstract:
This paper presents an improved particle swarm optimization algorithm that introduces a crossover operation on particle positions and perturbs particle velocities, enabling inert particles to escape local optimum points and enhancing the PSO algorithm's ability to break away from local extrema. The improved algorithm is used to train RBF neural network models that predict short-term traffic flow for regional intelligent traffic control. Simulation and test results show that the improved algorithm can effectively forecast short-term traffic flow for regional intelligent transportation control, that its forecasting performance is good, and that it can be effectively applied to actual traffic control.
APA, Harvard, Vancouver, ISO, and other styles
3

Lenin, K. "CROWDING DISTANCE BASED PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM." International Journal of Research -GRANTHAALAYAH 6, no. 6 (June 30, 2018): 226–37. http://dx.doi.org/10.29121/granthaalayah.v6.i6.2018.1369.

Full text
Abstract:
In this paper, a Crowding Distance based Particle Swarm Optimization (CDPSO) algorithm is proposed to solve the optimal reactive power dispatch problem. Particle Swarm Optimization (PSO) is a swarm-intelligence-based exploration and optimization algorithm used to solve global optimization problems. In PSO, the population is referred to as a swarm and the individuals are called particles. Like other evolutionary algorithms, PSO performs its search using a population of individuals that are updated from iteration to iteration. The crowding distance is introduced as an index of the distance between a particle and its adjacent particles, and it reflects the congestion degree of the non-dominated solutions: the larger the crowding distance, the sparser and more uniform the distribution. In the feasible solution space, we uniformly and randomly initialize the particle swarms and select the non-dominated particles to form the elite set. After that, using congestion-degree selection (which makes the particle distribution sparser) and dynamic ε-infeasibility handling of the constraints, we prune the non-dominated particles in the elite set so that the objectives can be approximated. The proposed CDPSO algorithm has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the improved performance of the proposed algorithm in reducing real power loss and enhancing the static voltage stability margin.
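A minimal sketch of the crowding-distance index referred to above, computed in the usual NSGA-II manner over a set of objective vectors; the two-objective example data are an illustrative assumption.

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance for a set of objective vectors (one row per solution)."""
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf          # boundary solutions are kept
        span = objs[order[-1], k] - objs[order[0], k]
        if span == 0.0:
            continue
        # each interior solution accumulates the normalized gap between its neighbours
        dist[order[1:-1]] += (objs[order[2:], k] - objs[order[:-2], k]) / span
    return dist

objs = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 4.0], [6.0, 2.0], [9.0, 1.0]])
print(crowding_distance(objs))    # larger value = sparser, less congested region
```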
APA, Harvard, Vancouver, ISO, and other styles
4

Baktybekov, K. "PARTICLE SWARM OPTIMIZATION WITH INDIVIDUALLY BIASED PARTICLES FOR RELIABLE AND ROBUST MAXIMUM POWER POINT TRACKING UNDER PARTIAL SHADING CONDITIONS." Eurasian Physical Technical Journal 17, no. 2 (December 24, 2020): 128–37. http://dx.doi.org/10.31489/2020no2/128-137.

Full text
Abstract:
Efficient power control techniques are an integral part of photovoltaic system design. One means of managing power delivery is regulating the duty cycle of the DC-DC converter with various algorithms so that the system operates only at the maximum power point. The search has to be done as fast as possible to minimize power loss, especially under dynamically changing irradiance. The challenge of the task is the nonlinear behavior of the PV system under partial shading conditions (PSC). Depending on the size and structure of the photovoltaic panels, PSC creates an immense number of possible P-V curves with numerous local maxima, requiring an intelligent algorithm for determining the optimal operating point. Existing benchmark maximum power point tracking algorithms cannot handle multiple peaks, and in this paper we offer an adaptation of particle swarm optimization for this specific task.
APA, Harvard, Vancouver, ISO, and other styles
5

Weikert, Dominik, Sebastian Mai, and Sanaz Mostaghim. "Particle Swarm Contour Search Algorithm." Entropy 22, no. 4 (April 2, 2020): 407. http://dx.doi.org/10.3390/e22040407.

Full text
Abstract:
In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS)—a Particle Swarm Optimisation inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found. However, for real-world applications this would require complete knowledge of the search space, which may not always be feasible or possible. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse alongside it using their known information about positions inside and outside of the object. Our experiments show that the proposed PSCS algorithm can deliver results comparable to the state-of-the-art.
APA, Harvard, Vancouver, ISO, and other styles
6

Yao, Wenting, and Yongjun Ding. "Smart City Landscape Design Based on Improved Particle Swarm Optimization Algorithm." Complexity 2020 (December 1, 2020): 1–10. http://dx.doi.org/10.1155/2020/6693411.

Full text
Abstract:
Aiming at the shortcoming of standard particle swarm optimization (PSO) algorithms, which easily fall into local optima, this paper proposes an improved quantum-behaved particle swarm optimization algorithm (LTQPSO). To address the premature convergence of the particle swarm algorithm, the evolution speed of individual particles and the population dispersion are used to dynamically adjust the inertia weights, making them adaptive and controllable and thereby avoiding premature convergence. At the same time, a natural selection method is introduced into the traditional position update formula to maintain population diversity, strengthen the global search ability of the LTQPSO algorithm, and accelerate its convergence. The improved LTQPSO algorithm is applied to landscape trail path planning, and the research results prove the effectiveness and feasibility of the algorithm.
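The abstract does not give the exact update rule, so the following is only a generic illustration of adjusting the inertia weight from a particle's evolution speed and the swarm's dispersion, in the spirit described; the formula and constants are assumptions.

```python
def adaptive_w(w_max, w_min, improve_rate, dispersion):
    """Illustrative inertia-weight rule: a particle that is still improving fast gets a
    smaller weight (fine-tune locally), while low swarm dispersion -- an early sign of
    premature convergence -- pushes the weight back up to encourage re-exploration.
    Both inputs are assumed to be normalized to [0, 1]."""
    w = w_max - (w_max - w_min) * improve_rate          # fast improvement -> exploit
    w += (w_max - w) * (1.0 - dispersion) * 0.5         # collapsed swarm -> explore more
    return w

# stalled particle (0.1) in an almost collapsed swarm (0.2): weight stays near w_max
print(adaptive_w(0.9, 0.4, improve_rate=0.1, dispersion=0.2))
```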
APA, Harvard, Vancouver, ISO, and other styles
7

Yan, Zheping, Chao Deng, Benyin Li, and Jiajia Zhou. "Novel Particle Swarm Optimization and Its Application in Calibrating the Underwater Transponder Coordinates." Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/672412.

Full text
Abstract:
A novel improved particle swarm algorithm named competition particle swarm optimization (CPSO) is proposed to calibrate underwater transponder coordinates. To improve the performance of the algorithm, the TVAC algorithm is introduced into CPSO to produce an extension, competition particle swarm optimization (ECPSO). The proposed method is tested on a set of 10 standard optimization benchmark problems and the results are compared with those obtained through existing PSO algorithms: basic particle swarm optimization (BPSO), linear decreasing inertia weight particle swarm optimization (LWPSO), exponential inertia weight particle swarm optimization (EPSO), and time-varying acceleration coefficient (TVAC) PSO. The results demonstrate that CPSO and ECPSO achieve faster search speed, better accuracy, and better stability. The search performance of ECPSO on multimodal functions is superior to that of CPSO. Finally, calibration of the underwater transponder coordinates is presented using the particle swarm algorithms, and the novel improved particle swarm algorithm shows better performance than the other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Fan, Shu-Kai S., and Chih-Hung Jen. "An Enhanced Partial Search to Particle Swarm Optimization for Unconstrained Optimization." Mathematics 7, no. 4 (April 17, 2019): 357. http://dx.doi.org/10.3390/math7040357.

Full text
Abstract:
Particle swarm optimization (PSO) is a population-based optimization technique that has been applied extensively to a wide range of engineering problems. This paper proposes a variation of the original PSO algorithm for unconstrained optimization, dubbed the enhanced partial search particle swarm optimizer (EPS-PSO), which uses the idea of cooperative multiple swarms in an attempt to improve the convergence and efficiency of the original PSO algorithm. The cooperative searching strategy is specifically devised to prevent the particles from being trapped in local optimal solutions and tries to locate the global optimal solution efficiently. The effectiveness of the proposed algorithm is verified through a simulation study in which the EPS-PSO algorithm is compared to a variety of existing “cooperative” PSO algorithms on well-known benchmark functions.
APA, Harvard, Vancouver, ISO, and other styles
9

Lenin, K. "TAILORED PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER PROBLEM." International Journal of Research -GRANTHAALAYAH 5, no. 12 (June 30, 2020): 246–55. http://dx.doi.org/10.29121/granthaalayah.v5.i12.2017.500.

Full text
Abstract:
This paper presents a Tailored Particle Swarm Optimization (TPSO) algorithm for solving the optimal reactive power problem. A particle swarm optimization algorithm based on membrane computing is proposed to solve the problem: TPSO is designed with the framework and rules of cell-like P systems, combined with particle swarm optimization with neighbourhood search. In order to evaluate the efficiency of the proposed algorithm, it has been tested on the standard IEEE 118-bus and a practical 191-bus test system and compared to other specified algorithms. Simulation results show that the TPSO algorithm is superior to the other algorithms in reducing real power loss.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhao, Jing Ying, Hai Guo, and Xiao Niu Li. "Research on Algorithm Optimization of Hidden Units Data Centre of RBF Neural Network." Advanced Materials Research 831 (December 2013): 486–89. http://dx.doi.org/10.4028/www.scientific.net/amr.831.486.

Full text
Abstract:
Common algorithms for selecting the hidden-unit data centers of RBF neural networks are first discussed in this paper, i.e., the k-means algorithm, subtractive clustering, and orthogonal least squares. A hybrid algorithm combining k-means with particle swarm optimization is then put forward. The hybrid algorithm uses the particle positions of the particle swarm optimization algorithm to counteract the local-clustering defects of the k-means algorithm and optimizes the best fitness of the k-means particle swarm, with the aim of making the final optimal fitness better satisfy the requirements.
APA, Harvard, Vancouver, ISO, and other styles
11

Ababneh, Jehad. "Greedy particle swarm and biogeography-based optimization algorithm." International Journal of Intelligent Computing and Cybernetics 8, no. 1 (March 9, 2015): 28–49. http://dx.doi.org/10.1108/ijicc-01-2014-0003.

Full text
Abstract:
Purpose – The purpose of this paper is to propose an algorithm that combines particle swarm optimization (PSO) with the biogeography-based optimization (BBO) algorithm. Design/methodology/approach – The BBO and PSO algorithms are used jointly in order to combine the advantages of both algorithms. The efficiency of the proposed algorithm is tested using selected standard benchmark functions. The performance of the proposed algorithm is compared with that of the differential evolution (DE), genetic algorithm (GA), PSO, BBO, blended BBO, and hybrid BBO-DE algorithms. Findings – Experimental results indicate that the proposed algorithm outperforms the BBO, PSO, DE, GA, and blended BBO algorithms and has performance comparable to that of the hybrid BBO-DE algorithm. However, the proposed algorithm is simpler than the BBO-DE algorithm since the PSO does not have complex operations such as the mutation and crossover used in the DE algorithm. Originality/value – The proposed algorithm is a generic algorithm that can be used to efficiently solve optimization problems similar to those solved using other popular evolutionary algorithms, but with better performance.
APA, Harvard, Vancouver, ISO, and other styles
12

Hacibeyoglu, Mehmet, and Mohammed H. Ibrahim. "A Novel Multimean Particle Swarm Optimization Algorithm for Nonlinear Continuous Optimization: Application to Feed-Forward Neural Network Training." Scientific Programming 2018 (July 4, 2018): 1–9. http://dx.doi.org/10.1155/2018/1435810.

Full text
Abstract:
Multilayer feed-forward artificial neural networks are one of the most frequently used data mining methods for classification, recognition, and prediction problems. The classification accuracy of a multilayer feed-forward artificial neural network depends on how well it is trained: a well-trained network can predict the class value of an unseen sample correctly if provided with the optimum weights. Determining the optimum weights is a nonlinear continuous optimization problem that can be solved with metaheuristic algorithms. In this paper, we propose a novel multimean particle swarm optimization algorithm for training multilayer feed-forward artificial neural networks. The proposed multimean particle swarm optimization algorithm searches the solution space more efficiently with multiple swarms and finds better solutions than particle swarm optimization. To evaluate the performance of the proposed multimean particle swarm optimization algorithm, experiments are conducted on ten benchmark datasets from the UCI repository and the obtained results are compared to the results of particle swarm optimization and other previous research in the literature. The analysis of the results demonstrates that the proposed multimean particle swarm optimization algorithm performs well and can be adopted as a novel algorithm for training multilayer feed-forward artificial neural networks.
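The underlying encoding, flattening all network weights into one particle and using the training error as the fitness, can be sketched as follows; the 2-5-1 architecture, the synthetic data, and the loss are illustrative assumptions, not the paper's setup.

```python
import numpy as np

N_IN, N_HID = 2, 5
DIM = N_IN * N_HID + N_HID + N_HID + 1               # all weights and biases, flattened

def forward(w, X):
    """2-5-1 feed-forward network with a tanh hidden layer and sigmoid output."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def make_loss(X, y):
    """Training error as a function of a flat weight vector -- the PSO fitness."""
    return lambda w: float(np.mean((forward(w, X) - y) ** 2))

# tiny synthetic data set (illustrative only); any PSO, e.g. the sketch under
# entry 1 above, can now minimize make_loss(X, y) over DIM-dimensional particles
rng = np.random.default_rng(0)
X = rng.standard_normal((80, N_IN))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
print(make_loss(X, y)(rng.uniform(-1, 1, DIM)))       # loss of one random particle
```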
APA, Harvard, Vancouver, ISO, and other styles
13

Mengxia, Li, Liao Ruiquan, and Dong Yong. "The Particle Swarm Optimization Algorithm with Adaptive Chaos Perturbation." International Journal of Computers Communications & Control 11, no. 6 (October 17, 2016): 804. http://dx.doi.org/10.15837/ijccc.2016.6.2525.

Full text
Abstract:
Premature convergence in particle swarm optimization shows two characteristics: particle velocities approach 0 and the swarm congregates. To address the first, this paper borrows the annealing function of the simulated annealing algorithm and adaptively and dynamically adjusts the inertia weights according to the velocity information of the particles, preventing velocities from approaching 0 too early. To address the second, it exploits the good uniformity of the Anderson chaotic mapping and applies chaos perturbation to part of the particles, based on the variance of the population's fitness, to avoid untimely aggregation of the swarm. Numerical simulations on five test functions are performed and the results are compared with several swarm intelligence heuristic algorithms. The results show that the modified algorithm maintains population diversity well in the middle stage of the iterative process and improves both the mean best value and the search success rate.
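A rough sketch of the perturbation trigger described above; since the Anderson chaotic map is not defined in the abstract, the classic logistic map is substituted here purely for illustration, and the threshold, fraction, and iteration count are assumptions.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """Stand-in chaotic map (the paper itself uses the Anderson mapping)."""
    return mu * z * (1.0 - z)

def chaos_perturb(x, fitness_values, bounds, frac=0.2, var_threshold=1e-3, seed=0):
    """Re-seed a fraction of the particles with chaotic positions whenever the variance
    of the population's fitness drops below a threshold (a sign of aggregation)."""
    rng = np.random.default_rng(seed)
    if np.var(fitness_values) >= var_threshold:
        return x                                        # swarm is still diverse
    lo, hi = bounds
    n = max(1, int(frac * len(x)))
    idx = rng.choice(len(x), size=n, replace=False)
    z = rng.random(x[idx].shape)                        # chaotic state in (0, 1)
    for _ in range(20):                                 # iterate the map a few times
        z = logistic_map(z)
    x = x.copy()
    x[idx] = lo + z * (hi - lo)                         # remap chaotic values into bounds
    return x
```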
APA, Harvard, Vancouver, ISO, and other styles
14

Yamanaka, Yoshikazu, and Katsutoshi Yoshida. "Simple gravitational particle swarm algorithm for multimodal optimization problems." PLOS ONE 16, no. 3 (March 18, 2021): e0248470. http://dx.doi.org/10.1371/journal.pone.0248470.

Full text
Abstract:
In real world situations, decision makers prefer to have multiple optimal solutions before making a final decision. Aiming to help the decision makers even if they are non-experts in optimization algorithms, this study proposes a new and simple multimodal optimization (MMO) algorithm called the gravitational particle swarm algorithm (GPSA). Our GPSA is developed based on the concept of “particle clustering in the absence of clustering procedures”. Specifically, it simply replaces the global feedback term in classical particle swarm optimization (PSO) with an inverse-square gravitational force term between the particles. The gravitational force mutually attracts and repels the particles, enabling them to autonomously and dynamically generate sub-swarms in the absence of algorithmic clustering procedures. Most of the sub-swarms gather at the nearby global optima, but a small number of particles reach the distant optima. The niching behavior of our GPSA was tested first on simple MMO problems, and then on twenty MMO benchmark functions. The performance indices (peak ratio and success rate) of our GPSA were compared with those of existing niching PSOs (ring-topology PSO and fitness Euclidean-distance ratio PSO). The basic performance of our GPSA was comparable to that of the existing methods. Furthermore, an improved GPSA with a dynamic parameter delivered significantly superior results to the existing methods on at least 60% of the tested benchmark functions.
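The velocity rule described here can be sketched roughly as follows; the force constant, the softening term, and keeping a personal-best term alongside the gravitational term are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def gravitational_velocity_update(x, v, pbest, w=0.7, c1=1.5, g=0.05, eps=1e-6, seed=0):
    """PSO-style velocity update in which the usual global-best term is replaced by a
    pairwise inverse-square 'gravitational' attraction between the particles."""
    rng = np.random.default_rng(seed)
    diff = x[None, :, :] - x[:, None, :]                # diff[i, j] = x_j - x_i
    dist = np.linalg.norm(diff, axis=2) + eps           # pairwise distances (softened)
    force = (diff / dist[:, :, None] ** 3).sum(axis=1)  # sum_j (x_j - x_i) / |x_j - x_i|^3
    r1 = rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + g * force
```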
APA, Harvard, Vancouver, ISO, and other styles
15

Fernandes, Carlos M., Nuno Fachada, Juan-Julián Merelo, and Agostinho C. Rosa. "Steady state particle swarm." PeerJ Computer Science 5 (August 26, 2019): e202. http://dx.doi.org/10.7717/peerj-cs.202.

Full text
Abstract:
This paper investigates the performance and scalability of a new update strategy for the particle swarm optimization (PSO) algorithm. The strategy is inspired by the Bak–Sneppen model of co-evolution between interacting species, which is basically a network of fitness values (representing species) that change over time according to a simple rule: the least fit species and its neighbors are iteratively replaced with random values. Following these guidelines, a steady state and dynamic update strategy for PSO algorithms is proposed: only the least fit particle and its neighbors are updated and evaluated in each time-step; the remaining particles maintain the same position and fitness, unless they meet the update criterion. The steady state PSO was tested on a set of unimodal, multimodal, noisy and rotated benchmark functions, significantly improving the quality of results and convergence speed of the standard PSOs and more sophisticated PSOs with dynamic parameters and neighborhood. A sensitivity analysis of the parameters confirms the performance enhancement with different parameter settings and scalability tests show that the algorithm behavior is consistent throughout a substantial range of solution vector dimensions.
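A minimal sketch of the steady-state update rule described above, assuming a ring topology and a minimization problem; parameter values and data layout are illustrative.

```python
import numpy as np

def steady_state_step(x, v, f, pbest, pbest_f, fitness,
                      w=0.7, c1=1.5, c2=1.5, rng=None):
    """One steady-state step: only the least-fit particle and its two ring neighbours
    are moved and re-evaluated; every other particle keeps its position and fitness."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = x.shape
    worst = int(np.argmax(f))                           # least-fit particle (minimization)
    gbest = pbest[int(np.argmin(pbest_f))].copy()       # current global best position
    for i in ((worst - 1) % n, worst, (worst + 1) % n):
        r1, r2 = rng.random(dim), rng.random(dim)
        v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
        x[i] = x[i] + v[i]
        f[i] = fitness(x[i])
        if f[i] < pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), f[i]
    return x, v, f, pbest, pbest_f
```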
APA, Harvard, Vancouver, ISO, and other styles
16

Qian, Yu Xia, K. Dong, and X. N. Zhang. "Two New Parallel Algorithms Based on QPSO." Applied Mechanics and Materials 743 (March 2015): 325–32. http://dx.doi.org/10.4028/www.scientific.net/amm.743.325.

Full text
Abstract:
Based on an analysis of the classical particle swarm optimization (PSO) algorithm, we adopt Sun's quantum-behaved particle swarm optimization (QPSO). By analyzing the natural parallelism of the algorithm and exploiting the high-speed parallelism of parallel computers, we put forward a new parallel quantum-behaved particle swarm optimization (PQPSO) algorithm. On this basis, the island model is introduced and, in contrast to fine-grained variants of quantum-behaved particle swarm optimization, two kinds of coarse-grained parallel QPSO algorithms based on multiple populations are proposed. Finally, numerical tests with benchmark functions are carried out in an MPI parallel-machine environment and compared with other optimization algorithms. The results show that exchanging data based on the global optimal value is superior to exchanging data based on local optimal values, but the comparison of running time is exactly the opposite.
APA, Harvard, Vancouver, ISO, and other styles
17

Schutte, Jaco F., Byung-Il Koh, Jeffrey A. Reinbolt, Raphael T. Haftka, Alan D. George, and Benjamin J. Fregly. "Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization." Journal of Biomechanical Engineering 127, no. 3 (January 31, 2005): 465–74. http://dx.doi.org/10.1115/1.1894388.

Full text
Abstract:
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units.
APA, Harvard, Vancouver, ISO, and other styles
18

Nie, Shu Zhi, Yan Hua Zhong, and Ming Hu. "Short-Time Traffic Flow Prediction Method Based on Universal Organic Computing Architecture." Advanced Materials Research 756-759 (September 2013): 2785–89. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2785.

Full text
Abstract:
This paper designs a DNA-based genetic algorithm under the universal architecture of organic computing, combined with a particle swarm optimization algorithm. A crossover operation is introduced for the particle positions and the particle velocities are perturbed, so that inert particles can escape local optimum points, enhancing the PSO algorithm's ability to get rid of local extrema. The improved algorithms are used to train RBF neural network models that predict short-term traffic flow for regional intelligent traffic control. Simulation and error analysis of the experimental results show that the designed algorithms can accurately forecast short-term traffic flow for regional intelligent transportation control, that the forecasting performance is good, and that they can be effectively applied to actual traffic engineering.
APA, Harvard, Vancouver, ISO, and other styles
19

Huang, Xiabao, Zailin Guan, and Lixi Yang. "An effective hybrid algorithm for multi-objective flexible job-shop scheduling problem." Advances in Mechanical Engineering 10, no. 9 (September 2018): 168781401880144. http://dx.doi.org/10.1177/1687814018801442.

Full text
Abstract:
The genetic algorithm is one of the primary algorithms extensively used to address the multi-objective flexible job-shop scheduling problem. However, the genetic algorithm converges at a relatively slow speed. By hybridizing the genetic algorithm with particle swarm optimization, this article proposes a teaching-and-learning-based hybrid genetic-particle swarm optimization algorithm to address the multi-objective flexible job-shop scheduling problem. The proposed algorithm comprises three modules: genetic algorithm, bi-memory learning, and particle swarm optimization. A learning mechanism is incorporated into the genetic algorithm so that, during the process of evolution, the offspring in the genetic algorithm can learn the characteristics of elite chromosomes from the bi-memory learning module. For solving the multi-objective flexible job-shop scheduling problem, this study proposes a discrete particle swarm optimization algorithm. The population is partitioned into two subpopulations for the genetic algorithm module and the particle swarm optimization module. These two algorithms simultaneously search for solutions in their own subpopulations and exchange information between the subpopulations, such that the two algorithms complement each other's advantages. The proposed algorithm is evaluated on a number of instances, and experimental results demonstrate that it is an effective method for the multi-objective flexible job-shop scheduling problem.
APA, Harvard, Vancouver, ISO, and other styles
20

CHEN, LEI, and HAI-LIN LIU. "A REGION DECOMPOSITION-BASED MULTI-OBJECTIVE PARTICLE SWARM OPTIMIZATION ALGORITHM." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 08 (December 2014): 1459009. http://dx.doi.org/10.1142/s0218001414590095.

Full text
Abstract:
In this paper, a novel multi-objective particle swarm optimization algorithm based on MOEA/D-M2M decomposition strategy (MOPSO-M2M) is proposed. MOPSO-M2M can decompose the objective space into a number of subregions and then search all the subregions using respective sub-swarms simultaneously. The M2M decomposition strategy has two very desirable properties with regard to MOPSO. First, it facilitates the determination of the global best (gbest) for each sub-swarm. A new global attraction strategy based on M2M decomposition framework is proposed to guide the flight of particles by setting an archive set which is used to store the historical best solutions found by the swarm. When we determine the gbest for each particle, the archive set is decomposed and associated with each sub-swarm. Therefore, every sub-swarm has its own archive subset and the gbest of the particle in a sub-swarm is selected randomly in its archive subset. The new global attraction strategy yields a more reasonable gbest selection mechanism, which can be more effective to guide the particles to the Pareto Front (PF). This strategy can ensure that each sub-swarm searches its own subregion so as to improve the search efficiency. Second, it has a good ability to maintain the diversity of the population which is desirable in multi-objective optimization. Additionally, MOPSO-M2M applies the Tchebycheff approach to determine the personal best position (pbest) and no additional clustering or niching technique is needed in this algorithm. In order to demonstrate the performance of the proposed algorithm, we compare it with two other algorithms: MOPSO and DMS-MO-PSO. The experimental results indicate the validity of this method.
APA, Harvard, Vancouver, ISO, and other styles
21

Song, Ming Li. "A Study of Single-Objective Particle Swarm Optimization and Multi-Objective Particle Swarm Optimization." Applied Mechanics and Materials 543-547 (March 2014): 1635–38. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.1635.

Full text
Abstract:
The complexity of the optimization problems encountered in various modeling algorithms makes the selection of a proper optimization vehicle crucial. Developments in the particle swarm algorithm since its origin, along with its benefits and drawbacks, are discussed, as particle swarm optimization provides a simple realization mechanism and high convergence speed. We discuss several developments for both the single-objective and the multi-objective case.
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Haiyan, and Zhiyu Zhou. "A Heuristic Elastic Particle Swarm Optimization Algorithm for Robot Path Planning." Information 10, no. 3 (March 6, 2019): 99. http://dx.doi.org/10.3390/info10030099.

Full text
Abstract:
Path planning, as the core of navigation control for mobile robots, has become the focus of research in the field of mobile robots. Various path planning algorithms have been recently proposed. In this paper, in view of the advantages and disadvantages of different path planning algorithms, a heuristic elastic particle swarm algorithm is proposed. Using the path planned by the A* algorithm in a large-scale grid for global guidance, the elastic particle swarm optimization algorithm uses a shrinking operation to determine the globally optimal path formed by locally optimal nodes so that the particles can converge to it rapidly. Furthermore, in the iterative process, the diversity of the particles is ensured by a rebound operation. Computer simulation and real experimental results show that the proposed algorithm not only overcomes the shortcomings of the A* algorithm, which cannot yield the shortest path, but also avoids the problem of failure to converge to the globally optimal path, owing to a lack of heuristic information. Additionally, the proposed algorithm maintains the simplicity and high efficiency of both the algorithms.
APA, Harvard, Vancouver, ISO, and other styles
23

Ma, Zi Rui. "Particle Swarm Optimization Based on Multiobjective Optimization." Applied Mechanics and Materials 263-266 (December 2012): 2146–49. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2146.

Full text
Abstract:
PSO treats each individual in the population as a particle without volume or mass in the search space. These particles fly through the search space at a certain velocity, which is dynamically adjusted according to the particle's own flight experience and the flight experience of the entire population. We describe standard PSO, multi-objective optimization, and MOPSO. The main focus of this work is several PSO algorithms, which are introduced in detail and studied. The MOPSO algorithm introduces an adaptive grid mechanism for the external population and applies mutation not only to groups of particles but also to the value range of the particles, with the mutation scale kept in proportion to the number of evolutionary generations of the population.
APA, Harvard, Vancouver, ISO, and other styles
24

Hudaib, Amjad A., and Ahmad Kamel AL Hwaitat. "Movement Particle Swarm Optimization Algorithm." Modern Applied Science 12, no. 1 (December 31, 2017): 148. http://dx.doi.org/10.5539/mas.v12n1p148.

Full text
Abstract:
Particle Swarm Optimization (PSO) is a well-known meta-heuristic that has been used in many applications for solving optimization problems, but it has some problems such as getting stuck in local minima. This paper proposes an optimization algorithm called Movement Particle Swarm Optimization (MPSO) that enhances the behavior of PSO by using a random movement function to search for more points in the search space. The meta-heuristic has been evaluated on 23 benchmark functions and compared with state-of-the-art algorithms: Multi-Verse Optimizer (MFO), Sine Cosine Algorithm (SCA), Grey Wolf Optimizer (GWO), and Particle Swarm Optimization (PSO). The results show that the proposed algorithm improves on PSO over the tested benchmark functions.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Guan Yu, Xiao Ming Wang, Rui Guo, and Guo Qiang Wang. "An Improved Particle Swarm Optimization Algorithm." Applied Mechanics and Materials 394 (September 2013): 505–8. http://dx.doi.org/10.4028/www.scientific.net/amm.394.505.

Full text
Abstract:
This paper presents an improved particle swarm optimization (PSO) algorithm based on the genetic algorithm (GA) and the Tabu algorithm. The improved PSO algorithm adds genetic, mutation, and tabu-search characteristics to the standard PSO to help it overcome the weakness of falling into local optima and to avoid repeatedly searching the same optimal path. By contrasting the improved and standard PSO algorithms on classic test functions, the improved PSO is found to have better global search characteristics.
APA, Harvard, Vancouver, ISO, and other styles
26

Ajeil, Fatin Hassan, Ibraheem Kasim Ibraheem, Ahmad Taher Azar, and Amjad J. Humaidi. "Autonomous navigation and obstacle avoidance of an omnidirectional mobile robot using swarm optimization and sensors deployment." International Journal of Advanced Robotic Systems 17, no. 3 (May 1, 2020): 172988142092949. http://dx.doi.org/10.1177/1729881420929498.

Full text
Abstract:
The present work deals with the design of intelligent path planning algorithms for a mobile robot in static and dynamic environments based on swarm intelligence optimization. Two modifications are suggested to improve the searching process of the standard bat algorithm, resulting in two novel algorithms. The first algorithm is a Modified Frequency Bat algorithm, and the second is a hybridization of Particle Swarm Optimization with the Modified Frequency Bat algorithm, namely, the Hybrid Particle Swarm Optimization-Modified Frequency Bat algorithm. Both the Modified Frequency Bat and Hybrid Particle Swarm Optimization-Modified Frequency Bat algorithms have been integrated with a proposed technique for obstacle detection and avoidance and are applied to different static and dynamic environments using free-space modeling. Moreover, a new procedure is proposed to convert the infeasible paths suggested by the proposed swarm-inspired optimization-based path planning algorithm into feasible ones. The simulations are run in the MATLAB environment to validate the suggested algorithms. They show that the proposed path planning algorithms achieve superior performance by finding the shortest and smoothest collision-free path under various static and dynamic scenarios.
APA, Harvard, Vancouver, ISO, and other styles
27

Spears, William M., Derek T. Green, and Diana F. Spears. "Biases in Particle Swarm Optimization." International Journal of Swarm Intelligence Research 1, no. 2 (April 2010): 34–57. http://dx.doi.org/10.4018/jsir.2010040103.

Full text
Abstract:
The most common versions of particle swarm optimization (PSO) algorithms are rotationally variant. It has also been pointed out that PSO algorithms can concentrate particles along paths parallel to the coordinate axes. In this paper, the authors explicitly connect these two observations by showing that the rotational variance is related to the concentration along lines parallel to the coordinate axes. Based on this explicit connection, the authors create fitness functions that are easy or hard for PSO to solve, depending on the rotation of the function.
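The easy/hard construction described here can be illustrated by wrapping a separable test function in a random orthogonal transformation, which removes the axis alignment that rotationally variant PSO updates can exploit; the ellipsoid function and the dimension below are assumptions, not the authors' exact test functions.

```python
import numpy as np

def random_orthogonal(dim, seed=0):
    """Random orthogonal matrix from the QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))                      # sign fix for a proper sampling

def make_rotated(f, dim, seed=0):
    """Wrap a separable test function so it is evaluated in rotated coordinates."""
    R = random_orthogonal(dim, seed)
    return lambda x: f(R @ x)

ellipsoid = lambda x: float(np.sum(10.0 ** (6 * np.arange(len(x)) / (len(x) - 1)) * x ** 2))
rotated_ellipsoid = make_rotated(ellipsoid, dim=10)     # axis-aligned bias no longer helps
print(ellipsoid(np.ones(10)), rotated_ellipsoid(np.ones(10)))
```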
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Zheng Bo. "Learning-Based Multi-Directional Adaptive PSO." Applied Mechanics and Materials 321-324 (June 2013): 2183–86. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.2183.

Full text
Abstract:
Particle Swarm Optimization (PSO) is a swarm intelligence algorithm that finds the global optimum through competition and collaboration among particles in a complex search space. The basic PSO algorithm converges slowly in the late stage of evolution and easily falls into local minima. This paper presents a multi-learning particle swarm optimization algorithm in which each particle simultaneously follows its own best solution found so far, the best solution of a randomly chosen particle, and the best solution of the whole swarm in a dimension-wise velocity update, together with region-boundary position updates and small-scale perturbations of the global best position, in order to enhance the algorithm's ability to escape from local optima. Test results on several typical functions show that the improved particle swarm algorithm significantly improves global search ability and effectively avoids premature convergence. The algorithm's robustness with respect to the position of the search space is significantly improved when seeking the global optimal solution in high-dimensional optimization problems, so it is suitable for solving similar problems, and the computed results can meet the requirements of practical engineering.
APA, Harvard, Vancouver, ISO, and other styles
29

Xi, Maolong, Xiaojun Wu, Xinyi Sheng, Jun Sun, and Wenbo Xu. "Improved quantum-behaved particle swarm optimization with local search strategy." Journal of Algorithms & Computational Technology 11, no. 1 (July 8, 2016): 3–12. http://dx.doi.org/10.1177/1748301816654020.

Full text
Abstract:
Quantum-behaved particle swarm optimization, which was motivated by analysis of particle swarm optimization and quantum systems, has shown performance comparable to other evolutionary algorithms in finding optimal solutions for many optimization problems. To address the problem of premature convergence, a local search strategy is proposed to improve the performance of quantum-behaved particle swarm optimization. In the proposed local search strategy, a super particle is constructed as a collection of dimension information taken from randomly selected particles in the swarm. The selection probability of each particle is determined by its fitness value: for minimization problems, the smaller a particle's fitness value, the higher its selection probability and the more information it contributes to constructing the super particle. In addition, in order to investigate the influence of different local search spaces on algorithm performance, four methods of computing the local search radius are applied in the local search strategy, yielding four variants of local search quantum-behaved particle swarm optimization. Empirical studies on a suite of well-known benchmark functions are undertaken in order to make an overall performance comparison between the proposed methods and other quantum-behaved particle swarm optimization algorithms. The simulation results show that the proposed quantum-behaved particle swarm optimization variants have clear advantages over the original quantum-behaved particle swarm optimization.
APA, Harvard, Vancouver, ISO, and other styles
30

Alatas, Bilal, Erhan Akin, and A. Bedri Ozer. "Chaos embedded particle swarm optimization algorithms." Chaos, Solitons & Fractals 40, no. 4 (May 2009): 1715–34. http://dx.doi.org/10.1016/j.chaos.2007.09.063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Zhou, Fei Hong, and Zi Zhen Liao. "A Particle Swarm Optimization Algorithm." Applied Mechanics and Materials 303-306 (February 2013): 1369–72. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.1369.

Full text
Abstract:
The basic and improved PSO algorithms focus on how to effectively search for the optimal solution in the solution space using a single particle swarm. However, the particles always chase the global optimal point and the best points found so far along their search, which rapidly drives their velocities toward zero and traps them in local minima; consequently, premature convergence of the particles occurs. The improved PSO proposed here is inspired by the BP neural network, and the improvement is similar to smoothing the weights with a low-pass filter. Tests on classical functions show that the improved PSO provides a certain improvement in convergence precision and computation speed.
APA, Harvard, Vancouver, ISO, and other styles
32

Wei, Xiao-peng, Jian-xia Zhang, Dong-sheng Zhou, and Qiang Zhang. "Multiswarm Particle Swarm Optimization with Transfer of the Best Particle." Computational Intelligence and Neuroscience 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/904713.

Full text
Abstract:
We propose an improved algorithm, a multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm algorithm (PSO) in order to balance exploration and exploitation, as well as to enhance the capacity for global search when solving nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained using the Sphere function. Finally, we tested the performance of the proposed algorithm on six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better on the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems.
APA, Harvard, Vancouver, ISO, and other styles
33

Song, Yanqing, and Genran Hou. "Research on Optimization of Project Time-Cost-Quality Based on Particle Swarm Optimization." International Journal of Information Systems and Supply Chain Management 12, no. 2 (April 2019): 76–88. http://dx.doi.org/10.4018/ijisscm.2019040106.

Full text
Abstract:
In order to make proper time-cost-quality decisions for projects, an improved particle swarm optimization algorithm is applied. First, the optimal model of project time-cost-quality is constructed considering all factors. Second, the basic theory of particle swarm algorithms is summarized, an improved particle swarm algorithm is put forward based on the vector principle, and the rotational base technique is then introduced into the improved particle swarm algorithm to construct a multiple-objective optimization algorithm. Finally, a simulation analysis is carried out using a project as an example, and the optimal parameters are obtained.
APA, Harvard, Vancouver, ISO, and other styles
34

Dioşan, Laura, and Mihai Oltean. "What Else Is the Evolution of PSO Telling Us?" Journal of Artificial Evolution and Applications 2008 (January 28, 2008): 1–12. http://dx.doi.org/10.1155/2008/289564.

Full text
Abstract:
Evolutionary algorithms (EAs) can be used in order to design particle swarm optimization (PSO) algorithms that work, in some cases, considerably better than the human-designed ones. By analyzing the evolutionary process of designing PSO algorithms, we can identify different swarm phenomena (such as patterns or rules) that can give us deep insights about the swarm behavior. The rules that have been observed can help us design better PSO algorithms for optimization. We investigate and analyze swarm phenomena by looking into the process of evolving PSO algorithms. Several test problems have been analyzed in the experiments and interesting facts can be inferred from the strategy evolution process (the particle quality could influence the update order, some particles are updated more frequently than others, the initial swarm size is not always optimal).
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Chun Yan, and Wei Chen. "Quantum-Behaved Particle Swarm Optimization Dynamic Clustering Algorithm." Advanced Materials Research 694-697 (May 2013): 2757–60. http://dx.doi.org/10.4028/www.scientific.net/amr.694-697.2757.

Full text
Abstract:
This paper proposes a revised quantum-behaved particle swarm optimization algorithm that utilizes a comprehensive learning strategy to prevent the universal tendency toward premature convergence, and introduces a novel data clustering algorithm based on it. The optimal number of clusters can be obtained automatically by this novel clustering algorithm because a new special coding method for particles is used. Compared with two other dynamic clustering algorithms on five test data sets, the proposed dynamic clustering algorithm based on the comprehensive learning strategy has the best performance and the best potential for practical application.
APA, Harvard, Vancouver, ISO, and other styles
36

Ahmad, Yasir, Mohib Ullah, Rafiullah Khan, Bushra Shafi, Atif Khan, Mahdi Zareei, Abdallah Aldosary, and Ehab Mahmoud Mohamed. "SiFSO: Fish Swarm Optimization-Based Technique for Efficient Community Detection in Complex Networks." Complexity 2020 (December 12, 2020): 1–9. http://dx.doi.org/10.1155/2020/6695032.

Full text
Abstract:
Efficient community detection in a complex network is considered an interesting issue due to its vast applications in many prevailing areas such as biology, chemistry, linguistics, social sciences, and others. There are several algorithms available for network community detection. This study proposed the Sigmoid Fish Swarm Optimization (SiFSO) algorithm to discover efficient network communities. Our proposed algorithm uses the sigmoid function for various fish moves in a swarm, including Prey, Follow, Swarm, and Free Move, for better movement and community detection. The proposed SiFSO algorithm’s performance is tested against state-of-the-art particle swarm optimization (PSO) algorithms in Q-modularity and normalized mutual information (NMI). The results showed that the proposed SiFSO algorithm is 0.0014% better in terms of Q-modularity and 0.1187% better in terms of NMI than the other selected algorithms.
APA, Harvard, Vancouver, ISO, and other styles
37

Fan, Jin Wei, Qin Mei, and Xiao Feng Wang. "Robust PID Parameters Optimization Design Based on Improved Particle Swarm Optimization." Applied Mechanics and Materials 373-375 (August 2013): 1125–30. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.1125.

Full text
Abstract:
Based on satisfying the robustness of the system, this article puts forward an objective function combining time-domain performance and dynamic characteristics and introduces genetic operators into particle swarm optimization. The algorithm improves the diversity of the particles through selection and hybridization operations and strengthens the excellent characteristics of the particles in the swarm by introducing crossover and mutation genes, which avoids bogging down in local optima and premature convergence and enhances search efficiency. The simulation results indicate that when the algorithm is applied to optimizing the PID controller parameters of the grinding-wheel-rack servo system of the MKS8332A CNC camshaft grinder, its performance is better than that of the genetic algorithm or particle swarm optimization alone, and it also satisfies the demands of rapidity, stability, and robustness.
APA, Harvard, Vancouver, ISO, and other styles
38

Yong, Wang, Wang Tao, Zhang Cheng-Zhi, and Huang Hua-Juan. "A New Stochastic Optimization Approach — Dolphin Swarm Optimization Algorithm." International Journal of Computational Intelligence and Applications 15, no. 02 (June 2016): 1650011. http://dx.doi.org/10.1142/s1469026816500115.

Full text
Abstract:
A novel nature-inspired swarm intelligence (SI) optimization algorithm called the dolphin swarm optimization algorithm (DSOA) is proposed, which performs optimization by mimicking the mechanism of dolphins in detecting, chasing after, and preying on swarms of sardines. To test its performance, the DSOA is evaluated against three existing well-known SI optimization algorithms, namely particle swarm optimization (PSO), the bat algorithm (BA), and artificial bee colony (ABC), in terms of the ability to find the global optimum of a range of popular benchmark functions. The experimental results show that the proposed algorithm appears superior to the other three algorithms, with a fast convergence rate and good avoidance of local optima.
APA, Harvard, Vancouver, ISO, and other styles
39

Freitas Vaz, António Ismael de, and Edite Manuela da Graça Pinto Fernandes. "OPTIMIZATION OF NONLINEAR CONSTRAINED PARTICLE SWARM." Technological and Economic Development of Economy 12, no. 1 (March 31, 2006): 30–36. http://dx.doi.org/10.3846/13928619.2006.9637719.

Full text
Abstract:
We propose an algorithm based on the particle swarm paradigm (PSP) to address nonlinear constrained optimization problems. While some algorithms based on PSP have already been proposed in this context, the equality constraints have been posing some difficulties. The proposed algorithm is based on the relaxation of the dominance concept introduced in the multiobjective optimization. This concept is used to select the best particle position and the best ever particle swarm position. We propose also a stopping criterion for the algorithm and present numerical results with some problems collected from the literature. The new algorithm is implemented in a solver connected with AMPL, allowing easy coding and solving of problems.
APA, Harvard, Vancouver, ISO, and other styles
40

Yong, Dong, Wu Chuansheng, and Guo Haimin. "Particle Swarm Optimization Algorithm with Adaptive Chaos Perturbation." Cybernetics and Information Technologies 15, no. 6 (December 1, 2015): 70–80. http://dx.doi.org/10.1515/cait-2015-0068.

Full text
Abstract:
Most of the existing chaotic particle swarm optimization algorithms use the logistic chaotic mapping. However, the chaotic sequence generated by the logistic mapping is not uniform enough. As a solution to this defect, this paper introduces the Anderson chaotic mapping into chaotic particle swarm optimization, using it to initialize the positions and velocities of the particle swarm. The algorithm self-adaptively controls the portion of particles that undergo the chaos update through the change of the fitness variance. The numerical simulation results show that the convergence and global searching capability of the modified algorithm are improved by the introduction of this mapping and that it can efficiently avoid premature convergence.
APA, Harvard, Vancouver, ISO, and other styles
41

Madhumala, R. B., Harshvardhan Tiwari, and Verma C. Devaraj. "Virtual Machine Placement Using Energy Efficient Particle Swarm Optimization in Cloud Datacenter." Cybernetics and Information Technologies 21, no. 1 (March 1, 2021): 62–72. http://dx.doi.org/10.2478/cait-2021-0005.

Full text
Abstract:
Efficient resource allocation through virtual machine placement in a cloud datacenter is an ever-growing demand. Different virtual machine optimization techniques have been constructed for different optimization problems. The Particle Swarm Optimization (PSO) algorithm is one of the optimization techniques used to solve the multidimensional virtual machine placement problem. The proposed algorithm combines the Modified First Fit Decreasing (MFFD) algorithm with particle swarm optimization to find the best packing of virtual machines onto active physical machines and thereby reduce energy consumption; we first screen all physical machines for possible accommodation of each virtual machine, and then the Modified Particle Swarm Optimization (MPSO) algorithm is used to obtain the best-fit solution. In this paper, we discuss how to improve the efficiency of particle swarm intelligence by adopting the proposed mechanism. The obtained results show that the proposed algorithm provides a better-optimized solution compared to the existing algorithms.
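The screening step can be pictured with a plain first-fit-decreasing packing sketch like the one below; the exact modifications that make up MFFD, and the PSO refinement on top of it, are not specified in the abstract, so the capacities and demands here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhysicalMachine:
    capacity: float
    vms: List[float] = field(default_factory=list)

    def used(self) -> float:
        return sum(self.vms)

def first_fit_decreasing(vm_demands, pm_capacity):
    """Plain first-fit-decreasing packing: sort VMs by demand, place each into the first
    active PM with room, and open a new PM otherwise (fewer active PMs ~ less energy)."""
    pms: List[PhysicalMachine] = []
    for demand in sorted(vm_demands, reverse=True):
        for pm in pms:
            if pm.used() + demand <= pm.capacity:
                pm.vms.append(demand)
                break
        else:
            pm = PhysicalMachine(pm_capacity)
            pm.vms.append(demand)
            pms.append(pm)
    return pms

placement = first_fit_decreasing([0.6, 0.3, 0.8, 0.2, 0.5, 0.4], pm_capacity=1.0)
print(len(placement), [pm.vms for pm in placement])
```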
APA, Harvard, Vancouver, ISO, and other styles
42

Passaro, Alessandro, and Antonina Starita. "Particle Swarm Optimization for Multimodal Functions: A Clustering Approach." Journal of Artificial Evolution and Applications 2008 (April 24, 2008): 1–15. http://dx.doi.org/10.1155/2008/482032.

Full text
Abstract:
The particle swarm optimization (PSO) algorithm is designed to find a single optimal solution and needs some modifications to be able to locate multiple optima on a multimodal function. In parallel with evolutionary computation algorithms, these modifications can be grouped in the framework of niching. In this work, we present a new approach to niching in PSO based on clustering particles to identify niches. The neighborhood structure, on which particles rely for communication, is exploited together with the niche information to locate multiple optima in parallel. Our approach was implemented in the k-means-based PSO (kPSO), which employs the standard k-means clustering algorithm, improved with a mechanism to adaptively identify the number of clusters. kPSO proved to be a competitive solution when compared with other existing algorithms, since it showed better performance on a benchmark set of multimodal functions.
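A minimal sketch of the central idea, clustering the particles and letting each one use its cluster's best personal best as the social attractor; it uses scikit-learn's KMeans with a fixed k, whereas kPSO adaptively identifies the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_attractors(positions, pbest, pbest_f, k=4, seed=0):
    """Cluster the particles with k-means and return, for each particle, the best personal
    best inside its own cluster, to be used instead of a single global attractor."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(positions)
    attractors = np.empty_like(positions)
    for c in range(k):
        members = np.flatnonzero(labels == c)
        best = members[np.argmin(pbest_f[members])]     # index of the cluster-best particle
        attractors[members] = pbest[best]
    return attractors
```

Each particle's velocity update then uses its own row of the returned array where a plain global-best PSO would use the single global best, so separate clusters can converge to different optima in parallel.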
APA, Harvard, Vancouver, ISO, and other styles
43

Lenin, K. "DIMENSIONED PARTICLE SWARM OPTIMIZATION FOR REACTIVE POWER OPTIMIZATION PROBLEM." International Journal of Research -GRANTHAALAYAH 6, no. 4 (April 30, 2018): 281–90. http://dx.doi.org/10.29121/granthaalayah.v6.i4.2018.1663.

Full text
Abstract:
This paper present’s Dimensioned Particle Swarm Optimization (DPSO) algorithm for solving Reactive power optimization (RPO) problem. Dimensioned extension is introduced to particles in the particle swarm optimization (PSO) model in order to overcome premature convergence in interactive optimization. In the performance of basic PSO often flattens out with a loss of diversity in the search space as resulting in local optimal solution. Proposed algorithm has been tested in standard IEEE 57 test bus system and compared to other standard algorithms. Simulation results reveal about the best performance of the proposed algorithm in reducing the real power loss and voltage profiles are within the limits.
APA, Harvard, Vancouver, ISO, and other styles
44

Boursianis, Achilles D., Maria S. Papadopoulou, Marco Salucci, Alessandro Polo, Panagiotis Sarigiannidis, Konstantinos Psannis, Seyedali Mirjalili, Stavros Koulouridis, and Sotirios K. Goudos. "Emerging Swarm Intelligence Algorithms and Their Applications in Antenna Design: The GWO, WOA, and SSA Optimizers." Applied Sciences 11, no. 18 (September 8, 2021): 8330. http://dx.doi.org/10.3390/app11188330.

Full text
Abstract:
Swarm Intelligence (SI) algorithms imitate the collective behavior of various swarms or groups in nature. In this work, three representative examples of SI algorithms have been selected and thoroughly described, namely the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the Salp Swarm Algorithm (SSA). Firstly, the selected SI algorithms are reviewed in the literature, specifically for optimization problems in antenna design. Secondly, a comparative study is performed against widely known test functions. Thirdly, these SI algorithms are applied to the synthesis of linear antenna arrays for optimizing the peak sidelobe level (pSLL). Numerical tests show that the WOA outperforms the GWO and SSA algorithms, as well as the well-known Particle Swarm Optimizer (PSO), in terms of average ranking. Finally, the WOA is exploited for solving a more computationally complex problem concerned with the synthesis of a dual-band aperture-coupled E-shaped antenna operating in the 5G frequency bands.
APA, Harvard, Vancouver, ISO, and other styles
45

Xiao, Bin, and Zhao Hui Li. "An Improved Hybrid Discrete Particle Swarm Optimization Algorithm to Solve the TSP Problem." Applied Mechanics and Materials 130-134 (October 2011): 3589–94. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.3589.

Full text
Abstract:
Investigating how to solve the traveling salesman problem (TSP) with discrete particle swarm optimization, this study proposes a new discrete particle swarm optimization algorithm (NDPSO) that is easy to combine with other algorithms and offers fast convergence and high accuracy, obtained by introducing ideas from the greedy algorithm and the genetic algorithm (GA) and refining the discrete particle swarm optimization algorithm. The study then extends NDPSO with the simulated annealing algorithm and proposes a hybrid discrete particle swarm optimization algorithm (HDPSO). Finally, experiments show that both algorithms converge well, but HDPSO has a better capacity to find the best solution.
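
One common way to define a discrete PSO move for the TSP (a hedged sketch of the general technique, not necessarily the NDPSO/HDPSO operators) treats the difference between two tours as a swap sequence and applies those swaps probabilistically:

    import random

    def tour_length(tour, dist):
        # Objective to be minimised: total length of the closed tour.
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def swaps_towards(current, target):
        """Swap sequence transforming `current` into `target` (one common way to
        define a 'velocity' for permutation-encoded particles)."""
        cur = list(current)
        swaps = []
        for i in range(len(cur)):
            if cur[i] != target[i]:
                j = cur.index(target[i])
                swaps.append((i, j))
                cur[i], cur[j] = cur[j], cur[i]
        return swaps

    def apply_swaps(tour, swaps, prob):
        """Apply each swap with the given probability (the 'learning' step)."""
        new = list(tour)
        for i, j in swaps:
            if random.random() < prob:
                new[i], new[j] = new[j], new[i]
        return new

    def discrete_pso_step(tour, pbest, gbest, c1=0.5, c2=0.5):
        tour = apply_swaps(tour, swaps_towards(tour, pbest), c1)
        tour = apply_swaps(tour, swaps_towards(tour, gbest), c2)
        return tour

    # One update step for a 6-city tour.
    random.seed(0)
    print(discrete_pso_step([0, 1, 2, 3, 4, 5], [1, 0, 2, 3, 5, 4], [0, 2, 1, 3, 4, 5]))

In a hybrid along the lines of HDPSO, a simulated annealing acceptance test would additionally decide whether a move that worsens the tour length is kept.
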
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Lei, and Yongqiang Liu. "Application of Simulated Annealing Particle Swarm Optimization Based on Correlation in Parameter Identification of Induction Motor." Mathematical Problems in Engineering 2018 (July 8, 2018): 1–9. http://dx.doi.org/10.1155/2018/1869232.

Full text
Abstract:
The strengths and weaknesses of the correlation algorithm, the simulated annealing algorithm, and the particle swarm optimization algorithm are studied in this paper. A hybrid optimization algorithm drawing on the three algorithms is proposed, and the specific application procedures are given. The correlation algorithm is used to extract the fundamental component of the current signal, and the motor dynamic parameters are then identified from the filtered stator current signal with the simulated annealing particle swarm algorithm. By comparing the identification results for an asynchronous motor under constant torque load and step load, the simulated annealing particle swarm optimization algorithm is shown to combine the global optimization ability of simulated annealing with the fast convergence of particle swarm optimization.
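
The fundamental-extraction step can be illustrated by correlating the sampled current with sine and cosine references at the supply frequency, i.e. a single-frequency Fourier projection; the following is only a generic sketch under that assumption, not the paper's exact procedure, and the signal parameters are made up for the example.

    import numpy as np

    def fundamental_component(signal, fs, f0):
        """Amplitude and phase of the fundamental (frequency f0, in Hz) of a sampled
        current, obtained by correlating the signal with sine/cosine references,
        i.e. a single-frequency Fourier projection."""
        t = np.arange(signal.size) / fs
        c = 2.0 / signal.size * np.sum(signal * np.cos(2 * np.pi * f0 * t))
        s = 2.0 / signal.size * np.sum(signal * np.sin(2 * np.pi * f0 * t))
        return np.hypot(c, s), np.arctan2(c, s)  # phase relative to sin(2*pi*f0*t)

    # Example: a 50 Hz fundamental buried in a 5th harmonic and noise.
    fs, f0 = 5000.0, 50.0
    t = np.arange(0.0, 0.2, 1.0 / fs)
    i_s = (10.0 * np.sin(2 * np.pi * 50.0 * t + 0.3)
           + 1.5 * np.sin(2 * np.pi * 250.0 * t)
           + 0.5 * np.random.default_rng(3).standard_normal(t.size))
    print(fundamental_component(i_s, fs, f0))  # approximately (10, 0.3)
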
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Ruochen, Chenlin Ma, Wenping Ma, and Yangyang Li. "A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling." Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/387194.

Full text
Abstract:
The permutation flow shop scheduling problem (PFSSP) is a production scheduling problem and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm is divided into three subpopulations; each particle evolves by standard PSO, and each subpopulation is then updated using different local search schemes, namely variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely a PSO-based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
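
For reference, the quantity all of these PFSSP algorithms minimise is the makespan of a job permutation. A compact sketch of its computation is given below (the memetic machinery itself, i.e. VNS, IIS and the EDA model, is not reproduced, and the processing times are placeholder data):

    import numpy as np

    def makespan(sequence, proc_times):
        """Makespan of a permutation flow shop schedule.

        `proc_times[j][k]` is the processing time of job j on machine k; every job
        visits the machines in the same order and the job sequence is identical on
        all machines (the PFSSP assumption)."""
        machines = len(proc_times[0])
        completion = np.zeros(machines)
        for job in sequence:
            for k in range(machines):
                start = completion[k] if k == 0 else max(completion[k], completion[k - 1])
                completion[k] = start + proc_times[job][k]
        return float(completion[-1])

    # Example: 4 jobs on 3 machines; the algorithms above search over job sequences.
    pt = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 3, 2]]
    print(makespan([0, 1, 2, 3], pt))  # 16.0
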
APA, Harvard, Vancouver, ISO, and other styles
48

Patel G C, Manjunath, Prasad Krishna, Mahesh B. Parappagoudar, and Pandu Ranga Vundavilli. "Multi-Objective Optimization of Squeeze Casting Process using Evolutionary Algorithms." International Journal of Swarm Intelligence Research 7, no. 1 (January 2016): 55–74. http://dx.doi.org/10.4018/ijsir.2016010103.

Full text
Abstract:
The present work focuses on determining optimum squeeze casting process parameters using evolutionary algorithms. Evolutionary algorithms, such as the genetic algorithm, particle swarm optimization, and multi-objective particle swarm optimization based on a crowding distance mechanism, have been used to determine the process variable combinations for the multiple objective functions. In multi-objective optimization there is no single optimal process variable combination, owing to the conflicting nature of the objective functions. Four cases have been considered by assigning different combinations of weights to the individual objective functions based on their importance to the user. Confirmation tests have been conducted for the recommended process variable combinations obtained by the genetic algorithm (GA), particle swarm optimization (PSO), and multi-objective particle swarm optimization based on crowding distance (MOPSO-CD). The performance of PSO is found to be comparable with that of GA in identifying optimal process variable combinations; however, PSO outperformed GA with regard to computation time.
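
The weighted treatment of the objectives can be illustrated with a simple weighted-sum scalarisation after min-max normalisation. This is a generic sketch of the idea with placeholder data, not the study's actual objective models or weight values.

    import numpy as np

    def weighted_objective(objectives, weights):
        """Weighted-sum scalarisation of several minimisation objectives.

        Objectives are min-max normalised over the candidate set first so that the
        user-chosen weights remain comparable across differently scaled quantities.
        """
        f = np.asarray(objectives, dtype=float)   # shape: (n_candidates, n_objectives)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        span = f.max(axis=0) - f.min(axis=0)
        norm = (f - f.min(axis=0)) / np.where(span == 0.0, 1.0, span)
        return norm @ w

    # Example: two conflicting objectives for three candidate parameter settings,
    # weighted 0.7 / 0.3 in favour of the first objective.
    print(weighted_objective([[1.0, 9.0], [2.0, 4.0], [5.0, 1.0]], [0.7, 0.3]))
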
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Wusi, Li Chen, Yi Wang, and Maosheng Zhang. "Multi/Many-Objective Particle Swarm Optimization Algorithm Based on Competition Mechanism." Computational Intelligence and Neuroscience 2020 (February 20, 2020): 1–26. http://dx.doi.org/10.1155/2020/5132803.

Full text
Abstract:
The recently proposed multiobjective particle swarm optimization algorithm based on a competition mechanism cannot effectively deal with many-objective optimization problems; it suffers from relatively poor convergence and diversity and a long runtime. In this paper, a novel multi/many-objective particle swarm optimization algorithm based on a competition mechanism is proposed, which maintains population diversity using the maximum and minimum angles between ordinary and extreme individuals. The recently proposed θ-dominance is adopted to further enhance the performance of the algorithm. The proposed algorithm is evaluated on the standard benchmark problems DTLZ, WFG, and UF1-9 and compared with four recently proposed multiobjective particle swarm optimization algorithms and four state-of-the-art many-objective evolutionary optimization algorithms. The experimental results indicate that the proposed algorithm has better convergence and diversity, and its performance is superior to the other comparison algorithms on most test instances.
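
The angle-based diversity idea can be illustrated by the angle between two individuals' objective vectors measured from the ideal point; the exact rule used in the paper is not reproduced, so the following is only a generic sketch with illustrative values.

    import numpy as np

    def objective_angle(f1, f2, ideal):
        """Angle (radians) between two individuals' objective vectors, measured from
        the ideal point; larger angles indicate solutions covering different regions
        of the front, which is what angle-based diversity maintenance rewards."""
        v1 = np.asarray(f1, dtype=float) - ideal
        v2 = np.asarray(f2, dtype=float) - ideal
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Two solutions of a 3-objective problem, angle measured from the ideal point (0, 0, 0).
    print(objective_angle([0.9, 0.1, 0.2], [0.1, 0.8, 0.3], ideal=np.zeros(3)))
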
APA, Harvard, Vancouver, ISO, and other styles
50

Mohapatra, Prabhujit, Kedar Nath Das, Santanu Roy, Ram Kumar, and Nilanjan Dey. "A Novel Multi-Objective Competitive Swarm Optimization Algorithm." International Journal of Applied Metaheuristic Computing 11, no. 4 (October 2020): 114–29. http://dx.doi.org/10.4018/ijamc.2020100106.

Full text
Abstract:
In this article, a new algorithm, the multi-objective competitive swarm optimizer (MOCSO), is introduced to handle multi-objective problems. The algorithm is principally motivated by the competitive swarm optimizer (CSO) and the NSGA-II algorithm. In MOCSO, a pairwise competition scenario is used to establish the dominance relationship between two particles in the population. In each pairwise competition, the particle that dominates the other is considered the winner and the other is designated the loser. The loser particles learn from the respective winner particles in each individual competition. The underlying CSO algorithm does not use any memory to remember the global best or personal best particles; hence, MOCSO does not need an external archive to store elite particles. The experimental results and statistical tests confirm the superiority of MOCSO over several state-of-the-art multi-objective algorithms in solving benchmark problems.
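
The pairwise learning step described here follows the single-objective CSO update, in which only the loser moves, pulled towards the winner and weakly towards the swarm mean. MOCSO's contribution, deciding winners by Pareto dominance, is not reproduced in this sketch, and the parameter names and values are illustrative.

    import numpy as np

    def cso_pair_update(x_winner, x_loser, v_loser, mean_x, phi=0.1, rng=None):
        """One pairwise competition in a competitive swarm optimizer: only the loser
        moves, learning from the winner and (weakly, weighted by phi) from the swarm
        mean; the winner passes to the next generation unchanged."""
        rng = np.random.default_rng() if rng is None else rng
        r1, r2, r3 = rng.random((3,) + x_loser.shape)
        v_new = r1 * v_loser + r2 * (x_winner - x_loser) + phi * r3 * (mean_x - x_loser)
        return x_loser + v_new, v_new

    # One competition between two particles of a 5-dimensional swarm.
    rng = np.random.default_rng(4)
    x_w, x_l, v_l = rng.random((3, 5))
    print(cso_pair_update(x_w, x_l, v_l, mean_x=(x_w + x_l) / 2.0, rng=rng))
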
APA, Harvard, Vancouver, ISO, and other styles