
The velocity update in particle swarm optimization (PSO) consists of three terms: the inertia term, the cognitive term and the social term. The balance among these terms determines the balance between the global and local search abilities, and therefore the performance of PSO. In this work, we develop an adaptive parallel PSO algorithm based on the dynamic exchange of control parameters between adjacent swarms. The proposed PSO algorithm adaptively optimizes inertia factors, learning factors and swarm activity. By performing simulations of a search for the global minimum of a benchmark multimodal function, we have found that the proposed PSO successfully provides appropriate control parameter values, and thus good global optimization performance.

In many optimization settings, a globally optimal solution is not necessarily obtainable. In such cases, it is desirable to find instead a semi-optimal solution that can be computed within a practical timeframe. To achieve this goal, heuristic optimization techniques are widely studied and used, typified by genetic algorithms (GA), simulated annealing (SA) and particle swarm optimization (PSO) [...].

If the objective function under consideration is multimodal, then heuristic optimization techniques should possess a global solution search ability, maintained by preserving solution diversity; a local solution search ability, maintained conversely by centralizing the solution search; and a balance between the two. Solution diversification and centralization strategies are factors universally shared by heuristic optimization techniques, and they strongly influence performance. However, there are few precise and universal guidelines for configuring the values of the parameters that control these strategies: their configuration is problem-specific. Moreover, tuning these parameters is not simple, and generally requires many preliminary calculations. Furthermore, the suitable parameter values may vary at every stage of the solution search; pertinent examples include the configuration of the crossover rate and spontaneous mutation rate in GAs and the temperature cooling schedule in SA.

Focusing on PSO, a kind of multipoint search heuristic optimization technique, in this study we propose several parallel PSO algorithms in which control parameters are dynamically exchanged between a number of swarms and are adaptively adjusted during the solution search process. We also share our findings from an evaluation of algorithm performance on a minimum search problem for a multimodal objective function.

PSO is an evolutionary optimization technique based on the concept of swarm intelligence. In PSO, the hypersurface of an objective function is searched as information is exchanged between swarms of search points, which simulate animals or insects. The next state of each individual is generated based on the optimal solution in its own search history (“personal best”; pbest), the optimal solution in the combined search history of all individuals in the swarm (“global best”; gbest), and the current velocity vector. Briefly, assuming a population size N_{p} and problem dimension N_{d}, let x_{i}^{t} and v_{i}^{t} denote the position and velocity of an individual i (where 1 ≤ i ≤ N_{p}) at the t^{th} step of the search, respectively.

These two variables can be updated by means of the following equations, using the position and velocity at the t^{th} step:

v_{i}^{t+1} = ω v_{i}^{t} + c_{1} rand1 (pbest_{i} − x_{i}^{t}) + c_{2} rand2 (gbest − x_{i}^{t}) (3)

x_{i}^{t+1} = x_{i}^{t} + v_{i}^{t+1} (4)

Here, pbest_{i} is the optimal solution found up to the t^{th} step by individual i itself, while gbest is the optimal solution found up to the t^{th} step by the swarm to which individual i belongs. The term ω v_{i}^{t} is the inertia term, where ω is the inertia factor; c_{1} and c_{2} are weighting factors, respectively called the cognitive learning factor and the social learning factor (learning factors); and rand1 and rand2 are uniform random numbers in [0, 1]. The PSO solution search procedure is described below:

1. Decide population size and maximum number of search steps.

2. Set initial position and velocity of each individual.

3. Calculate objective function value for each individual.

4. Determine the optimal individual solution pbest for each individual and the optimal swarm solution gbest, and update these values.

5. Update the position and velocity of each individual according to Equations (3) and (4).

6. End search if desired solution accuracy is obtained or if maximum number of steps is reached.
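As a concrete illustration of steps 1-6, a minimal single-swarm PSO can be sketched as follows. The function name, parameter values, and stopping rule here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso(f, n_p=40, n_d=2, t_max=300, omega=0.7, c1=1.5, c2=1.5,
        lo=-5.12, hi=5.12, seed=0):
    """Minimal single-swarm PSO sketch (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_p, n_d))            # step 2: initial positions
    v = rng.uniform(lo, hi, (n_p, n_d))            # step 2: initial velocities
    pbest = x.copy()                               # personal bests
    pbest_val = np.array([f(p) for p in x])        # step 3: evaluate
    gbest = pbest[np.argmin(pbest_val)].copy()     # step 4: swarm best
    for _ in range(t_max):                         # step 6: stop at t_max
        r1 = rng.random((n_p, n_d))
        r2 = rng.random((n_p, n_d))
        # Equation (3): inertia + cognitive + social terms
        v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v                                  # Equation (4)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val                # step 4: update pbest/gbest
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

Running this on a simple convex test function (e.g., the sphere function) drives the best objective value close to zero within a few hundred steps.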

The algorithm behind PSO is simpler than that of a GA, another multipoint search heuristic optimization technique, making it easier to code and tending to lead to faster solution convergence. On the other hand, PSO sometimes loses solution diversity during the search, which readily invites premature convergence. In response, improved PSO techniques have been proposed. Examples include distributed PSO and hierarchical PSO, which search the solution space with multiple different swarms [...].

In this study, we focus on several parameters that control diversification and centralization in the solution search: the inertia factor ω, the learning factors c_{1} and c_{2}, and swarm activity (described in Section 3.3). Introducing concepts similar to those employed in the replica-exchange method [...], we dynamically exchange these control parameters between swarms and adjust them adaptively during the solution search.

The replica-exchange method was developed in response to problems like spin glass and protein folding, in which it is difficult to find the ground-energy state (a global optimum solution) because several semi-stable states (local optimum solutions) exist in the system. In the replica-exchange method, several replicas of the original system are prepared, which have different temperatures and never interact with each other. We encourage readers to imagine “temperature” here as the temperature parameter in Metropolis Monte Carlo simulations, i.e., it indicates the degree to which deterioration is permitted when making the decision to transition to a candidate in the next state. Solution searches in high-temperature systems exhibit behavior close to a random search, whereas solution searches in low-temperature systems exhibit behavior close to the steepest descent method. Solution search calculations are run independently and simultaneously for each replica, each at its respective constant temperature. At the same time, temperature is exchanged periodically after a certain number of search steps according to the exchange probability w in the following equations between a given replica pair (with respective states X_{k} and X_{k+1} and temperatures T_{k} and T_{k+1}):

w = 1 (Δ ≤ 0), w = exp(−Δ) (Δ > 0) (1)

Δ = (1/T_{k} − 1/T_{k+1}) (E(X_{k+1}) − E(X_{k})) (2)

Here, E(X) represents the energy of a replica at state X (i.e., the objective function value).
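The exchange decision can be sketched in code, assuming the standard replica-exchange (Metropolis) acceptance rule; the function name and argument layout are ours:

```python
import math
import random

def exchange_temperatures(e_k, e_k1, t_k, t_k1, rng=random.random):
    """Decide whether to swap temperatures T_k and T_{k+1} between a
    replica pair with energies E(X_k) and E(X_{k+1}), using the standard
    replica-exchange Metropolis rule."""
    delta = (1.0 / t_k - 1.0 / t_k1) * (e_k1 - e_k)
    w = 1.0 if delta <= 0.0 else math.exp(-delta)  # exchange probability
    return rng() < w
```

With T_k < T_{k+1}, a lower energy found by the hotter replica makes delta non-positive, so the swap is always accepted; otherwise it is accepted with probability exp(−delta).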

Here, we first propose a technique focusing on the inertia factor ω, a control parameter in Equation (3). The velocity vectors of individuals with a large ω change only slowly, preserving solution diversity, whereas those of individuals with a small ω quickly turn toward pbest and gbest (the directions weighted by c_{1} and c_{2}). Thus, efficient optimization should be achievable if, in the initial search, individuals are given large ω values to ensure solution diversity, while in the final stages they are instead given small ω values in an attempt to centralize the search in the vicinity of gbest. A representative approach decreases ω linearly with increasing search step t:

ω(t) = ω_{max} − (ω_{max} − ω_{min}) t/t_{max} (7)

Here, t_{max} is the maximum number of search steps. Note that there is no single optimal reduction schedule one can choose for the inertia factor ω: a suitable schedule is problem-specific.
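For reference, the Equation (7)-style linear schedule can be written as a one-liner; the bounds below are taken from the experimental range [0.4, 0.9] used later, and the function name is ours:

```python
def ldi_omega(t, t_max, omega_max=0.9, omega_min=0.4):
    """Linearly decreasing inertia factor: omega_max at t = 0,
    omega_min at t = t_max."""
    return omega_max - (omega_max - omega_min) * t / t_max
```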

We consider N_{s} swarms with various different ω values. We determine the value ω_{k} (1 ≤ k ≤ N_{s}) assigned to swarm k according to the following equation:

ω_{k} = ω_{min} + (ω_{max} − ω_{min}) (k − 1)/(N_{s} − 1) (8)

In IP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent ω_{k} values are compared, and the ω_{k} values are probabilistically exchanged (or not) according to the Metropolis decisions in Equations (9) and (10). The decision assigns the smaller ω_{k} value to the swarm having the superior f(gbest) value with higher probability, so that a more-intensive search can be performed in the vicinity of gbest; conversely, the inferior f(gbest) value tends to cause a larger ω_{k} to be assigned, which helps that swarm escape local optimum solutions through a global search. The IP-PSO solution search procedure is described below:

1. Decide total population size, number of swarms, and maximum number of search steps.

2. Assign initial inertia factor values to each swarm according to Equation (8).

3. Set initial position and velocity of each individual.

4. Calculate objective function value for each individual.

5. Determine the optimal individual solution pbest for each individual and the optimal swarm solution gbest, and update these values.

6. Update the position and velocity of each individual according to Equations (3) and (4).

7. Periodically, after a certain number of search steps, compare objective function values between two swarms having adjacent inertia factor values; make the decision to exchange inertia factors according to Equations (9) and (10).

8. End search if desired solution accuracy is obtained or if maximum number of search steps is reached.

Because each swarm can be simulated independently and simultaneously with only a slight communication cost, IP-PSO (and also the other proposed adaptive parallel PSOs described in Sections 3.2 - 3.4) is well suited for, and runs very efficiently on, massively parallel computers.
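The periodic exchange step (step 7) might look like the following sketch. The acceptance probability here is an assumed Metropolis-type rule standing in for Equations (9) and (10), which are not reproduced in this excerpt; it is written so that the swarm with the better f(gbest) receives the smaller ω with higher probability:

```python
import math
import random

def exchange_inertia_factors(omegas, f_gbest, rng=random.random):
    """One IP-PSO exchange pass over adjacent swarm pairs (sketch).
    omegas[k] is swarm k's inertia factor and f_gbest[k] its current
    f(gbest); swarms are assumed ordered by omega value."""
    omegas = list(omegas)
    for k in range(len(omegas) - 1):
        # Negative delta means the better swarm currently holds the
        # larger omega, so swapping is always accepted.
        delta = (f_gbest[k] - f_gbest[k + 1]) * (omegas[k] - omegas[k + 1])
        w = 1.0 if delta <= 0.0 else math.exp(-delta)
        if rng() < w:
            omegas[k], omegas[k + 1] = omegas[k + 1], omegas[k]
    return omegas
```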

Here, we propose a technique focusing on the learning factors c_{1} and c_{2}, control parameters in Equation (3). For individuals with a relatively large c_{1}, PSO searches in the vicinity of the optimal solution in that individual’s search history, pbest, whereas for individuals with a large c_{2}, it searches in the vicinity of the optimal solution in the search history of the swarm, gbest. Thus, efficient optimization should be realizable if, in the initial search, individuals are given large c_{1} and small c_{2} values to ensure solution diversity, while in the final stages, individuals are instead given small c_{1} and large c_{2} values in an attempt to centralize the search in the vicinity of gbest. In some time-varying schedules, the learning factors c_{1} and c_{2} decrease (or increase) linearly with increasing number of search steps [...].

For the Learning-factor Parallel PSO (LP-PSO) proposed in this section, we introduce an allocation parameter α for the learning factors c_{1} and c_{2}, and define the LP-PSO learning factors as

c_{1} = 2α c_{0}, c_{2} = 2(1 − α) c_{0}

Here, c_{0} is a constant. This paper uses c_{0} = 1.4955, a learning factor determined to be stable in a PSO stability analysis by Clerc et al. [...]. We consider N_{s} swarms having various different α values; the value α_{k} (1 ≤ k ≤ N_{s}) assigned to swarm k is determined in the same manner as Equation (8).
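A small helper makes the allocation concrete. The split below, c_{1} = 2αc_{0} and c_{2} = 2(1 − α)c_{0}, is our assumption, chosen so that the two factors share a fixed budget and α = 0.5 recovers c_{1} = c_{2} = c_{0}:

```python
def lp_learning_factors(alpha, c0=1.4955):
    """Allocate the learning factors from alpha in [0, 1]
    (assumed split; c0 follows Clerc's stable value)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    c1 = 2.0 * alpha * c0          # cognitive learning factor
    c2 = 2.0 * (1.0 - alpha) * c0  # social learning factor
    return c1, c2
```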

Thereafter, as in IP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent α_{k} values are compared, and the α_{k} values are probabilistically exchanged (or not) according to Metropolis decisions in the same manner as Equations (9) and (10).

Here, we propose a technique focusing on a control parameter for swarm activity. Yasuda et al. [...] introduced swarm activity, an index derived from the magnitudes of the individuals’ velocities, and used it as feedback for adjusting the PSO control parameters.

Swarms with high activity have many individuals with high velocities, and search over a wide solution space. Swarms with low activity, on the other hand, have many individuals with low velocities, and so search intensively for local solutions. Activity is observed moment-to-moment, because searches in which activity decreases gradually and continually can yield favorable solutions. In the event that the measured activity is lower than the preset baseline activity, the inertia factor ω is increased to raise it; conversely, when the measured activity is higher than the baseline, ω is decreased.

For the Activity Parallel PSO (AP-PSO) proposed in this section, we directly control swarm activity (i.e., what was measured in [...]) so that it matches a target activity Act_{0}. Briefly, a scaling factor s is calculated from the measured activity Act according to the following equation, and the velocity v_{i} of each individual is converted to s v_{i}:

s = Act_{0} / Act

We consider N_{s} swarms controlled by various different target activity Act_{0} values. We determine the value Act_{0,k} (1 ≤ k ≤ N_{s}) assigned to swarm k in the same manner as Equation (8).

Thereafter, as in IP-PSO and LP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent Act_{0,k} values are compared. Each Act_{0,k} value is then probabilistically exchanged (or not) according to Metropolis decisions in the same manner as Equations (9) and (10). The decision assigns the smaller Act_{0,k} value to the swarm having the superior f(gbest) value with higher probability. As a result, a more-intensive search can be performed in the vicinity of gbest. On the other hand, it is also possible to escape local optimum solutions using a global search, because the larger Act_{0,k} value is assigned to the swarm having the inferior f(gbest) value with higher probability. Assigning an appropriate activity value to each swarm according to the search conditions makes it unnecessary to configure an activity reduction schedule in advance.
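The velocity-scaling idea behind AP-PSO can be sketched as follows. We take the measured activity to be the mean absolute velocity component, an assumed definition since the precise one appears in the cited work:

```python
import numpy as np

def scale_to_target_activity(v, act0):
    """Rescale all velocities so the swarm's measured activity
    (here: mean |velocity component|, an assumed definition)
    matches the target activity act0."""
    v = np.asarray(v, dtype=float)
    act = np.mean(np.abs(v))   # measured swarm activity
    s = act0 / act             # scaling factor
    return s * v               # each v_i becomes s * v_i
```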

The proposed PSO techniques in Sections 3.1 - 3.3 above focus on only one kind of control parameter at a time, and assign parameter values that differ between swarms. Nonetheless, adaptive control is also possible when several control parameters are simultaneously and dynamically exchanged between swarms. For example, we can consider N_{s} swarms each having a different inertia factor ω and a different target activity Act_{0}: we call this technique the Inertia-factor and Activity Parallel PSO (IAP-PSO). Likewise, simultaneously exchanging the inertia factor ω and the allocation parameter α yields the Inertia-factor and Learning-factor Parallel PSO (ILP-PSO). As for IP-PSO in Section 3.1, a given pair of parameter values is exchanged between swarms with adjacent values according to Metropolis decisions in the same manner as Equations (9) and (10).

We evaluate the performance of the proposed techniques using a minimum search problem for the Rastrigin function, a representative multimodal function. Our Rastrigin function is represented by the following equation:

f(x) = Σ_{j=1}^{N_d} [ x_{j}^{2} − 10 cos(2π x_{j}) + 10 ]

The Rastrigin function is multimodal, its variables are completely independent of each other, and it has a minimum value of f(x) = 0 at x_{j} = 0 (j = 1, …, N_{d}). This study uses N_{d} = 100, and the initial coordinates and initial velocity of each individual were set according to uniform random numbers in their respective prescribed ranges.
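For reproducibility, the standard N_{d}-dimensional Rastrigin function can be implemented directly:

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function; global minimum f = 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

Every unit hypercube cell around an integer lattice point contains a local minimum, which is what makes this benchmark hard for purely local search.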

We first evaluated the proposed PSO techniques in which only a single control parameter is exchanged in the search process. We observed the relationship between successful transitions in control parameter value and changes in objective function value, and compared the performance of the proposed techniques IP-PSO, LP-PSO, and AP-PSO with that of a Linearly Decreasing Inertia factor PSO (LDI-PSO), in which the inertia factor linearly and continually decreases according to Equation (7) with increasing search step t.

| | LDI-PSO | IP-PSO | LP-PSO | AP-PSO |
|---|---|---|---|---|
| Dimension N_{d} | 100 | 100 | 100 | 100 |
| Total number of individuals N_{p} | 6400 | 6400 | 6400 | 6400 |
| Number of swarms N_{s} | 1 | 8 | 8 | 8 |
| Inertia factor ω ∈ [0.4, 0.9] | Linearly decreasing | Dynamic exchange | Linearly decreasing | Linearly decreasing |
| Allocation factor α ∈ [0, 1] | 0.5 | 0.5 | Dynamic exchange | 0.5 |
| Target activity Act_{0} ∈ [1, 50] | - | - | - | Dynamic exchange |
| Maximum number of steps t_{max} | 3000 | 3000 | 3000 | 3000 |

We next evaluated the proposed PSO techniques in which multiple parameters are exchanged simultaneously. The four techniques compared were (1) linearly decreasing inertia-factor and learning-factor PSO (LDIL-PSO), in which the inertia factor ω and the allocation parameter α for the learning factors both linearly decrease with increasing step number t; (2) ILP-PSO, in which ω and α are dynamically exchanged; (3) linearly decreasing inertia-factor and activity PSO (LDIA-PSO), in which ω and the target activity Act_{0} both linearly decrease with increasing step number t; and (4) IAP-PSO, in which ω and Act_{0} are dynamically exchanged.

| | LDIL-PSO | ILP-PSO | LDIA-PSO | IAP-PSO |
|---|---|---|---|---|
| Dimension N_{d} | 100 | 100 | 100 | 100 |
| Total number of individuals N_{p} | 6400 | 6400 | 6400 | 6400 |
| Number of swarms N_{s} | 1 | 8 | 1 | 8 |
| Inertia factor ω ∈ [0.4, 0.9] | Linearly decreasing | Dynamic exchange | Linearly decreasing | Dynamic exchange |
| Allocation factor α ∈ [0, 1] | Linearly decreasing | Dynamic exchange | 0.5 | 0.5 |
| Target activity Act_{0} ∈ [1, 50] | - | - | Linearly decreasing | Dynamic exchange |
| Maximum number of steps t_{max} | 3000 | 3000 | 3000 | 3000 |

We proposed five types of adaptive parallel PSO algorithms (IP-PSO, LP-PSO, AP-PSO, ILP-PSO and IAP-PSO) that employ the dynamic exchange of control parameters between multiple swarms, focusing on the PSO control parameters of inertia factor, learning factors and swarm activity. The proposed algorithms were adopted to adaptively regulate control parameters at each step of the search in an experiment consisting of a minimum search problem for a multimodal function. The results show that the systems transition appropriately between global and local solution search phases, meaning that efficient searches that do not stall at local optimum solutions are possible. Superior objective function values were obtained by ILP-PSO in particular; this method achieves adaptive regulation through the simultaneous exchange of the inertia factor and the learning factors. Additional numerical experiments and assessments of performance characteristics on a larger testing pool, such as the CEC benchmark suite, are important topics for future research.

Suzuki, M. (2016) Adaptive Parallel Particle Swarm Optimization Algorithm Based on Dynamic Exchange of Control Parameters. American Journal of Operations Research, 6, 401- 413. http://dx.doi.org/10.4236/ajor.2016.65037