Open Journal of Applied Sciences
Vol. 05, No. 06 (2015), Article ID: 56828, 10 pages

Improved Quantum-Behaved Particle Swarm Optimization

Jianping Li

School of Computer and Information Technology, Northeast Petroleum University, Daqing, China


Copyright © 2015 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

Received 23 April 2015; accepted 30 May 2015; published 2 June 2015


Abstract

To enhance the performance of quantum-behaved particle swarm optimization (PSO), several improvements are proposed. First, an encoding method based on the Bloch sphere is presented. In this method, each particle carries three groups of Bloch coordinates of qubits, and these coordinates are the approximate solutions. The particles are updated by rotating qubits about an axis on the Bloch sphere, which simultaneously adjusts two parameters of each qubit and automatically achieves the best matching of the two adjustments. The optimization is performed in the n-dimensional space $[-1,1]^{n}$, so this approach is suitable for many optimization problems. The experimental results show that this algorithm is superior to the original quantum-behaved PSO.


Keywords

Swarm Intelligence, Particle Swarm Optimization, Quantum Potential Well, Encoding Method

1. Introduction

The particle swarm optimization (PSO) algorithm is a global search strategy that can efficiently handle arbitrary optimization problems. Kennedy and Eberhart introduced the PSO method in 1995 [1] . It subsequently received considerable attention and was shown to be capable of tackling difficult optimization problems. PSO mimics the social interactions between members of biological swarms. A good analogy for illustrating the concept is a swarm of birds. Birds (solution candidates) fly in a specified field looking for food. It is believed that after a certain time (generations; iterations) all birds will gather around the highest concentration of food in the field (the global optimum). At every generation, each bird updates its current location using the local and global optima achieved so far, together with information received from other birds. These social interactions and continuous updates drive the swarm towards the global optimum. The method has received considerable international attention because of its simplicity and its ability to find global solutions to hard optimization problems. At present, the classical PSO method has been successfully applied to combinatorial optimization [2] [3] and numerical optimization [4] [5] . The following improvements have been applied to the classical PSO technique: modification of design parameters [6] - [8] , modification of the update rule of a particle’s location and velocity [9] [10] , integration with other algorithms [11] - [17] , and multiple sub-swarm PSO [18] [19] . These improvements have enhanced the performance of the classical PSO to varying degrees.

Quantum PSO (QPSO) is based on quantum mechanics. A quantum-inspired version of the classical PSO algorithm was first proposed in [20] . Later, Sun et al. introduced the mean best position into the algorithm and proposed a new version of PSO, quantum-behaved particle swarm optimization [21] [22] . The QPSO algorithm lets all particles follow quantum behavior instead of the Newtonian dynamics of the classical PSO: instead of the Newtonian random walk, a quantum motion is used in the search process. The iterative equation of QPSO is very different from that of PSO, and QPSO needs no velocity vectors for the particles. One of the most attractive features of the algorithm is its reduced number of control parameters: only one parameter must be tuned, which makes QPSO easier to implement. The QPSO algorithm has been shown to successfully solve a wide range of continuous optimization problems, and many efficient strategies have been proposed to improve it [23] - [27] .

In order to enhance the optimization ability of QPSO by integrating quantum computation, we propose an improved quantum-behaved particle swarm optimization algorithm. In our algorithm, all particles are encoded by qubits described on the Bloch sphere. The three-dimensional Cartesian coordinates of the qubits can be obtained from projective measurement. Since each qubit has three coordinate values, each particle has three locations, each of which represents an optimization solution. This accelerates the search process by expanding the search scope of each variable from an interval on the number axis to an area of the Bloch sphere. The delta potential well is used to establish the search mechanism. Pauli matrices are used to perform the projective measurement, establish the rotation matrices, and rotate qubits about the rotation axis. The experimental results show that the proposed algorithm is superior to the original one in optimization ability.

2. The QPSO Model

In quantum mechanics, the dynamic behavior of a particle complies with the following Schrödinger equation

$$i\hbar\frac{\partial}{\partial t}\psi(r,t)=\left[-\frac{\hbar^{2}}{2m}\nabla^{2}+V(r)\right]\psi(r,t) \tag{1}$$

where $\hbar$ denotes the reduced Planck constant, $m$ denotes the particle mass, and $V(r)$ denotes the potential energy distribution function.

In the Schrödinger equation, the unknown is the wave function. According to the statistical interpretation of the wave function, the square of its magnitude is the probability density of the particle's position. Taking the delta potential well as an example, the design of QPSO is described as follows.

The potential energy distribution function of the delta potential well can be expressed as

$$V(r)=-\gamma\,\delta(r-p) \tag{2}$$

where $\gamma$ denotes the potential well depth and $p$ denotes the location of the well centre.

Substituting Equation (2) into Equation (1) and solving for the ground state, we obtain the particle’s wave function

$$\psi(r)=\frac{1}{\sqrt{L}}\,e^{-|r-p|/L} \tag{3}$$

where $L=\hbar^{2}/(m\gamma)$ denotes the characteristic length of the delta potential well.

Therefore, the particle’s probability density function can be written as

$$Q(r)=|\psi(r)|^{2}=\frac{1}{L}\,e^{-2|r-p|/L} \tag{4}$$

To increase the probability of a particle moving towards the potential well’s centre, Equation (4) must satisfy the following relationship

$$\lim_{k\to\infty}\int_{p-\varepsilon}^{p+\varepsilon}Q(r)\,\mathrm{d}r=1,\qquad\forall\,\varepsilon>0 \tag{5}$$

where $k$ denotes the iterative step. From Equations (4) and (5), the characteristic length $L$ must satisfy

$$\lim_{k\to\infty}L(k)=0 \tag{6}$$
In the potential well, the dynamic behavior of a particle obeys the Schrödinger equation, so the particle’s location is random at any time. However, the particles in classical PSO obey Newtonian mechanics, where the particles must have definite locations at any time. This contradiction can be satisfactorily resolved by means of the collapse of the wave function, simulated with the Monte Carlo method. We first take a random number $u$ uniformly distributed in the range $(0,1)$ and let $u=e^{-2|r-p|/L}$; inverting this relation, the following result can be obtained

$$r=p\pm\frac{L}{2}\ln\frac{1}{u} \tag{7}$$

Using Equations (6) and (7), we can derive that $\lim_{k\to\infty}r(k)=p$. Let $mbest=\frac{1}{m}\sum_{i=1}^{m}P_{i}$ denote the mean of the personal best positions $P_{i}$ of the $m$ particles. The characteristic length is then evaluated as $L=2\alpha\,|mbest-x(k)|$, where $\alpha$ is the contraction-expansion coefficient. By letting $r=x(k+1)$,

$$x(k+1)=p\pm\alpha\,|mbest-x(k)|\,\ln\frac{1}{u} \tag{8}$$

The above formula is the iterative equation of QPSO [26] [27] .
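As an illustration, the sampling described by this iterative equation can be sketched in Python as follows (a minimal sketch with our own function and variable names, not code from the paper):

```python
import math
import random

def qpso_step(x, pbest, gbest, mbest, alpha):
    """One QPSO update of a single coordinate via delta-well sampling."""
    phi = random.random()                    # random convex weight
    p = phi * pbest + (1.0 - phi) * gbest    # local attractor (well centre)
    u = random.random()                      # u ~ U(0, 1)
    L = 2.0 * alpha * abs(mbest - x)         # characteristic length
    # collapse of the wave function, simulated by inverse-transform sampling
    if random.random() < 0.5:
        return p + 0.5 * L * math.log(1.0 / u)
    return p - 0.5 * L * math.log(1.0 / u)
```

Note that when the particle already sits at the mean best position, the characteristic length vanishes and the update collapses onto the attractor, which mirrors the convergence condition on $L$.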

3. The QPSO Improvement Based on Quantum Computing

In this section, we propose a Bloch sphere-based quantum-behaved particle swarm optimization algorithm called BQPSO.

3.1. The Spherical Description of Qubits

In quantum computing, a qubit is a two-level quantum system, described by a two-dimensional complex Hilbert space. From the superposition principle, any state of the qubit may be written as

$$|\varphi\rangle=\cos\frac{\theta}{2}\,|0\rangle+e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle \tag{9}$$

where $0\le\theta\le\pi$ and $0\le\phi<2\pi$. Therefore, unlike the classical bit, which can only equal 0 or 1, the qubit resides in a vector space parameterized by the continuous variables $\theta$ and $\phi$. The normalization condition means that the qubit’s state can be represented by a point on a sphere of unit radius, called the Bloch sphere. The Bloch sphere representation is useful as it provides a geometric picture of the qubit and of the transformations that can be applied to its state. This sphere can be embedded in a three-dimensional space of Cartesian coordinates $(x,y,z)$. Thus, the state can be written as the point

$$P=(x,y,z)=(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta) \tag{10}$$

The optimization is performed in $[-1,1]^{n}$, so the proposed method can be easily adapted to a variety of optimization problems.

3.2. The BQPSO Encoding Method

In BQPSO, all particles are encoded by qubits described on the Bloch sphere. Set the swarm size to $m$ and the space dimension to $n$. Then the $i$-th particle is encoded as

$$X_{i}=\left[\,|\varphi_{i1}\rangle,\ |\varphi_{i2}\rangle,\ \ldots,\ |\varphi_{in}\rangle\,\right] \tag{11}$$

where $|\varphi_{ij}\rangle=\cos\frac{\theta_{ij}}{2}|0\rangle+e^{i\phi_{ij}}\sin\frac{\theta_{ij}}{2}|1\rangle$, $i=1,2,\ldots,m$, $j=1,2,\ldots,n$.

From the principles of quantum computing, the coordinates $x$, $y$, and $z$ of a qubit on the Bloch sphere can be measured by the Pauli operators written as

$$\sigma_{x}=\begin{bmatrix}0&1\\1&0\end{bmatrix},\qquad \sigma_{y}=\begin{bmatrix}0&-i\\i&0\end{bmatrix},\qquad \sigma_{z}=\begin{bmatrix}1&0\\0&-1\end{bmatrix} \tag{12}$$

Let $|\varphi_{ij}\rangle$ denote the $j$-th qubit of the $i$-th particle. The coordinates $(x_{ij},y_{ij},z_{ij})$ of $|\varphi_{ij}\rangle$ can be obtained by the Pauli operators using

$$x_{ij}=\langle\varphi_{ij}|\sigma_{x}|\varphi_{ij}\rangle,\qquad y_{ij}=\langle\varphi_{ij}|\sigma_{y}|\varphi_{ij}\rangle,\qquad z_{ij}=\langle\varphi_{ij}|\sigma_{z}|\varphi_{ij}\rangle \tag{13}$$
In BQPSO, the Bloch coordinates of each qubit are regarded as three paratactic location components, so each particle contains three paratactic locations, and each location represents an optimization solution. Therefore, in the unit space $[-1,1]^{n}$, each particle simultaneously represents three optimization solutions, which can be described as follows

$$X_{i}^{x}=(x_{i1},\ldots,x_{in}),\qquad X_{i}^{y}=(y_{i1},\ldots,y_{in}),\qquad X_{i}^{z}=(z_{i1},\ldots,z_{in}) \tag{14}$$
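The equivalence between the angle parameterization and the Pauli-operator measurement can be checked numerically; the sketch below (hypothetical helper names, assuming NumPy) computes the Bloch coordinates both ways:

```python
import numpy as np

def bloch_coordinates(theta, phi):
    """Bloch coordinates (x, y, z) from the polar angle theta and azimuth phi."""
    return (np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta))

def pauli_measure(theta, phi):
    """The same coordinates as Pauli expectation values <phi|sigma|phi>."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    return tuple(float(np.real(psi.conj() @ M @ psi)) for M in (sx, sy, sz))
```

Applying `bloch_coordinates` to the angle arrays of a particle yields its three paratactic location vectors at once.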
3.3. Solution Space Transformation

In BQPSO, each particle contains the $3n$ Bloch coordinates of $n$ qubits, which must be transformed from the unit space to the solution space of the optimization problem. Each of the Bloch coordinates corresponds to an optimization variable in the solution space. Let the $j$-th variable of the optimization problem be $X_{j}\in[a_{j},b_{j}]$, and let $(x_{ij},y_{ij},z_{ij})$ denote the coordinates of the $j$-th qubit on the $i$-th particle. Then the corresponding variables in the solution space are computed as follows

$$X_{ij}^{x}=\tfrac{1}{2}\left[b_{j}(1+x_{ij})+a_{j}(1-x_{ij})\right]$$
$$X_{ij}^{y}=\tfrac{1}{2}\left[b_{j}(1+y_{ij})+a_{j}(1-y_{ij})\right]$$
$$X_{ij}^{z}=\tfrac{1}{2}\left[b_{j}(1+z_{ij})+a_{j}(1-z_{ij})\right] \tag{15}$$
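This affine transformation is easy to express in code; a one-line sketch (the names are ours, not from the paper):

```python
def to_solution_space(coord, a, b):
    """Map a Bloch coordinate in [-1, 1] linearly to the variable range [a, b]."""
    return 0.5 * (b * (1.0 + coord) + a * (1.0 - coord))
```

The endpoints -1 and 1 map to a and b respectively, and 0 maps to the interval midpoint.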
3.4. The Optimal Solutions Update

By substituting the three solutions described by the $i$-th particle into the fitness function, we may compute the three fitness values $f(X_{i}^{x})$, $f(X_{i}^{y})$ and $f(X_{i}^{z})$, for $i=1,2,\ldots,m$. Let $f_{g}$ denote the best fitness so far and $G$ denote the corresponding best particle. Let $f_{i}^{b}$ denote the own best fitness of the $i$-th particle and $P_{i}$ denote the corresponding best particle. Further, let $f_{i}=\min\{f(X_{i}^{x}),f(X_{i}^{y}),f(X_{i}^{z})\}$ and let $B_{i}$ denote the corresponding location. If $f_{i}<f_{i}^{b}$, then $f_{i}^{b}=f_{i}$ and $P_{i}=B_{i}$. If $f_{i}^{b}<f_{g}$, then $f_{g}=f_{i}^{b}$ and $G=P_{i}$.

3.5. Particle Locations Update

In BQPSO, we search on the Bloch sphere. That is, we rotate each qubit around an axis towards the target qubit. This rotation simultaneously changes the two parameters $\theta$ and $\phi$ of the qubit, which simulates quantum behavior and enhances the optimization ability.

For the $i$-th particle, let $P_{ij}$ denote the current location of the $j$-th qubit on the Bloch sphere, and let $P_{ij}^{l}$ and $P_{j}^{g}$ denote its own best location and the global best location on the Bloch sphere. According to [27] , for $j=1,2,\ldots,n$, the two centres appearing in Equation (8), namely the local attractor $p_{ij}$ and the mean best position $M_{j}$, can be obtained using

$$p_{ij}(k)=r\,P_{ij}^{l}(k)+(1-r)\,P_{j}^{g}(k) \tag{16}$$

$$M_{j}(k)=\frac{1}{m}\sum_{i=1}^{m}P_{ij}^{l}(k) \tag{17}$$

where $m$ denotes the number of particles, $r$ denotes a random number uniformly distributed in $(0,1)$, and $k$ denotes the iterative step.

Let $O$ denote the centre of the Bloch sphere and $\theta_{ij}$ denote the angle between $\overrightarrow{OP_{ij}}$ and $\overrightarrow{Op_{ij}}$. From the QPSO iteration equation, to make $P_{ij}$ move towards $p_{ij}$, the qubit needs to be rotated through an angle $\delta_{ij}$ on the Bloch sphere so that its angular distance to the attractor contracts in direct analogy with Equation (8), i.e.

$$\theta_{ij}(k+1)=\alpha\,\big|\theta_{ij}^{M}(k)-\theta_{ij}(k)\big|\,\ln\frac{1}{u},\qquad \delta_{ij}=\theta_{ij}(k)-\theta_{ij}(k+1) \tag{18}$$

where $M_{j}(k)$ is the point on the Bloch sphere corresponding to the mean best position and $\theta_{ij}^{M}(k)$ denotes the angle between $\overrightarrow{OM_{j}(k)}$ and $\overrightarrow{Op_{ij}}$. Let the qubit corresponding to the point $p_{ij}$ be $|\varphi_{ij}^{p}\rangle$. From the above equation, the new location of $|\varphi_{ij}\rangle$ is the location obtained after $|\varphi_{ij}\rangle$ is rotated through the angle $\delta_{ij}$ towards $|\varphi_{ij}^{p}\rangle$.

To achieve this rotation, it is crucial to determine the rotation axis, as it directly impacts the convergence speed and efficiency of the algorithm. According to the definition of the vector product, the axis for rotating $\overrightarrow{OP_{ij}}$ towards $\overrightarrow{Op_{ij}}$ through the angle $\delta_{ij}$ can be written as

$$\boldsymbol{R}_{ij}=\frac{\overrightarrow{OP_{ij}}\times\overrightarrow{Op_{ij}}}{\big\|\overrightarrow{OP_{ij}}\times\overrightarrow{Op_{ij}}\big\|} \tag{19}$$
From the principles of quantum computing, the rotation matrix about the axis $\boldsymbol{R}_{ij}$ that rotates the current qubit towards the target qubit can be written as

$$U_{ij}=\cos\frac{\delta_{ij}}{2}\,I-\mathrm{i}\sin\frac{\delta_{ij}}{2}\,\big(\boldsymbol{R}_{ij}\cdot\boldsymbol{\sigma}\big) \tag{20}$$

where $I$ is the identity matrix and $\boldsymbol{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$, and the rotation operation can be written as

$$|\varphi_{ij}(k+1)\rangle=U_{ij}\,|\varphi_{ij}(k)\rangle \tag{21}$$

where $i=1,2,\ldots,m$, $j=1,2,\ldots,n$, and $k$ denotes the iterative step.
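Since a qubit rotation through an angle $\delta$ about an axis moves the corresponding Bloch vector through the same angle $\delta$ about that axis, the location update can equivalently be carried out on the coordinate vectors with Rodrigues' rotation formula. A sketch (our names, assuming NumPy):

```python
import numpy as np

def rotate_on_bloch_sphere(v, target, delta):
    """Rotate Bloch vector v through angle delta about the unit axis
    (v x target) / |v x target|, i.e. along the great circle towards target."""
    v = np.asarray(v, dtype=float)
    target = np.asarray(target, dtype=float)
    axis = np.cross(v, target)
    norm = np.linalg.norm(axis)
    if norm < 1e-12:                  # v (anti)parallel to target: axis undefined
        return v
    n = axis / norm
    # Rodrigues' rotation formula
    return (v * np.cos(delta)
            + np.cross(n, v) * np.sin(delta)
            + n * np.dot(n, v) * (1.0 - np.cos(delta)))
```

For example, rotating the point (1, 0, 0) towards (0, 1, 0) through a quarter turn lands exactly on the target, illustrating the shortest great-circle path.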

4. Experimental Results and Analysis

4.1. Test Functions

Many benchmark numerical functions are commonly used to evaluate and compare optimization algorithms. In this section, the performance of the proposed BQPSO algorithm is evaluated on 8 standard, unconstrained, single-objective benchmark functions with different characteristics, taken from [28] - [30] . All of the functions are minimization problems.

4.2. Experimental Setup

For all problems, the following parameters are used unless a change is mentioned. Population size: NP = 100 when D = 30 and NP = 80 when D = 20. The value to reach (VTR), i.e. the precision of a desired solution, is set per function, taking the values $10^{-5}$, 0.001, 0.1, 10, or 100 depending on the benchmark. The maximum number of function evaluations (MNFE): MNFE = 20000. The control parameter $\alpha$ is set to the same value for both algorithms. Halting criterion: when the MNFE is reached, the execution of the algorithm is stopped.

To minimize the effect of the stochastic nature of the algorithms, 50 independent runs were performed on each of the 8 functions, and the reported indexes for each function are averages over the 50 trials. If an algorithm finds the global minimum with the predefined precision within the preset MNFE, the algorithm is said to have succeeded; otherwise it fails. All of the algorithms were implemented in standard Matlab 7.0, and the experiments were executed on a P-II 2.0 GHz machine with 1.0 GB RAM under the WIN-XP platform.

4.3. Performance Criteria

Five performance criteria were selected from [31] to evaluate the performance of the algorithms. These criteria are also used in [32] and are described as follows.

Error: The error of a solution $x$ is defined as $f(x)-f(x^{*})$, where $x^{*}$ is the global optimum of the function. The error was recorded when the MNFE was reached, and the average (mean) and standard deviation (std dev) of the 50 error values were calculated.

NFE: When the VTR was reached, the number of function evaluations (NFE) was recorded. If the VTR was not reached within the preset MNFE then the NFE is equal to MNFE. The average (mean) and standard deviation (std dev) of the 50 NFE values were calculated.

Number of successful runs (SR): The number of successful runs was recorded when the VTR was reached before the MNFE was satisfied.

Running time (time (s)): Running time indicates the average time per function evaluation.
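For concreteness, these criteria could be tabulated from raw run records as follows (a hypothetical helper using only the standard library, not the authors' Matlab code):

```python
import statistics

def summarize_runs(errors, nfes, vtr, mnfe):
    """Compute the reported criteria from the per-run errors and NFE counts.

    A run counts as successful if it reached the VTR before exhausting the MNFE.
    """
    sr = sum(e <= vtr and n < mnfe for e, n in zip(errors, nfes))
    return {
        "error_mean": statistics.mean(errors),
        "error_std": statistics.stdev(errors),
        "nfe_mean": statistics.mean(nfes),
        "nfe_std": statistics.stdev(nfes),
        "SR": sr,
    }
```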

4.4. Comparison Results

In this section, we compare our approach with the classical QPSO of [26] , to demonstrate the superiority of BQPSO. The parameters used for the two algorithms are described in Section 4.2. The results were calculated using 50 independent runs. Table 1 shows the mean and standard deviation of the errors of BQPSO and QPSO on 8 benchmark functions. The mean and standard deviation of NFE are shown in Table 2.

From Table 1 and Table 2, we can see that BQPSO performs significantly better than QPSO on all 8 functions. For some of the functions, BQPSO succeeds in finding the minimum in all runs; for the other functions, BQPSO succeeds much more often than QPSO. Furthermore, BQPSO obtains smaller means and standard deviations than QPSO on all 8 functions.

Table 1. Comparison of the mean and standard deviation of the error of BQPSO and QPSO on 8 benchmark functions.

Table 2. Comparison of the mean and standard deviation of the NFE of BQPSO and QPSO on 8 benchmark functions.

In particular, for several of the functions, BQPSO succeeds many times while all runs of QPSO fail. In Table 1, we can see that there are significant differences in quality between the BQPSO and QPSO solutions of the high-dimensional functions.

In Table 2, the MNFE is fixed at 20000 for all 8 functions. From this table it can be observed that, for all functions, BQPSO requires fewer NFE than QPSO. For some high-dimensional functions, QPSO fails to reach the VTR after 20,000 function evaluations while BQPSO is successful. It is worth noting that, from Table 1, the running time of BQPSO is about 10 to 20 times longer than that of QPSO. According to the no free lunch theorem, the superior performance of BQPSO comes at the expense of a longer running time.

It can be concluded that the overall performance of BQPSO is better than that of QPSO for all 8 functions. The improvement based on quantum computing can accelerate the classical QPSO algorithm and significantly reduce the NFE to reach the VTR for all of the test functions.

4.5. The Comparison of BQPSO with Other Algorithms

In this subsection, we compare BQPSO with other state-of-the-art algorithms to demonstrate its accuracy and performance. These algorithms include a genetic algorithm with elitist strategy (GA), a differential evolution algorithm (DE), and a bee colony algorithm (BC). Each algorithm was run with its own control parameters: the contraction-expansion coefficient $\alpha$ for BQPSO; the crossover and mutation probabilities for GA; the scaling factor and crossover probability for DE; and, for BC, the population sizes of the whole colony, the employed bees and the onlooker bees, together with the threshold of a tracking bee searching around a mining bee. The other parameters used for the four algorithms are the same as described in Section 4.2. The eight high-dimensional functions were used for these experiments, each with 50 independent runs. Table 3 shows the mean of these 50 errors and the number of successful runs. The mean and standard deviation of the NFE are shown in Table 4.

From Table 3 and Table 4 it can be argued that BQPSO performed best among the four algorithms: it obtained the best results for all eight benchmark functions.

Table 3. Comparison of the mean of the error and the number of successful runs of the four algorithms.

Table 4. Comparison of the mean of the error and the standard deviation of NFE of the four algorithms.

Among the remaining three algorithms, the best performer is less obvious. The DE algorithm performed well on average; it obtained the best results among the three for some benchmark functions, but it did not successfully optimize the functions f2, f4, and f7, because it became trapped in local optima. The BC achieved the best results among the three for the 30-dimensional functions f2, f4, and f7. The GA achieved the best results among the three for the 20-dimensional functions f2 and f7, while the DE achieved the best results for the 20-dimensional function f4. According to the experimental results, the algorithms can be ordered by optimization performance, from high to low, as BQPSO, DE, BC, GA. This demonstrates the superiority of BQPSO.

These results can be explained as follows. First, in BQPSO, the two parameters $\theta$ and $\phi$ of a qubit are simultaneously adjusted by rotating the current qubit through an angle about the rotation axis, and this rotation automatically achieves the best matching of the two adjustments. In other words, when the current qubit moves towards the target qubit, the path is the minor arc of a great circle on the Bloch sphere, which is clearly the shortest. This rotation with the best matching of the two adjustments therefore has a higher optimization ability. Secondly, the three-chain structure of the particle encoding enhances the ergodicity of the search over the solution space. These advantages are absent in the other three algorithms.

5. Conclusion

This paper presents an improved quantum-behaved particle swarm optimization algorithm. Unlike the classical QPSO, in our approach the particles are encoded by qubits described on the Bloch sphere. In this encoding, each particle contains three groups of Bloch coordinates of qubits, and all three groups of coordinates are regarded as approximate solutions describing the optimization result. As three solutions are synchronously updated in each optimization step (with the same swarm size as QPSO), our encoding method can extend the search range and accelerate the optimization process. In our approach, the particles are updated by rotating qubits through an angle about an axis on the Bloch sphere, and the rotation angles of the qubits are computed according to the iteration equation of the classical QPSO. This updating approach can simultaneously adjust the two parameters of each qubit, and can automatically achieve the best matching of the two adjustments. The experimental results reveal that the proposed approach enhances the optimization ability of the classical quantum-behaved particle swarm optimization algorithm, and for high-dimensional optimization the enhancement is remarkable. In addition, our approach adapts more quickly than the classical QPSO when the control parameter changes. Further research will focus on enhancing the computational efficiency of BQPSO without reducing its optimization performance.


  1. Kennedy, J. and Eberhart, R.C. (1995) Particle Swarm Optimization. Proceedings of IEEE International Conference on Neural Networks, 4, 1942-1948.
  2. Guo, W.Z., Chen, G.L. and Peng, S.J. (2011) Hybrid Particle Swarm Optimization Algorithm for VLSI Circuit Partitioning. Journal of Software, 22, 833-842.
  3. Hamid, M., Saeed, J., Seyed, M., et al. (2013) Dynamic Clustering Using Combinatorial Particle Swarm Optimization. Applied Intelligence, 38, 289-314.
  4. Lin, S.W., Ying, K.C. and Chen, S.C. (2008) Particle Swarm Optimization for Parameter Determination and Feature Selection of Support Vector Machines. Expert Systems with Applications, 35, 1817-1824.
  5. Yamina, M. and Ben, A. (2012) Psychological Model of Particle Swarm Optimization Based Multiple Emotions. Applied Intelligence, 36, 649-663.
  6. Cai, X.J., Cui, Z.H. and Zeng, J.C. (2008) Dispersed Particle Swarm Optimization. Information Processing Letters, 105, 231-235.
  7. Bergh, F. and Engelbrecht, A.P. (2005) A Study of Particle Swarm Optimization Particle Trajectories. Information Sciences, 176, 937-971.
  8. Chatterjee, A. and Siarry, P. (2007) Nonlinear Inertia Weight Variation for Dynamic Adaptation in Particle Swarm Optimization. Computers & Operations Research, 33, 859-871.
  9. Lu, Z.S. and Hou, Z.R. (2004) Particle Swarm Optimization with Adaptive Mutation. Acta Electronica Sinica, 32, 416-420.
  10. Liu, Y., Qin, Z. and Shi, Z.W. (2007) Center Particle Swarm Optimization. Neurocomputing, 70, 672-679.
  11. Liu, B., Wang, L. and Jin, Y.H. (2005) Improved Particle Swarm Optimization Combined with Chaos. Chaos Solitons & Fractals, 25, 1261-1271.
  12. Luo, Q. and Yi, D.Y. (2008) A Co-Evolving Framework for Robust Particle Swarm Optimization. Applied Mathematics and Computation, 199, 611-622.
  13. Zhang, Y.J. and Shao, S.F. (2011) Cloud Mutation Particle Swarm Optimization Algorithm Based on Cloud Model. Pattern Recognition & Artificial Intelligence, 24, 90-95.
  14. Zhu, H.M. and Wu, Y.P. (2010) A PSO Algorithm with High Speed Convergence. Control and Decision, 25, 20-24.
  15. Wang, K. and Zheng, Y.J. (2012) A New Particle Swarm Optimization Algorithm for Fuzzy Optimization of Armored Vehicle Scheme Design. Applied Intelligence, 37, 520-526.
  16. Salman, A.K. and Andries, P.E. (2012) A Fuzzy Particle Swarm Optimization Algorithm for Computer Communication Network Topology Design. Applied Intelligence, 36, 161-177.
  17. Mohammad, S.N., Mohammad, R.A. and Maziar, P. (2012) LADPSO: Using Fuzzy Logic to Conduct PSO Algorithm. Applied Intelligence, 37, 290-304.
  18. Zheng, Y.J. and Chen, S.Y. (2013) Cooperative Particle Swarm Optimization for Multi-Objective Transportation Planning. Applied Intelligence, 39, 202-216.
  19. Jose, G.N. and Enrique, A. (2012) Parallel Multi-Swarm Optimizer for Gene Selection in DNA Microarrays. Applied Intelligence, 37, 255-266.
  20. Sun, J., Feng, B. and Xu, W.B. (2004) Particle Swarm Optimization with Particles Having Quantum Behavior. Proceedings of IEEE Conference on Evolutionary Computation, 1, 325-331.
  21. Sun, J., Feng, B. and Xu, W.B. (2004) A Global Search Strategy of Quantum-Behaved Particle Swarm Optimization. Proceedings of IEEE Conference on Cybernetics and Intelligent Systems, 1, 111-116.
  22. Sun, J., Xu, W.B. and Feng, B. (2005) Adaptive Parameter Control for Quantum-Behaved Particle Swarm Optimization on Individual Level. Proceedings of IEEE Conference on Cybernetics and Intelligent Systems, 4, 3049-3054.
  23. Said, M.M. and Ahmed, A.K. (2005) Investigation of the Quantum Particle Swarm Optimization Technique for Electromagnetic Applications. Proceedings of IEEE Antennas and Propagation Society International Symposium, Washington DC, 3-8 July 2005, 45-48.
  24. Sun, J., Xu, W.B. and Fang, W. (2006) Quantum-Behaved Particle Swarm Optimization Algorithm with Controlled Diversity. Proceedings of International Conference on Computational Science, University of Reading, 28-31 May 2006, 847-854.
  25. Xia, M.L., Sun, J. and Xu, W.B. (2008) An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Weighted Mean Best Position. Applied Mathematics and Computation, 205, 751-759.
  26. Fang, W., Sun, J., Xie, Z.P. and Xu, W.B. (2010) Convergence Analysis of Quantum-Behaved Particle Swarm Optimization Algorithm and Study on Its Control Parameter. Acta Physica Sinica, 59, 3686-3693.
  27. Said, M.M. and Ahmed, A.K. (2006) Quantum Particle Swarm Optimization for Electromagnetics. IEEE Transactions on Antennas and Propagation, 54, 2765-2775.
  28. Gao, W.F., Liu, S.Y. and Huang, L.L. (2012) A Global Best Artificial Bee Colony Algorithm for Global Optimization. Journal of Computational and Applied Mathematics, 236, 2741-2753.
  29. Adam, P.P., Jaroslaw, J. and Napiorkowski, A.K. (2012) Differential Evolution Algorithm with Separated Groups for Multi-Dimensional Optimization Problems. European Journal of Operational Research, 216, 33-46.
  30. Liu, G., Li, Y.X., Nie, X. and Zheng, H. (2012) A Novel Clustering-Based Differential Evolution with 2 Multi-Parent Crossovers for Global Optimization. Applied Soft Computing, 12, 663-681.
  31. Suganthan, P.N., Hansen, N. and Liang, J.J. (2005) Problem Definitions and Evaluation Criteria for the CEC2005 Special Session on Real Parameter Optimization.
  32. Noman, N. and Iba, H. (2008) Accelerating Differential Evolution Using an Adaptive Local Search. IEEE Transactions on Evolutionary Computation, 12, 107-125.