Applied Mathematics
Vol. 3, No. 10A (2012), Article ID: 24095, 9 pages. DOI: 10.4236/am.2012.330184

Enhanced Particle Swarm Optimization Based Local Search for Reactive Power Compensation Problem

Abd Allah A. Mousa1,2, Mohamed A. El-Shorbagy2

1Department of Mathematics and Statistics, Faculty of Sciences, Taif University, Taif, KSA

2Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shibin El Kom, Egypt

Email: a_mousa15@yahoo.com

Received June 10, 2012; revised July 10, 2012; accepted July 17, 2012

Keywords: Multiobjective Optimization; Particle Swarm Optimization; Local Search

ABSTRACT

This paper presents an enhanced Particle Swarm Optimization (PSO) algorithm applied to the reactive power compensation (RPC) problem. It is based on a combination of a Genetic Algorithm (GA) and PSO, integrating the merits of both techniques, and has two characteristic features. First, the algorithm is initialized with a set of random particles that travel through the search space; during this travel the particles are evolved by a hybrid of PSO and GA to obtain an approximate nondominated solution set. Second, to improve the solution quality, a dynamic version of the pattern search technique is implemented as a neighborhood search engine, which explores the less-crowded areas of the current archive to possibly obtain additional nondominated solutions. The proposed approach is carried out on the standard IEEE 30-bus 6-generator test system. The results demonstrate the capability of the proposed approach to generate true and well-distributed Pareto-optimal nondominated solutions of the multiobjective RPC problem.

1. Introduction

Reactive Power Compensation (RPC) is a very important issue in the expansion planning and operation of power systems because it leads to increased transmission capability, reduced losses and an improved power factor; shunt capacitors have therefore been very commonly installed in transmission and distribution networks [1,2]. Applying capacitors adjacent to loads yields several advantages, among them [3-5]:

1)    Improved power factor

2)    Reduced transmission losses

3)    Increased transmission capability

4)    Improved voltage control

5)    Improved power quality.

Achievement of these items depends mainly on an adequate allocation of shunt capacitor banks. Thus, this problem can be stated as the determination of the locations and the capacities of new sources of reactive power, searching simultaneously to accomplish the following goals:

1)    A good bus tension profile: The quality of service is directly related to the difference between the effective and the nominal bus voltage;

2)    Minimization of transmission losses: Active power transmission losses can be directly translated into monetary losses since they are the main component in the difference between the generated power and the consumed power;

3)    Minimization of the amount of reactive compensation installed: Although shunt capacitor compensation generally provides the most economical reactive power source for voltage control, heavy use of these devices could lead to the reduction of stability margins and poor voltage regulation [6,7].

Traditionally, this problem is addressed as a Single-Objective Optimization Problem (SOP) [8-14]. A Single-Objective Optimization Algorithm (SOA) usually provides a unique optimal solution. Typically, the objective function is formulated as a linear combination of several factors, such as investment or transmission losses, subject to operational constraints such as reliability and voltage profile. The factors considered as optimization objectives are usually contradictory, making it very difficult to find the right linear combination. In practice, most problems have more than one objective to be optimized; e.g., the RPC problem requires the optimization of investment, power losses, and voltage profile, and these objectives are usually contradictory. Accordingly, a single-objective optimization algorithm is not preferable for solving the RPC problem. Considering this situation, Multiobjective Optimization Algorithms (MOA) were proposed to optimize several objectives independently and simultaneously [15-17]. An MOA therefore usually provides a whole set of optimal trade-off solutions known as the Pareto set. The Pareto set gives the engineer the opportunity to consider more options before making a final decision.

Local search techniques have long been used to attack many optimization problems [18,19]. The basic idea is to start from an initial solution and to search for successive improvements by examining neighboring solutions. The local search used in this paper is based on a dynamic version of the pattern search technique, a popular paradigm in Direct Search (DS) methods [20]. DS methods can be used to solve constrained optimization problems and, as opposed to more standard optimization methods, are often called derivative-free because they do not require any information about the gradient or higher derivatives of the objective function to search for an optimal solution. Therefore, direct search methods may very well be used to solve noncontinuous, nondifferentiable and multimodal (i.e., multiple local optima) optimization problems.

To solve the RPC problem, this paper presents a new local search approach based on a hybrid particle swarm optimization (PSO). It combines two heuristic optimization techniques, GA and PSO, and integrates the merits of both. In order to improve the solution quality, we implement a modified local search algorithm. Finally, the standard IEEE 30-bus 6-generator test system is used to verify the validity of the proposed approach.

This paper is organized as follows. In Section 2, multiobjective optimization is described. Section 3 provides a multiobjective formulation of the RPC problem. In Section 4, the proposed algorithm is presented. Implementation of the proposed approach is described in Section 5. Results are given in Section 6. Finally, Section 7 gives a brief conclusion of this study.

2. Multiobjective Optimization

A Multi-objective Optimization Problem (MOP) can be defined as determining a vector of design variables within a feasible region that minimizes a vector of objective functions that usually conflict with each other. Such a problem takes the form:

$$\min_{X} \; F(X) = \left[f_1(X), f_2(X), \ldots, f_m(X)\right] \quad \text{subject to} \quad g(X) \le 0,$$

where X is the vector of decision variables; $f_i(X)$ is the i-th objective function; and g(X) is the constraint vector. A decision vector X is said to dominate a decision vector Y (also written $X \prec Y$) if $f_i(X) \le f_i(Y)$ for all i and $f_j(X) < f_j(Y)$ for at least one j. All decision vectors that are not dominated by any other decision vector are called nondominated or Pareto-optimal. These are solutions for which no objective can be improved without detracting from at least one other objective.
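To make the dominance relation concrete, the following minimal Python sketch (an illustration added here, not code from the paper) tests whether one objective vector dominates another when all objectives are minimized:

```python
from typing import Sequence

def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
    """Return True if objective vector fx dominates fy (all objectives minimized)."""
    no_worse = all(a <= b for a, b in zip(fx, fy))          # fx no worse in every objective
    strictly_better = any(a < b for a, b in zip(fx, fy))    # and strictly better in at least one
    return no_worse and strictly_better

# fx is equal in the first objective and strictly better in the second:
print(dominates([1.0, 2.0], [1.0, 3.0]))  # True
print(dominates([1.0, 3.0], [1.0, 2.0]))  # False
```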

3. Multiobjective Formulation of RPC

The following assumptions are considered in the formulation of the problem:

1)    A shunt-capacitor bank cost per MVAr is the same for all busbars of the power system;

2)    Power system is considered only at peak load.

Based on these considerations [20,21], three objective functions f1(·), f2(·) and f3(·) (to be minimized) have been identified for the present work: f1(·) and f2(·) are related to investment and transmission losses, while f3(·) is related to quality of service.

The objective functions to be considered are:

3.1. F1: Investment in Reactive Compensation Devices

$$F_1 = \alpha \sum_{i=1}^{n} B_i, \qquad F_1 \le F_{1\max}, \qquad 0 \le B_i \le B_{i\max},$$

where for simplicity the cost per MVAr is taken as unity (α = 1); n is the number of buses in the power system; F1 is the total required compensation; F1max is the maximum amount available for investment; Bi is the compensation at busbar i, measured in MVAr; and Bimax is the maximum compensation allowed at a particular bus of the system.

3.2. F2: Active Power Losses

$$F_2 = P_g - P_l,$$

where F2 is the total active transmission losses of the power system in MW; Pg is the total active power generated in MW; and Pl is the total load of the system in MW.

3.3. F3: Average Voltage Deviation

$$F_3 = \frac{1}{n} \sum_{i=1}^{n} \left| V_i - V_i^{des} \right|,$$

where F3 is the per-unit (pu) average voltage deviation; $V_i$ is the actual voltage at busbar i (pu); and $V_i^{des}$ is the desired voltage at busbar i (pu).
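As an illustration only (not the authors' code), the three objectives can be evaluated as follows once a load flow has supplied the total generation, total load and bus voltages; the argument names are hypothetical placeholders matching the symbols above:

```python
import numpy as np

def f1_investment(b: np.ndarray, alpha: float = 1.0) -> float:
    """F1: total reactive compensation, alpha times the sum of bank sizes (MVAr)."""
    return alpha * float(np.sum(b))

def f2_losses(p_generated: float, p_load: float) -> float:
    """F2: total active transmission losses in MW (generation minus load)."""
    return p_generated - p_load

def f3_voltage_deviation(v: np.ndarray, v_des: np.ndarray) -> float:
    """F3: average absolute per-unit deviation of bus voltages from their set points."""
    return float(np.mean(np.abs(v - v_des)))
```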

In summary, the optimization problem to be solved is the following:

$$\min_{B} \; \left[F_1(B), F_2(B), F_3(B)\right],$$

subject to

$$0 \le B_i \le B_{i\max}, \quad i = 1, \ldots, n, \qquad F_1 \le F_{1\max},$$

and the load flow equations [22]:

$$P_{Gp} - P_{cp} - V_p \sum_{q=1}^{N_B} V_q Y_{pq} \cos\left(\theta_{pq} + \delta_q - \delta_p\right) = 0,$$

$$Q_{Gp} - Q_{cp} + V_p \sum_{q=1}^{N_B} V_q Y_{pq} \sin\left(\theta_{pq} + \delta_q - \delta_p\right) = 0,$$

where PGp, QGp are the real and reactive power generations at bus p; Pcp, Qcp are the real and reactive power demands at bus p; Vp is the voltage magnitude at bus p; Vq is the voltage magnitude at bus q; δp is the voltage angle at bus p; δq is the voltage angle at bus q; Ypq is the admittance magnitude; θpq is the admittance angle; NB is the total number of buses; p = 1, 2, ···, NB; and q = 1, 2, ···, NB.

The load flow equations reflect the physics of the power system as well as the desired voltage set points throughout the system. The physics of the power system are enforced through the power flow equations which require that the net injection of real and reactive power at each bus sum to zero.
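The sketch below (an illustration under the symbol definitions above and the usual polar-form sign conventions, not the authors' implementation) evaluates the real and reactive power mismatches at every bus; at a valid load flow solution both returned vectors are approximately zero:

```python
import numpy as np

def power_mismatch(v, delta, y_mag, y_ang, p_net, q_net):
    """Real and reactive power mismatches at each bus.

    v, delta     : bus voltage magnitudes and angles (length NB)
    y_mag, y_ang : NB x NB admittance magnitudes and angles
    p_net, q_net : net injections per bus, i.e. generation minus demand
    """
    nb = len(v)
    p_calc = np.zeros(nb)
    q_calc = np.zeros(nb)
    for p in range(nb):
        for q in range(nb):
            ang = y_ang[p, q] + delta[q] - delta[p]
            p_calc[p] += v[p] * v[q] * y_mag[p, q] * np.cos(ang)   # injected real power
            q_calc[p] -= v[p] * v[q] * y_mag[p, q] * np.sin(ang)   # injected reactive power
    return p_net - p_calc, q_net - q_calc                          # mismatches (should be ~0)
```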

To represent the amount of reactive compensation to be allocated at each busbar i, a decision vector B [23] is used to indicate the size of each reactive bank in the power system, i.e.:

$$B = \left[B_1, B_2, \ldots, B_n\right].$$

Thus, RPC is a complex combinatorial optimization problem involving multiple nonlinear functions with multiple local minima, which may be ill defined and nonlinear with discontinuous constraints, leading to a non-convex Pareto-optimal front [23,24].

3.4. Formulation of MORPC

The multiobjective reactive power compensation optimization problem is therefore formulated as:

$$\min_{B} \; \left[F_1(B), F_2(B), F_3(B)\right]$$

subject to

$$0 \le B_i \le B_{i\max}, \quad i = 1, \ldots, n, \qquad F_1 \le F_{1\max},$$

and the load flow equations.

4. The Proposed Approach

In this section we present a novel optimization algorithm to solve the RPC problem formulated in the previous section. The proposed methodology introduces a hybrid approach combining GA and PSO to improve the performance of each algorithm. In addition, to improve the solution quality, we implement a local search (LS) technique as a neighborhood search engine that explores the less-crowded areas of the current archive to possibly obtain more nondominated solutions nearby (i.e., we search near every solution with the LS technique to obtain a new solution that is better than, or nondominated with, the current one, so that the less-crowded areas are discovered automatically). The description diagram of the proposed algorithm is shown in Figure 1, and the algorithm proceeds as follows:

4.1. PSO Stage

In this stage, we implement PSO as follows:

Step 1: Initialize the parameters for PSO, and randomly initialize N particles with positions $x_i^{t=0}$ and velocities $v_i^{t=0}$, where t is the time counter and i = 1, ···, N.

Step 2: Identify the local set $L_i^{t=0}$ of each particle. Also, identify the local preferred element of the i-th particle as $l_i^{t=0}$.

Step 3: Collect all local sets in a pool C such that $C = \bigcup_{i=1}^{N} L_i^{t=0}$.

Step 4: Define a global set $G^{t=0} = ND(C)$, where we assume that the function ND(·) returns all nondominated solutions of its argument.

Step 5: In the objective space, the distances between $x_i^{t=0}$ and the members of $G^{t=0}$ are measured using the Euclidean distance, where the distance between any two d-dimensional points $a = (a_1, \ldots, a_d)$ and $b = (b_1, \ldots, b_d)$ is given by

$$d(a, b) = \sqrt{\sum_{k=1}^{d} \left(a_k - b_k\right)^2}.$$

Figure 1. The description diagram of the proposed algorithm.

The nearest member of $G^{t}$ to the i-th particle is set as its global preferred element.

Step 6: Set the external set $E^{t=0}$ equal to $G^{t=0}$.
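A minimal sketch of the Step 5 rule (illustrative only): the global preferred element of particle i is taken as the archive member nearest to it in objective space, using the Euclidean distance reconstructed above.

```python
import numpy as np

def nearest_member(particle_obj: np.ndarray, archive_obj: np.ndarray) -> int:
    """Index of the archive member nearest to a particle in objective space."""
    dists = np.sqrt(np.sum((archive_obj - particle_obj) ** 2, axis=1))
    return int(np.argmin(dists))

# Usage: archive_obj is an (M, d) array holding the objective vectors of the
# nondominated set G, and particle_obj is the d-dimensional objective vector of
# particle i; the returned row is taken as the global preferred element.
```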

As an example, suppose we have six particles initially located as shown in Figure 2.

• Define the local set of each particle.

• Define the local preferred element of each particle.

• Construct the pool C by collecting all local sets.

• Define the global set of nondominated solutions.

• Identify the global preferred element of each particle.

• Define the external set.

Figure 2. Location of the six initial particles.

Step 7: Update particles: Update the velocity and position of each particle to get the new velocity and position according to the following equations:

$$v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \left(l_i^{t} - x_i^{t}\right) + c_2 r_2 \left(g_i^{t} - x_i^{t}\right),$$

$$x_i^{t+1} = x_i^{t} + v_i^{t+1},$$

where i = 1, 2, ···, N, and N is the size of the population; w is the inertia weight; c1 and c2 are two positive constants, called the cognitive and social parameters respectively; r1 and r2 are random numbers uniformly distributed within the range [0,1]; and $l_i^{t}$ and $g_i^{t}$ are the local and global preferred elements of particle i.
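A compact Python sketch of the Step 7 update under the reconstruction above, with the local preferred element acting as the personal guide and the global preferred element as the social guide; the parameter values shown are illustrative, not the paper's settings:

```python
import numpy as np

def update_particle(x, v, local_pref, global_pref, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a single particle (numpy arrays)."""
    r1 = np.random.rand(len(x))                     # uniform random numbers in [0, 1]
    r2 = np.random.rand(len(x))
    v_new = w * v + c1 * r1 * (local_pref - x) + c2 * r2 * (global_pref - x)
    x_new = x + v_new                               # move the particle
    return x_new, v_new
```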

Step 8: Evolution of particles: To restrict and control the velocity, we present a modified constriction factor (i.e., a dynamic constriction factor) to keep the particles feasible. For example, Figure 3 shows the movement of particle i through the search space with and without the modified constriction factor: the particle starts at position $x_i^{t}$ with velocity $v_i^{t}$ in the feasible space, and its new position depends on this velocity. When the move causes the particle to lose its feasibility, we introduce a modified factor χ such that:

where τ is the age of the infeasible particle (i.e., how long it has remained infeasible), which is increased with the number of failed trials to restore the feasibility of the particle. The new modified position of the particle is computed as:

For each particle we check its feasibility; if it is infeasible, we apply the factor χ to control its position and velocity according to Algorithm 1.
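Since the closed form of the dynamic constriction factor is given only in the paper's equation, which is not reproduced in this text, the sketch below assumes a simple decay χ = 1/(1 + τ) with the age τ of the infeasible particle; the feasibility test `is_feasible` and the cap `max_trials` are hypothetical placeholders:

```python
import numpy as np

def repair_position(x: np.ndarray, v: np.ndarray, is_feasible, max_trials=20):
    """Shrink the step of an infeasible particle until it re-enters the feasible region."""
    tau, chi = 0, 1.0
    x_new = x + v
    while not is_feasible(x_new) and tau < max_trials:
        tau += 1                     # age of the infeasible particle (failed trials so far)
        chi = 1.0 / (1.0 + tau)      # assumed form of the dynamic constriction factor
        x_new = x + chi * v          # retry with a progressively shorter step
    return x_new, chi * v
```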

Step 9: Update the local sets: The new position of each particle is added to its local set $L_i^{t}$ to form $L_i^{t+1}$, which is updated according to Algorithm 2.

Figure 3. The movement of the particle i through search space.

Algorithm 1. Evolution of particles.

Step 10: Update the global set to obtain $G^{t+1} = ND\left(\bigcup_{i=1}^{N} L_i^{t+1}\right)$, which contains all nondominated solutions of the collected local sets.

Step 11: Update the external set $E^{t}$: Copy the members of $G^{t+1}$ to $E^{t}$ and apply the dominance criterion to remove all dominated solutions from $E^{t}$ (i.e., each member of $G^{t+1}$ has three possibilities, as in Algorithm 3).

Step 12: Update the local preferred element and the global preferred element of each particle: In the objective space, the distances between $x_i^{t+1}$ and the members of $L_i^{t+1}$ are measured using the Euclidean distance, and the nearest member of $L_i^{t+1}$ to the i-th particle is set as its local preferred element. Similarly, the distances between $x_i^{t+1}$ and the members of $G^{t+1}$ are measured using the Euclidean distance, and the nearest member of $G^{t+1}$ to the i-th particle is set as its global preferred element.
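Algorithms 2 and 3 themselves are not reproduced in this text; the following sketch shows the usual dominance-based update such a step performs (a candidate dominated by the archive is discarded, and archive members dominated by an accepted candidate are removed). It is an illustration, not the paper's pseudo code.

```python
def dominates(fx, fy):
    """Dominance test as in Section 2: no worse everywhere, strictly better somewhere."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def update_archive(archive, candidates, objectives):
    """Merge candidate solutions into an archive, keeping only nondominated ones.

    archive, candidates : lists of decision vectors
    objectives          : function mapping a decision vector to its objective vector
    """
    merged = list(archive)
    for cand in candidates:
        fc = objectives(cand)
        if any(dominates(objectives(a), fc) for a in merged):
            continue                                     # candidate is dominated: discard it
        merged = [a for a in merged if not dominates(fc, objectives(a))]
        merged.append(cand)                              # candidate enters the archive
    return merged
```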

4.2. GA Stage

In this subsection, we describe the procedure of GA.

Step 1: Initialize parameters for GA.

Step 2: Evaluation & Ranking: A way to transform the values of the objective functions into the fitness of each string in the genotype world is to combine the m objective functions into a scalar function as follows:

$$f(x) = w_1 f_1(x) + w_2 f_2(x) + \cdots + w_m f_m(x),$$

where f(x) is the fitness of x and w1, ···, wm are nonnegative weights determined as follows:

$$w_i = \frac{random_i}{random_1 + random_2 + \cdots + random_m}, \quad i = 1, \ldots, m,$$

where random1, random2, ···, randomm are nonnegative random integers.

Algorithm 2. Update of the local set.

Algorithm 3. Update of the external set.

Then rank them on the basis of the fitness values.

Step 3: Selection: Selection is an operator to select two parent strings for generating new strings (i.e., offspring). In the selection, a selection probability Ps(xi) of each string x based on the linear scaling is defined by the roulette wheel selection as follows:

where fmin(xψ) is the minimum fitness value (i.e., the worst fitness value) in the current population ψ. According to this selection probability, a pair of parent strings are selected from the current population ψ.
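A sketch of Steps 2 and 3 under the reconstructed formulas (random nonnegative weights normalized to sum to one, then roulette-wheel selection scaled against the worst fitness in the population); it is illustrative and not the authors' code:

```python
import numpy as np

def random_weights(m: int) -> np.ndarray:
    """Nonnegative random weights w1..wm normalized to sum to one."""
    r = np.random.randint(1, 100, size=m).astype(float)   # nonnegative random integers
    return r / r.sum()

def roulette_select(fitness: np.ndarray, rng=np.random) -> int:
    """Roulette-wheel selection with linear scaling against the worst fitness."""
    scaled = fitness - fitness.min()          # the worst individual gets probability ~0
    if scaled.sum() == 0:                     # degenerate case: all fitness values equal
        return int(rng.randint(len(fitness)))
    p = scaled / scaled.sum()
    return int(rng.choice(len(fitness), p=p))
```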

Step 4: Crossover: Crossover is an operator to generate new strings (i.e., offspring) from parent strings according to the crossover probability (Pc). Various crossover operators have been proposed for GAs [25,26]. In the proposed approach we implement single point crossover.

Step 5: Mutation: Mutation is an operator that changes elements in a string generated by a crossover operator. Such a mutation operator can be viewed as a transition from a current solution to a neighboring solution in local search algorithms [27], applied according to the mutation probability (Pm). In the proposed approach, we mutate each variable in a string with probability Pm by adding small random values according to the equations below:

where r is a random number, tmax is the maximum number of generations, and β is a positive constant chosen arbitrarily.
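The mutation equations themselves are not reproduced in this text; the sketch below assumes a non-uniform perturbation whose magnitude shrinks as the generation counter t approaches tmax, which matches the parameters r, tmax and β named above, but the decay law (1 − t/tmax)^β and the bound handling are assumptions for illustration:

```python
import numpy as np

def mutate(x, t, t_max, lower, upper, p_m=0.05, beta=3.0, rng=np.random):
    """Mutate each gene with probability p_m by a small random perturbation.

    The perturbation shrinks as t approaches t_max; the decay law is an
    assumed form, since the paper's exact equations are not reproduced here.
    """
    y = np.array(x, dtype=float)
    for i in range(len(y)):
        if rng.rand() < p_m:
            r = rng.rand()                                   # random number in [0, 1]
            step = (upper[i] - lower[i]) * r * (1 - t / t_max) ** beta
            y[i] += step if rng.rand() < 0.5 else -step      # perturb up or down
            y[i] = min(max(y[i], lower[i]), upper[i])        # keep within bounds
    return y
```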

Step 6: Elitist strategy (Replacing): Randomly remove a string from the current population and add the best string in the previous population to the current one.

Step 7: Repairing: Repair the infeasible individuals of the population so that they become feasible. The idea of this technique is to separate the feasible individuals in a population from the infeasible ones by repairing the latter. This approach co-evolves the population of infeasible individuals until they become feasible; the reader is referred to [28].

4.3. The Local Search

The local search phase is implemented as a dynamic version of the pattern search technique, a popular paradigm in Direct Search (DS) methods. DS methods can be used to solve constrained optimization problems and, as opposed to more standard optimization methods, are often called derivative-free because they do not require any information about the gradient or higher derivatives of the objective function to search for an optimal solution. Therefore, direct search methods may very well be used to solve noncontinuous, nondifferentiable and multimodal (i.e., multiple local optima) optimization problems. This study examines the usefulness of a dynamic version of the pattern search technique to improve the solution quality of MOPs. The search procedure looks for the best solution "near" another solution by repeatedly making small changes to a starting solution until no further improved solutions can be found.

The local search is started by loading the Pareto solutions of the given MOP. At iteration t we have an iterate $x^{t}$, and the change Δ(t) applied to each dimension (i = 1, 2, ···, n) can be implemented as

where r is a random number in the range [0, 1]; T is the maximum number of iterations; and R is the search radius.

Let $e_i$ (i = 1, 2, ···, n) denote the standard unit basis vectors. We successively examine the points $x^{+} = x^{t} \pm \Delta(t) e_i$ (i = 1, 2, ···, n) until we find an $x^{+}$ that improves at least one objective. If no such $x^{+}$ is found, we set $x^{+} = x^{t}$. The Pareto solutions are then updated with the nondominated points, and the dominated ones are removed. This situation is represented in Figure 4 for the case in R². Without loss of generality, the elements discussed above are synthesized to form the proposed approach. The pseudo code of the proposed approach and the local search phase is shown in Figure 5.

This local search scheme is applied to all nondominated solutions in $E^{t}$ to approach the true Pareto-optimal front and to explore the less-crowded areas of the external archive.
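A minimal sketch of one local move, assuming the step size Δ(t) shrinks with the iteration counter as implied by r, T and R above (the exact expression is not reproduced in this text): each coordinate direction ±e_i is probed, and a probe is accepted as soon as it improves at least one objective.

```python
import numpy as np

def pattern_search_step(x, objectives, t, T, R, rng=np.random):
    """One dynamic pattern-search move around a nondominated solution x.

    objectives : function returning the objective vector (to be minimized) of a point
    The step size R * r * (1 - t / T) is an assumed realization of the
    shrinking Delta(t) described in the text, not the paper's exact formula.
    """
    fx = objectives(x)
    delta = R * rng.rand() * (1.0 - t / T)        # step size shrinks as t approaches T
    for i in range(len(x)):                       # probe each coordinate direction e_i
        for sign in (+1.0, -1.0):
            x_probe = np.array(x, dtype=float)
            x_probe[i] += sign * delta            # x+ = x_t +/- Delta(t) * e_i
            f_probe = objectives(x_probe)
            if any(a < b for a, b in zip(f_probe, fx)):
                return x_probe                    # accept: improves at least one objective
    return x                                      # no improving probe found: keep x_t
```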

5. Implementation of the Proposed Approach

The described methodology is applied to the standard IEEE 30-bus 6-generator test system to investigate the

Figure 4. Mechanism of dynamic pattern search in R².

Figure 5. The pseudo code of the proposed algorithm.

effectiveness of the proposed approach. The detailed data for this system are given in [29]. Table 1 lists the parameter setting used in the algorithm for all runs.

6. Results and Discussions

Figure 6 shows the well-distributed Pareto-optimal nondominated solutions obtained by the proposed algorithm after 200 generations.

It is clear from the figure that the Pareto-optimal set is well distributed and has satisfactory diversity characteristics. This is useful in giving reasonable freedom in choosing compensation devices from the available commercial devices.

From the Pareto-optimal set, Table 2 shows the values of f1(·), f2(·) and f3(·) obtained by the proposed algorithm in the three cases 1, 2 and 3, corresponding to the minimum amount of reactive compensation devices, the minimum active power losses and the minimum average voltage deviation, respectively.

The results offered in this section show that the proposed approach is able to obtain an approximation of the Pareto set. The proposed approach increases the solution quality by combining the merits of two heuristic algorithms. However, the goal is not only to increase the solution quality, but also to generate a representative subset that maintains the characteristics of the general set and takes solution diversity into consideration.

On the other hand, classical techniques aim to produce a single point at each iteration of problem solving by converting the multiobjective problem into a single-objective problem through a linear combination of the different objectives as a weighted sum. On the contrary, the proposed approach is a heuristics-based multiobjective optimization technique; since it uses a population of solutions in its search, multiple Pareto-optimal solutions can, in principle, be found in one single run.

Table 1. Parameter settings of the proposed approach.

Table 2. Values of f1(·), f2(·) and f3(·) in the three cases.

Figure 6. Pareto-optimal front obtained by the proposed approach.

Another advantage is that the ability of the proposed approach to handle complex problems of realistic dimensions has been confirmed, owing to the simplicity of the procedure.

7. Conclusion

The reactive power compensation problem, formulated as a multiobjective optimization problem with the competing objectives of the amount of reactive compensation devices, active power losses and average voltage deviation, is solved in this paper using a combination of GA and PSO. Our approach integrates the merits of both GA and PSO. In order to improve the solution quality, we implement an LS technique as a neighborhood search engine that explores the less-crowded areas of the current archive to possibly obtain more nondominated solutions. The algorithm maintains an external archive to keep track of all the feasible solutions found during the optimization and therefore places no restriction on the number of Pareto-optimal solutions found. The proposed approach is carried out on the standard IEEE 30-bus 6-generator test system. The results demonstrate the capability of the proposed approach to generate true and well-distributed Pareto-optimal nondominated solutions of the multiobjective RPC, and confirm its potential to solve the multiobjective RPC problem.

REFERENCES

  1. P. R. Sujin, T. Ruban Deva Prakash and M. Mary Linda, “Particle Swarm Optimization Based Reactive Power Optimization,” Journal of Computing, Vol. 2, No. 1, 2010, pp. 73-78.
  2. F. Yoshikazu, “Comparative Studies of Particle Swarm Optimization Techniques for Reactive Power Allocation Planning in Power Systems,” IEEE Transactions on Power and Energy, Vol. 124, No. 5, 2004, pp. 690-696. doi:10.1541/ieejpes.124.690
  3. J. Lin, X. D. Wang and W. H. Zheng, “Reactive Power Optimization Based on Adaptive Immune Algorithm,” International Journal of Emerging Electric Power Systems, Vol. 10, No. 4, 2008, pp. 1499-1503.
  4. P. Kundur, “Power System Stability and Control,” Mc Graw-Hill, New York, 1993.
  5. X. Zhang, W. Chen, C. Dai and A. Guo, “Self-Adaptive Differential Evolution Algorithm for Reactive Power Optimization,” Fourth International Conference on Natural Computation, Jinan, 18-20 October 2008, pp. 560-564. doi:10.1109/ICNC.2008.355
  6. S. K. Nandha Kumar and P. Renuga, “Reactive Power Planning Using Real GA Comparison with Evolutionary Programming,” International Journal of Recent Trends in Engineering, Vol. 1, No. 3, 2009.
  7. D. Thukaram and G. Yesuratnam, “Optimal Reactive Power Dispatch in a Large Power System with AC-DC and FACTS Controllers,” IET Generation, Transmission & Distribution, Vol. 2, No. 1, 2008, pp. 71-81. doi:10.1049/iet-gtd:20070163
  8. J. Carlisle, A. El-Keib, D. Boyd and K. Nolan, “A Review of Capacitor Placement Techniques on Distribution Feeders,” Proceedings of the 29th Southeastern Symposium on System Theory, Cookeville, 9-11 March 1997, pp. 359-365. doi:10.1109/SSST.1997.581664
  9. M. Delfanti, G. Granelli, P. Marannino and M. Montagna, “Optimal Capacitor Placement Using Deterministic and Genetic Algorithms,” Proceedings of the 21st 1999 IEEE International Conference, Vol. 15, No. 3, 2000, pp. 1041- 1046.
  10. J. Lin and X. Wang, “Reactive Power Optimization Based on Adaptive Immune Algorithm,” International Journal of Emerging Electric Power Systems, Vol. 10, No. 4. 2009, pp. 1499-1503. doi:10.2202/1553-779X.2079
  11. Y. Liu, L. Ma and J. Zhang, “Reactive Power Optimization by GA/SA/TS Combined Algorithms,” International Journal of Electrical Power & Energy Systems, Vol. 24, No. 9, 2002, pp. 765-769. doi:10.1016/S0142-0615(01)00087-4
  12. J. Lu, L. Zhang, H. Yang and J. Du, “Improved Strategy of Particle Swarm Optimisation Algorithm for Reactive Power Optimisation,” International Journal of Bio-Inspired Computation, Vol. 2, No. 1, 2010, pp. 27-33.
  13. J. T. Ma and L. L. Lai, “Evolutionary Programming Approach to Reactive Power Planning,” IEE Proceedings: Generation, Transmission and Distribution, Vol. 143, No. 4, 1996, pp. 365-370. doi:10.1049/ip-gtd:19960296
  14. K. Mahadevan and P. S. Kannan, “Comprehensive Learning Particle Swarm Optimization for Reactive Power Dispatch,” Applied Soft Computing Archive, Vol. 10, No. 2, 2010, pp. 641-652. doi:10.1016/j.asoc.2009.08.038
  15. M. A. Abido, “Multiobjective Evolutionary Algorithms for Electric Power Dispatch Problem,” IEEE Transactions on Evolutionary Computation, Vol. 10, No. 3, 2006, pp. 315-329. doi:10.1109/TEVC.2005.857073
  16. C. H. Antunesa, D. F. Pires, C. Barrico, Á. Gomesa and A. G. Martinsa, “A Multi-Objective Evolutionary Algorithm for Reactive Power Compensation in Distribution Networks,” Applied Energy, Vol. 86, No. 7-8, 2009, pp. 977- 984. doi:10.1016/j.apenergy.2008.09.008
  17. M. Azzam and A. A. Mousa, “Using Genetic Algorithm and TOPSIS Technique for Multiobjective Reactive Power Compensation,” Electric Power Systems Research, Vol. 80, No. 6, 2010, pp. 675-681. doi:10.1016/j.epsr.2009.10.033
  18. K. Sindhya, A. Sinha, K. Deb and K. Miettinen, “Local Search Based Evolutionary Multiobjective Optimization Algorithm for Constrained and Unconstrained Problems,” 2009 IEEE Congress on Evolutionary Computation, Trondheim, 18-21 May 2009, pp. 2919-2926. doi:10.1109/CEC.2009.4983310
  19. K. Harada, J. Sakuma, I. Ono and S. Kobayashi, “Local Search for Multiobjective Optimization: Pareto Descent Method,” Proceedings of the 18th Symposium on Decentralized Autonomous Systems of the Society of Instrument and Control Engineers, 2006, pp. 345-350.
  20. R. Hooke and T. A. Jeeves, “Direct Search Solution of Numerical and Statistical Problems,” Journal of the ACM, Vol. 8, No. 2, 1961, pp. 212-229. doi:10.1145/321062.321069
  21. H. Dommel and W. Tinney, “Optimal Power Flow Solutions,” IEEE Transactions on Power Apparatus and Systems, Vol. 87, No. 10, 1968, pp. 1866-1876. doi:10.1109/TPAS.1968.292150
  22. J. D. Glover and M. Sarma, “Power System Analysis and Design,” PWS Publishing Company, Boston, 1994.
  23. K. Deb, “Multi-Objective Optimization Using Evolutionary Algorithms,” Wiley, New York, 2001.
  24. K. Miettinen, “Non-Linear Multiobjective Optimization,” Kluwer Academic Publisher, Dordrecht, 2002.
  25. D. E. Goldberg, “Genetic Algorithms in Search, Optimization & Machine Learning,” Addison-Wesley, Reading, 1989.
  26. M. Tanaka, “GA-based Decision Support System for MultiCriteria Optimization,” IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, 1995, pp. 1556- 1561.
  27. M. F. Bramlette, “Initialization Mutation and Selection Methods in Genetic Algorithms for Functions Optimization,” Proceedings of the 4th International Conference on Genetic Algorithms, San Diego, 13-16 July 1991, pp. 100- 107.
  28. A. A. Mousa and I. M. El-Desoky, “GENLS: Co-Evolutionary Algorithm for Nonlinear System of Equations,” Applied Mathematics and Computation, Vol. 197, No. 2, 2008, pp. 633-642. doi:10.1016/j.amc.2007.08.088
  29. R. Zimmerman and D. Gan, “MATPOWER: A Matlab Power System Simulation Package,” 1997. http://vivo.cornell.edu/display/AI-33123309569