American Journal of Operations Research
Vol. 3, No. 3 (2013), Article ID: 31681, 6 pages. DOI: 10.4236/ajor.2013.33036

A General Class of Convexification Transformation for the Noninferior Frontier of a Multiobjective Program

Tao Li1, Yanjun Wang2, Zhian Liang2

1Department of Statistics, Missouri University of Science & Technology, Rolla, USA

2Department of Mathematics, Shanghai University of Finance and Economics, Shanghai, China

Email: tl6gc@mail.mst.edu

Copyright © 2013 Tao Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received March 7, 2012; revised April 17, 2012; accepted May 2, 2012

Keywords: Noninferior Frontier; Convexification; Weighting Method; Multiobjective Optimization

ABSTRACT

A general class of convexification transformations is proposed to convexify the noninferior frontier of a multiobjective program. We prove that under certain assumptions the noninferior frontier can be convexified completely or partly after the transformation, so that the weighting method can then be applied to identify the noninferior solutions. Numerical experiments are given to validate our results.

1. Introduction

In this paper, we consider the following multiobjective optimization problem:

(P)  min f(x) = (f_1(x), ∙∙∙, f_k(x))
     s.t. g_j(x) ≤ 0, j = 1, ∙∙∙, m,

where k > 1 and x ∈ R^n is a decision vector, f_i: R^n → R, i = 1, ∙∙∙, k, are objective functions and g_j: R^n → R, j = 1, ∙∙∙, m, are constraint functions.

Let

X = {x ∈ R^n : g_j(x) ≤ 0, j = 1, ∙∙∙, m}

be the feasible region in the decision space and

Y = {f(x) : x ∈ X}

be the feasible region in the objective space. A feasible solution x* ∈ X of Problem (P) is called a noninferior solution if there is no other feasible solution x ∈ X such that f_i(x) ≤ f_i(x*), i = 1, ∙∙∙, k, with at least one strict inequality.

Let X* be the set of all the noninferior solutions in the decision space and

Y* = {f(x) : x ∈ X*}

be the set of noninferior points in the objective space; Y* is also called the noninferior frontier of Problem (P).

An important problem in multiobjective optimization is to find the set of noninferior solutions. Many methods intended to identify noninferior solutions have been proposed, such as the weighting method, the weighting p-norm method, the ∞-norm method and the ε-constraint method. Among these, the weighting method is one of the simplest. In fact, the weighting method transforms the multiple objectives into the following weighted sum by introducing a weighting vector w = (w_1, ∙∙∙, w_k):

(SP)  min Σ_{i=1}^k w_i f_i(x)
      s.t. x ∈ X,

where w_i ≥ 0, i = 1, ∙∙∙, k, and Σ_{i=1}^k w_i = 1. It is well known that an optimal solution of Problem (SP) with w > 0 is a noninferior solution of Problem (P). Let x* be an optimal solution of Problem (SP) with w > 0; then we have

Σ_{i=1}^k w_i f_i(x*) ≤ Σ_{i=1}^k w_i f_i(x) for all x ∈ X.

By the definition of supporting hyperplane, we know that there then exists a supporting hyperplane of Y at f(x*), namely {y : Σ_{i=1}^k w_i y_i = Σ_{i=1}^k w_i f(x*)_i}. Thus the existence of a supporting hyperplane at a noninferior point in the objective space, with all the noninferior points on one side, is a necessary condition for successfully finding the noninferior solutions of Problem (P) by the weighting method. However, in many nonconvex circumstances a supporting hyperplane does not exist at some points of the noninferior frontier. Therefore, the weighting method fails to identify all the noninferior solutions in these cases.
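Both phenomena can be reproduced numerically. The sketch below applies the weighting method to two assumed biobjective frontiers, one convex and one nonconvex; on the convex frontier a weight sweep recovers many noninferior points, while on the nonconvex frontier the weighted sum only ever attains its minimum at the two endpoints. The frontiers and grids here are invented for illustration.

```python
# Weighting method on two assumed biobjective frontiers (illustrative only):
#   convex:    y2 = (1 - y1)^2  -> a weight sweep recovers many points
#   nonconvex: y2 = 1 - y1^2    -> only the two endpoints are ever found
convex = [(i / 100, (1 - i / 100) ** 2) for i in range(101)]
nonconvex = [(i / 100, 1 - (i / 100) ** 2) for i in range(101)]

def weight_sweep(points, steps=20):
    """Minimizers of w*y1 + (1 - w)*y2 as the weight w sweeps (0, 1)."""
    return {min(points, key=lambda y: (j / steps) * y[0] + (1 - j / steps) * y[1])
            for j in range(1, steps)}

print(len(weight_sweep(convex)))   # many distinct noninferior points
print(weight_sweep(nonconvex))     # only the extreme points (0,1) and (1,0)
```

The weighted sum is linear in y, so along a concave frontier it is minimized only at extreme points; this is precisely the failure mode the convexification transformation is designed to remove.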

Recently, convexification methods have been successfully adopted in many areas of optimization. For example, in [1-3] a series of convexification methods are proposed to process some classes of global optimization problems with certain monotone properties, and in [4,5] convexification schemes are presented to convexify the perturbation function and the Lagrangian function in dual search methods for nonlinear programming. In [6,7], general convexification and concavification schemes are proposed for certain classes of monotone and nonmonotone optimization problems; the schemes convert the problems into classes of concave and reverse convex programming problems with better structures. Li et al. derived a general convexification method for nonconvex minimization problems in [8]; their method transforms the problems into convex ones so that local techniques can be used to solve the new problems. A reciprocal transformation for the convexification of posynomial programs with positive variables is presented in [9]. In [10-12], the p-power and exponential generating methods were used as special convexification transformations, and it was proved that under certain assumptions, by applying the p-power or exponential generating method to the objective functions, the noninferior frontier of a multiobjective problem can be convexified completely or partly, so that the weighting method can then be applied to identify the noninferior solutions. However, due to the various forms of objective functions, the p-power might not always serve as an efficient transformation. Thus the range of choices of such transformations should be enlarged.

The main purpose of this paper is to present a class of general convexification transformation methods to convexify the noninferior frontier of a multiobjective problem. Compared with previous works, the major contributions of our paper are as follows:

• We prove that the noninferior frontier can be convexified completely or partially by applying a more general transformation under certain assumptions. We also generalize the results in [10].

• Our transformation further expands the class of multiobjective programs that the weighting method can solve, by designing the transformation function based on the objective functions. Our transformation can also handle practical problems more efficiently than the one in [10].

The paper is organized as follows: in Sections 2 and 3, a general form of transformation is proposed and we prove that under some assumptions the noninferior frontier can be convexified completely or partly. In Section 4 some examples are given to validate our results. Conclusions are given in the last section.

2. Convexification of Noninferior Frontier

As in [10], the noninferior frontier of Problem (P) can be expressed as

Y* = {(y_1, ∙∙∙, y_k) ∈ Y : y_k = φ(y_1, ∙∙∙, y_{k−1})},

where φ is a nonincreasing function of (y_1, ∙∙∙, y_{k−1}).

Consider the following transformation of the objective functions:

f_i(x) → h_p(f_i(x)), i = 1, ∙∙∙, k,

where h_p(∙) is a strictly increasing function and p > 0 is a parameter. Then Problem (P) is transformed into a new problem which reads

(CP)  min (h_p(f_1(x)), ∙∙∙, h_p(f_k(x)))
      s.t. x ∈ X.

Theorem 2.1 Let X*_CP be the set of all the noninferior solutions of Problem (CP). Then X*_CP = X*.

Proof. For any point x* ∈ X*_CP, if x* ∉ X*, then there exists a feasible solution x ∈ X such that

f_i(x) ≤ f_i(x*), i = 1, ∙∙∙, k, with at least one strict inequality. Without loss of generality, we assume f_k(x) < f_k(x*). Since h_p is strictly increasing, we must have h_p(f_i(x)) ≤ h_p(f_i(x*)), i = 1, ∙∙∙, k, with strict inequality holding at i = k. Then x* is not a noninferior solution of Problem (CP), which contradicts the assumption. So x* ∈ X*. Therefore, X*_CP ⊆ X*. Similarly, it can be shown that X* ⊆ X*_CP.
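The invariance argument above can be checked numerically on a finite set of objective vectors: applying any strictly increasing function componentwise (here e^{pt}, one admissible choice) leaves the set of noninferior points unchanged. The point set and the value of p below are invented for illustration.

```python
import math

def pareto_indices(points):
    """Indices of nondominated (noninferior) points for minimization."""
    idx = []
    for i, a in enumerate(points):
        # a is dominated if some other point is <= in every coordinate
        # and differs in at least one (hence strictly better somewhere).
        dominated = any(
            all(b[d] <= a[d] for d in range(len(a))) and b != a
            for j, b in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
p = 2.0
transformed = [tuple(math.exp(p * t) for t in y) for y in pts]

# A strictly increasing transformation preserves the noninferior set.
print(pareto_indices(pts) == pareto_indices(transformed))  # True
```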

The noninferior frontier of Problem (CP) can be expressed as

Y*_CP = {(h_p(y_1), ∙∙∙, h_p(y_k)) : (y_1, ∙∙∙, y_k) ∈ Y*}.

Let ∇²φ(y) be the Hessian matrix of φ and λ(y) = λ_min(∇²φ(y)), where λ_min(∙) is the minimum eigenvalue of a matrix. Further, we make the following assumptions:

I) φ is a twice continuously differentiable function, ∂φ(y)/∂y_i is negative for all y, i = 1, ∙∙∙, k − 1, and ∂φ(y)/∂y_i ≤ −δ, where δ is positive.

II) h_p is twice differentiable, h_p'(t) > 0, and

h_p''(t)/h_p'(t) ≥ ρ(p), with ρ(p) → +∞ as p → +∞,

where ρ(p) is a positive number.

III) Y* is a compact set.

Then we have the following theorem:

Theorem 2.2 Suppose that assumptions I)-III) are satisfied. Then there exists a p_1 > 0 such that when p ≥ p_1 the noninferior frontier of Problem (CP) is convex.

Proof. Let u_i = h_p(y_i), i = 1, 2, ∙∙∙, k − 1, write the noninferior frontier of Problem (CP) as u_k = ψ(u_1, ∙∙∙, u_{k−1}), and let ∇²ψ denote the Hessian matrix of ψ.

Let J = diag(h_p'(y_1), ∙∙∙, h_p'(y_{k−1})) and

B_p(y) = ∇²φ(y) + (h_p''(φ(y))/h_p'(φ(y))) ∇φ(y)∇φ(y)^T − diag( (h_p''(y_i)/h_p'(y_i)) ∂φ(y)/∂y_i ). (2.1)

Then a direct computation gives

∇²ψ = h_p'(φ(y)) J^{−1} B_p(y) J^{−1}. (2.2)

From (2.1) and (2.2), we have that ∇²ψ is a positive definite matrix if and only if B_p(y) is a positive definite matrix.

We assume that λ(y) < 0 for some y; otherwise B_p(y) is already positive definite.

Further, let

λ̄ = min { d^T ∇²φ(y) d : y ∈ Y*, d ∈ S },

where S denotes the unit sphere in R^{k−1}.

By II) and III), λ̄ is finite and the diagonal entries −(h_p''(y_i)/h_p'(y_i)) ∂φ(y)/∂y_i in (2.1) are at least ρ(p)δ, where δ is the constant in assumption I) and ρ(p) is the quantity in assumption II).

Therefore, there exists p_1 such that for any p ≥ p_1, we have ρ(p)δ > −λ̄, and hence for any unit vector d,

d^T B_p(y) d ≥ λ̄ + ρ(p)δ > 0.

Then B_p(y) is a positive definite matrix when p ≥ p_1. Therefore, the noninferior frontier of Problem (CP) is convex when p ≥ p_1 and we complete the proof.
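The conclusion of the theorem can be observed numerically. The sketch below uses an assumed frontier y_2 = 1 − y_1² restricted to y_1 ∈ [0.2, 1] (so the frontier function is strictly decreasing, in the spirit of assumption I)) together with the exponential choice h_p(t) = e^{pt} of the exponential generating method [11,12]; discrete convexity of the transformed frontier is checked via monotonicity of consecutive chord slopes. The frontier, grid, and the values p = 1 and p = 6 are invented for illustration.

```python
import math

def slopes(points):
    """Chord slopes between consecutive points (u, v) of a curve."""
    return [(v2 - v1) / (u2 - u1)
            for (u1, v1), (u2, v2) in zip(points, points[1:])]

def is_convex(points, tol=1e-9):
    """A curve is (discretely) convex iff its chord slopes are nondecreasing."""
    s = slopes(points)
    return all(s2 >= s1 - tol for s1, s2 in zip(s, s[1:]))

# Assumed frontier y2 = 1 - y1^2 on y1 in [0.2, 1], transformed by e^{p t}.
ts = [0.2 + 0.8 * i / 100 for i in range(101)]

def transformed(p):
    return [(math.exp(p * t), math.exp(p * (1 - t * t))) for t in ts]

# Untransformed-scale curve (p = 1) is nonconvex; a larger p convexifies it.
print(is_convex(transformed(1)), is_convex(transformed(6)))  # False True
```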

Corollary 2.1 Suppose that assumptions I)-III) are satisfied. Then there is a p_1 such that the supporting hyperplane exists everywhere on the noninferior frontier of Problem (CP) when p ≥ p_1.

Proof. By Theorem 2.2, there exists a p_1 such that the noninferior frontier is convex everywhere when p ≥ p_1, and then by [13] we know that the supporting hyperplane exists everywhere.

Further by the discussion above, we can obtain the following corollary:

Corollary 2.2 Suppose that assumptions I)-III) hold. Then x* is a noninferior solution of Problem (P) if and only if there exists a weighting vector w > 0 such that x* is an optimal solution of the weighted sum problem for Problem (CP) with w when p ≥ p_1.

Remark 1. If we set h_p(t) = t^p, it is easy to verify that h_p satisfies assumption II). The primal problem can then be transformed into the following problem:

min ( [f_1(x)]^p, ∙∙∙, [f_k(x)]^p )
s.t. x ∈ X. (2.3)

Note that (2.3) is exactly the transformation proposed in [10].
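The effect of a transformation like (2.3) can be sketched numerically on an assumed nonconvex frontier with positive objective values: without the transformation, the weighted sum only ever attains its minimum at an endpoint of the frontier, while after raising both objectives to the power p = 10 an interior noninferior point becomes a weighted-sum minimizer. All problem data here are invented for demonstration.

```python
# Assumed nonconvex frontier with positive objectives:
#   y1 = 1 + t,  y2 = 2 - t^2,  t in [0, 1].
ts = [i / 200 for i in range(201)]

def argmin_t(p, w=0.5):
    """t minimizing the weighted sum of the p-powered objectives."""
    return min(ts, key=lambda t: w * (1 + t) ** p + (1 - w) * (2 - t * t) ** p)

t_raw = argmin_t(p=1)    # no transformation: the minimum sits at an endpoint
t_pow = argmin_t(p=10)   # p-power: an interior noninferior point is found
print(t_raw, 0.0 < t_pow < 1.0)
```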

Remark 2. We can derive other types of transformations by constructing specific function forms satisfying assumption II). For example, the exponential function h_p(t) = e^{pt} used in the exponential generating method of [11,12] can also serve as h_p.

3. Local Convexification of the Noninferior Frontier

In some cases, assumptions I)-III) may not hold simultaneously. For instance, φ might not be twice continuously differentiable, or one of the bounds in assumption I) may fail. In these circumstances it might be difficult to convexify the noninferior frontier globally; however, we can achieve local convexification of the noninferior frontier.

Theorem 3.1 Assume that assumptions I)-III) hold in a compact neighborhood N of a noninferior point. Then there exists a p_1 such that the Hessian matrix of the transformed frontier function is positive definite at the corresponding points in the u-space when p ≥ p_1.

Proof. Theorem 3.1 can be proved in the same way as Theorem 2.2.

Then, similar to Corollaries 2.1 and 2.2, we have the following corollaries:

Corollary 3.1 Suppose that assumptions I)-III) hold in a compact neighborhood N. Then there exists a p_1 such that a supporting hyperplane exists on the confined noninferior frontier sector corresponding to N, for all interior points of N, when p ≥ p_1.

Corollary 3.2 Assume that a noninferior solution x* has a compact neighborhood N in which assumptions I)-III) hold. Then, for p large enough, there exists a weighting vector w > 0 such that x* is a local optimal solution of the weighted sum problem for Problem (CP) with w.

4. Numerical Experiments

Example 1: Consider the following example:

The noninferior frontier of Problem (1) is

Figure 1 depicts the feasible region of Problem (1). From Figure 1 we know that the noninferior frontier of Problem (1) is not convex; thus the weighting method cannot identify all the noninferior solutions in this case.

With a suitable choice of h_p and p, Problem (1) can be transformed into the following problem:

Then the noninferior frontier of Problem (2) is

Figure 2 depicts the noninferior frontier of Problem (2). Clearly, the noninferior frontier of Problem (2) is convex, so we can identify all the noninferior solutions of Problem (2) by applying the weighting method. It is also worth pointing out that, compared to the transformation we used, the p-power transformation may produce a much more complicated problem.

Figure 1. Feasible region and noninferior frontier of Problem (1).

Figure 2. Feasible region and noninferior frontier of Problem (2).

Example 2: Consider the following example:

Figure 3 depicts the feasible region of Problem (3). From Figure 3 we know that the noninferior frontier of Problem (3) is nonconvex. Note that the noninferior frontier of Problem (3) can be expressed in terms of a frontier function φ, and it is evident that φ contradicts assumption I). Thus we might not be able to convexify the noninferior frontier of Problem (3) completely by using our transformation method. However, we can achieve local convexification.

With a suitable choice of h_p and p, Problem (3) can be transformed into the following problem:

Obviously, as shown in Figure 4, the noninferior frontier of Problem (4) is locally convex, so we can identify part of the noninferior solutions of Problem (4) by applying the weighting method.

Figure 3. Feasible region and noninferior frontier of Problem (3).

Figure 4. Feasible region and noninferior frontier of Problem (4).

5. Conclusion

As one of the simplest methods to identify the noninferior solutions of multiobjective problems, the weighting method fails in many nonconvex cases. In this paper, a general class of convexification transformations is presented and we prove that the transformation can convexify the noninferior frontier completely or partly under certain assumptions, so that the weighting method can then be used successfully. This paper greatly expands the class of multiobjective programs that the weighting method can cope with and provides more specific transformations to tackle practical problems efficiently.

6. Acknowledgements

The research was supported by the National Natural Science Foundation of China under Project 10261005 and partially supported by the National Natural Science Foundation of China under Project 10601030.

REFERENCES

  1. D. Li, X. L. Sun, M. P. Biswal and F. Gao, “Convexification, Concavification and Monotonization in Global Optimization,” Annals of Operations Research, Vol. 105, No. 1-4, 2001, pp. 213-226. doi:10.1023/A:1013313901854
  2. X. L. Sun, K. McKinnon and D. Li, “A Convexification Method for a Class of Global Optimization Problem with Application to Reliability Optimization,” Journal of Global Optimization, Vol. 21, No. 2, 2001, pp. 185-199. doi:10.1023/A:1011962605464
  3. Z. Y. Wu, F. S. Bai and L. S. Zhang, “Convexification and Concavification for a General Class of Global Optimization Problems,” Journal of Global Optimization, Vol. 31, No. 1, 2005, pp. 45-60. doi:10.1007/s10898-004-0569-6
  4. D. Li, “Zero Duality Gap for a Class of Nonconvex Optimization Problems,” Journal of Optimization Theory and Applications, Vol. 85, No. 2, 1995, pp. 309-324. doi:10.1007/BF02192229
  5. D. Li and X. L. Sun, “Local Convexification of Lagrangian Function in Nonconvex Optimization,” Journal of Optimization Theory and Applications, Vol. 104, No. 1, 2000, pp. 109-120. doi:10.1023/A:1004628822745
  6. Z. Y. Wu, F. S. Bai and L. S. Zhang, “Convexification and Concavification for a General Class of Global Optimization Problems,” Journal of Global Optimization, Vol. 31, No. 1, 2005, pp. 45-60. doi:10.1007/s10898-004-0569-6
  7. Z. Y. Wu, H. W. J. Lee and X. M. Yang, “A Class of Convexification and Concavification Methods for NonMonotone Optimization Problems,” Optimization, Vol. 54, No. 6, 2005, pp. 605-625. doi:10.1080/02331930500342807
  8. T. Li, Y. J. Wang, Z. Liang and P. M. Pardalos, “Local Saddle Point and a Class of Convexification Methods for Nonconvex Optimization Problems,” Journal of Global Optimization, Vol. 38, No. 3, 2007, pp. 405-419. doi:10.1007/s10898-006-9090-4
  9. H. L. Li, J. F. Tsai and C. A. Floudas, “Convex Underestimation for Posynomial Functions of Positive Variables,” Optimization Letter, Vol. 2, No. 3, 2008, pp. 333-340. doi:10.1007/s11590-007-0061-6
  10. D. Li, “Convexification of Noninferior Frontier,” Journal of Optimization Theory and Applications, Vol. 88, No. 1, 1996, pp. 177-196. doi:10.1007/BF02192028
  11. C. J. Goh and X. Q. Yang, “Convexification of a Noninferior Frontier,” Journal of Optimization Theory and Applications, Vol. 97, No. 3, 1998, pp. 759-768. doi:10.1023/A:1022654528902
  12. D. Li and M. P. Biswal, “Exponential Transformation in Convexifying a Noninferior Frontier and Exponential Generating Method,” Journal of Optimization Theory and Applications, Vol. 99, No. 1, 1998, pp. 183-199. doi:10.1023/A:1021708412776
  13. M. S. Bazaraa, H. D. Sherali and C. M. Shetty, “Nonlinear Programming, Theory and Algorithms,” 2nd Edition, John Wiley & Sons, Inc., New York, 1993.