Open Journal of Optimization
Vol.04 No.01(2015), Article ID:54745,10 pages
10.4236/ojop.2015.41002

A New Filled Function with One Parameter to Solve Global Optimization

Hongwei Lin1, Huirong Li2

1Department of Basic Courses, Jinling Institute of Technology, Nanjing, China

2Department of Mathematic and Computation Application, Shangluo University, Shangluo, China

Email: linhongwei_jt@hotmail.com, lihuirong417@163.com

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 26 February 2015; accepted 16 March 2015; published 18 March 2015

ABSTRACT

In this paper, a new filled function with only one parameter is proposed. The main advantages of the new filled function are that it can be analyzed easily and that it can be approximated uniformly by a continuously differentiable function. Thus, a minimizer of the proposed filled function can be obtained easily by a local optimization algorithm. The obtained minimizer is taken as the initial point to minimize the objective function, and a better minimizer is found. By repeating the above process, a global minimizer is found at last. The results of numerical experiments show that the new filled function method is effective.

Keywords:

Global Optimization, Filled Function Method, Smoothing Technique, Global Minimizer, Local Minimizer

1. Introduction

Global optimization methods have wide applications in many fields, such as engineering, finance, management, decision science and so on. The task of global optimization is to find a solution with the smallest or largest objective function value. In this paper, we mainly discuss methods for finding a global minimizer of the objective function. For problems with only one minimizer, many local optimization methods are available, for instance, the steepest descent method, the Newton method, the trust region method and so on. However, many problems have multiple local minimizers, and most of the existing local methods are not applicable to them.

The difficulty for global optimization is to escape from the current local minimizer to a better one. One of the most efficient methods to deal with this issue is the filled function method, which was proposed by Ge [1] [2]. The generic framework of the filled function method can be described as follows:

1) An arbitrary point is taken as an initial point, the objective function is minimized with a local optimization method, and a minimizer of the objective function is obtained.

2) Based on the current minimizer of the objective function, a filled function is constructed, and a point near the current minimizer is used as an initial point to minimize the filled function. As a result, a minimizer of the filled function is found; it falls into a better region (called a basin) of the original objective function.

3) The minimizer of the filled function obtained in step 2 is taken as an initial point to minimize the objective function, and a better minimizer of the objective function is found.

4) By repeating steps 2 and 3, the remaining local minimizers are gradually eliminated, and a global minimizer is found at last (a code sketch of this loop is given below).
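To make the framework concrete, the following minimal Python sketch illustrates the loop in steps 1)-4). The names f, local_minimize, make_filled_function and delta are placeholders assumed for this sketch; any local method and any concrete filled function (such as the one proposed in Section 3) can be plugged in.

import numpy as np
from scipy.optimize import minimize

def local_minimize(func, x0, bounds):
    # Any local method can be plugged in here; L-BFGS-B is used only for illustration.
    res = minimize(func, x0, method="L-BFGS-B", bounds=bounds)
    return res.x, res.fun

def filled_function_framework(f, x0, bounds, make_filled_function, delta=1e-2, max_cycles=20):
    # Generic filled function loop (steps 1-4); make_filled_function is a placeholder
    # for the specific filled function being used.
    x_star, f_star = local_minimize(f, x0, bounds)            # step 1: local minimizer of f
    for _ in range(max_cycles):
        P = make_filled_function(f, x_star)                   # step 2: filled function at x_star
        y0 = x_star + delta * np.random.uniform(-1.0, 1.0, size=len(x_star))
        y_star, _ = local_minimize(P, y0, bounds)             # minimizer of the filled function
        x_new, f_new = local_minimize(f, y_star, bounds)      # step 3: restart the search on f
        if f_new < f_star - 1e-12:                            # step 4: repeat while improving
            x_star, f_star = x_new, f_new
        else:
            break
    return x_star, f_star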

Although the filled function method is an efficient global optimization method and different filled functions have been proposed, the existing filled functions have some drawbacks, such as containing more than one parameter to be controlled, being sensitive to the parameters, and being ill-conditioned. For example, the filled functions proposed in [1] [2] contain exponential or logarithmic terms which can cause ill-conditioning; the filled functions proposed in [3] [4] are non-smooth, so the usual classical local optimization methods cannot be applied to them; the filled functions proposed in [1] [5] [6] have more than one parameter, which are difficult to adjust. To overcome these shortcomings, a new filled function with only one parameter is presented. Although it is not a smooth function, it can be approximated uniformly by a continuously differentiable function, so its minimizer can be obtained easily. Based on this new filled function, a new filled function method is proposed.

The remainder of this paper is organized as follows. Related concepts of the filled function method are given in Section 2. In Section 3, a new filled function is proposed and its properties are analyzed; furthermore, an approximate function of the proposed filled function is given, and a method for avoiding numerical difficulty is presented. In Section 4, a new filled function method is proposed and numerical experiments on several test problems are reported. Finally, some concluding remarks are drawn in Section 5.

2. The Related Concepts

Consider the following global optimization problem with a box constraint:

(P)   min f(x)   s.t.   x ∈ Ω,

where f(x) is a twice continuously differentiable function on Ω and Ω ⊂ R^n is a box. Generally, we assume that f(x) has only a finite number of minimizers in Ω, and the set of the minimizers is denoted as M ⊂ Ω (k = |M| is the number of minimizers of f(x)).

Some useful concepts and notations are introduced as follows:

x_1*: a local minimizer of f(x) on Ω found so far;

S_1: the set {x ∈ Ω : f(x) ≥ f(x_1*)};

S_2: the set {x ∈ Ω : f(x) < f(x_1*)};

: A constant satisfying;

: A constant satisfying.

Assumption. All of the local minimizers of f(x) fall into the interior of Ω.

Definition 1. The basin [7] of f(x) at an isolated minimizer x_1* is a connected domain B_1* which contains x_1*, in which the steepest descent sequences of f(x) starting from any point in B_1* converge to x_1*, while the minimization sequences of f(x) starting from any point outside of B_1* do not converge to x_1*. Correspondingly, if y* is an isolated maximizer of f(x), the basin of -f(x) at y* is defined as the hill of f(x) at y*.

It is obvious that if x_1* and x_2* are two different isolated minimizers of f(x), then their basins do not intersect. If x_2* is another minimizer of f(x) and f(x_2*) < f(x_1*) (or f(x_2*) > f(x_1*)), then the basin of f(x) at x_2* is said to be lower (or higher) than the basin of f(x) at x_1*.

The first concept of the filled function was introduced by Ge [1] [2]. Since then, various filled functions have been proposed (e.g., [8] [9]). A definition of the filled function which is easier to understand was presented in [8]; it can be described as follows.


Definition 2. A function P(x, x_1*) is said to be a filled function of f(x) at x_1* if it satisfies the following properties:

1) x_1* is a strict local maximizer of P(x, x_1*) over Ω;

2) P(x, x_1*) has no stationary point in the set S_1 \ {x_1*};

3) if the set S_2 is not empty, then there exists a point x' ∈ S_2 such that x' is a local minimizer of P(x, x_1*).

Based on Definition 2, we present a new filled function with only one parameter in Section 3.

3. A New Filled Function and Its Properties

Assume that a local minimizer x_1* of f(x) has been found so far. Consider the following function P(x, x_1*, A) for problem (P):

(1)

where A > 0 is the parameter.

The following theorems show that the function defined by (1) is a filled function satisfying Definition 2.

Theorem 1. Suppose x_1* is a local minimizer of f(x) and P(x, x_1*, A) is defined by (1); then x_1* is a strict local maximizer of P(x, x_1*, A) for all A > 0.

Proof. Since x_1* is a local minimizer of f(x), there exists a neighborhood N(x_1*, σ) of x_1* such that f(x) ≥ f(x_1*) for all x ∈ N(x_1*, σ). For all x ∈ N(x_1*, σ), x ≠ x_1*, one has

(2)

Thus, x_1* is a strict local maximizer of P(x, x_1*, A). □

Theorem 2. Suppose x_1* is a local minimizer of f(x) and x is a point in the set S_1 \ {x_1*}; then x is not a stationary point of P(x, x_1*, A) for all A > 0.

Proof. Due to x ∈ S_1 \ {x_1*}, one has f(x) ≥ f(x_1*) and x ≠ x_1*, so the gradient of P(x, x_1*, A) does not vanish at x. Namely, x is not a stationary point of P(x, x_1*, A). □

Theorem 3. Suppose x_1* is a local minimizer of f(x) but not a global minimizer of f(x), which means that the set S_2 is not empty; then there exists a point x' ∈ S_2 such that x' is a local minimizer of P(x, x_1*, A) when A > 0 is suitably chosen.

Proof. Since x_1* is a local minimizer of f(x) and x_1* is not a global minimizer of f(x), there exists another local minimizer x_2* of f(x) such that f(x_2*) < f(x_1*).

By the definition of S_2 and the continuity of f(x), there exists a point z ∈ S_2 in the rectangular area determined by x_1* and x_2*, such that

(3)

So

(4)

Since x_2* is a local minimizer of f(x), there exists a point in this area at which P(x, x_1*, A) takes a value smaller than the values it takes near the boundary of the area; hence there exists a point x' in the area which is a minimizer of P(x, x_1*, A).

Furthermore, by Theorem 2, one has x' ∉ S_1 \ {x_1*}, so x' ∈ S_2. Consequently, Theorem 3 is true. □

From Theorems 1, 2 and 3, we know that if there is a local minimizer of f(x) better than x_1*, then there exists a point x' which is a minimizer of P(x, x_1*, A), and x' falls into a lower basin of f(x). This means that if one minimizes f(x) with the initial point x', a better minimizer of f(x) will be found.

We can find that if x_1* is not a global minimizer of the objective function, then P(x, x_1*, A) is non-differentiable at some points in Ω. Gradient-based algorithms for local optimization cannot be used directly to obtain a minimizer of P(x, x_1*, A). A smoothing technique [10] is therefore employed to approximate P(x, x_1*, A), as follows.

Let the smoothing approximation P_p(x, x_1*, A) of P(x, x_1*, A) be defined by

(5)

where p is a positive parameter. It is obvious that P_p(x, x_1*, A) is differentiable. Further, a direct estimate shows that the inequality

(6)

holds. From the above discussion, we can see that P_p(x, x_1*, A) converges uniformly to P(x, x_1*, A) as p tends to infinity. Therefore, by selecting a sufficiently large p, the minimization of P(x, x_1*, A) can be replaced by

(7)

In order to obtain a more precise minimizer of P(x, x_1*, A) by solving (7), the parameter p in (5) should be large enough. However, if the value of p is too large, it will cause overflow of the function values. To prevent this situation, a shrinkage factor r is introduced into (5) as follows.

First of all, it is necessary to estimate and fix a sufficiently large p which guarantees that P_p(x, x_1*, A) approximates P(x, x_1*, A) accurately. Then, in order to prevent numerical difficulties, a suitable shrinkage factor r can be taken. Finally, P_p(x, x_1*, A) can be rewritten as

(8)

By doing so, the above shortcoming can be overcome.
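As an illustration of the general idea behind (5)-(8), the sketch below shows one standard way to smooth a non-differentiable max operator (in the spirit of [10]) together with the usual shift that prevents overflow for large p. The log-sum-exp formula and the function names are assumptions of this sketch, not the paper's (5) and (8).

import numpy as np

def smooth_max(values, p):
    # Log-sum-exp smoothing of max(values): differentiable, converges to max(values)
    # as p -> infinity, but exp(p * v) overflows when p * v is large.
    values = np.asarray(values, dtype=float)
    return np.log(np.sum(np.exp(p * values))) / p

def smooth_max_stable(values, p):
    # The same approximation rewritten with a shift (analogous in spirit to introducing
    # a factor into (8)): subtracting the largest term keeps every exponent non-positive,
    # so no overflow occurs even for very large p.
    values = np.asarray(values, dtype=float)
    m = np.max(values)
    return m + np.log(np.sum(np.exp(p * (values - m)))) / p

# Example: for large p, smooth_max_stable([3.0, 1.0, 2.5], p=1e4) returns approximately 3.0,
# while the naive smooth_max would overflow.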

4. A New Filled Function Algorithm and Numerical Experiments

4.1. A New Filled Function Algorithm

Based on the theorems and discussions in the previous section, a new filled function algorithm for finding a global minimizer of f(x) is proposed below, together with some explanations on the algorithm. The details are as follows (a structural code sketch is given after the steps).

Step 1 (Initialization). Choose the initial value of A (e.g., A = 1), a shrinkage factor r, a lower bound of A (denote it as Lba), a sufficiently large p, and an initial point x_0 ∈ Ω. Some directions d_1, d_2, ..., d_{2n} are also given in advance, where d_i and d_{n+i} are the positive and negative unit coordinate directions, i = 1, 2, ..., n, and n is the dimension of the optimization problem. Set k = 1.

Step 2. Minimize f(x) starting from the initial point x_0 and obtain a local minimizer x_1* of f(x).

Step 3. Construct the filled function P_p(x, x_1*, A) as in (8). Set i = 1.

Step 4. If i ≤ 2n, then set the initial point y_0 = x_1* + δ d_i (where δ > 0 is a small step) and go to Step 5; otherwise, go to Step 6.

Step 5. Use y_0 as an initial point for the minimization of P_p(x, x_1*, A). If the minimization sequence of P_p(x, x_1*, A) goes out of Ω, set i = i + 1 and go back to Step 4; otherwise, a minimizer x' of P_p(x, x_1*, A) will be found in Ω; set x_0 = x', k = k + 1, and go back to Step 2.

Step 6. If A ≤ Lba, the algorithm stops and x_1* is taken as the global minimizer of f(x); otherwise, decrease A and go to Step 3.
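A compact Python rendering of Steps 1-6 is sketched below. It is structural only: filled_function stands for the smoothed filled function of (8), local_minimize for a local solver, and A0, Lba, shrink, delta mirror the quantities chosen in Step 1; their default values and the rule used to decrease A are assumptions of this sketch.

import numpy as np

def coordinate_directions(n):
    # d_1, ..., d_2n: the positive and negative unit coordinate directions assumed in Step 1.
    eye = np.eye(n)
    return np.vstack([eye, -eye])

def filled_function_algorithm(f, x0, bounds, filled_function, local_minimize,
                              A0=1.0, Lba=1e-6, shrink=0.1, delta=1e-2):
    # Structural sketch of Steps 1-6. filled_function(f, x_star, A) must return the
    # (smoothed) filled function of (8) as a callable; it is a placeholder here.
    lower = np.array([b[0] for b in bounds])
    upper = np.array([b[1] for b in bounds])
    directions = coordinate_directions(len(x0))                 # Step 1
    x_star, f_star = local_minimize(f, x0, bounds)              # Step 2
    A = A0
    while A > Lba:                                              # Step 6: stop once A reaches Lba
        P = filled_function(f, x_star, A)                       # Step 3
        improved = False
        for d in directions:                                    # Step 4: try each direction
            y0 = x_star + delta * d
            y_star, _ = local_minimize(P, y0, None)             # Step 5: minimize the filled function
            if np.any(y_star < lower) or np.any(y_star > upper):
                continue                                        # sequence left the box: next direction
            x_new, f_new = local_minimize(f, y_star, bounds)    # back to Step 2 from the new point
            if f_new < f_star:
                x_star, f_star = x_new, f_new
                improved = True
                break
        if not improved:
            A *= shrink                                         # decrease A (one plausible rule), Step 3 again
    return x_star, f_star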

Before going to the experiments, we give some explanations on the above filled function algorithm.

1) In the minimization of f(x) and P_p(x, x_1*, A), we need to select a local optimization method first. In the proposed algorithm, the trust region method is employed (a sketch of such a local solver is given after these explanations).

2) In Step 4, the small step δ needs to be selected carefully; in our algorithm, δ is selected to guarantee that it is greater than a given threshold.

3) Step 5 means that if a local minimizer x' of P_p(x, x_1*, A) is found in Ω with f(x') < f(x_1*), then a better local minimizer of f(x) will be obtained by using x' as the initial point to minimize f(x).
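Explanation 1) mentions a trust region method for the local minimizations. One minimal way to obtain such a local solver with SciPy is sketched below; the choice of the trust-constr method and of finite-difference derivatives is an assumption of this sketch, not a description of the authors' implementation. It can also serve as the local_minimize placeholder used in the sketches above.

from scipy.optimize import minimize

def local_minimize(func, x0, bounds):
    # Trust region local search; bounds may be a sequence of (low, high) pairs or None.
    # Gradients are approximated by finite differences when not supplied.
    res = minimize(func, x0, method="trust-constr", bounds=bounds)
    return res.x, res.fun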

4.2. Numerical Experiment

In this section, the proposed algorithm is tested on some benchmark problems taken from the literature.

Problem 1. (Two-dimensional function)

where. The global minimum solution satisfies for all.

Problem 2. (Three-hump back camel function)

The global minimum solution is x* = (0, 0)^T and f(x*) = 0.
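A commonly used form of the three-hump back camel function is sketched below for reference; whether the paper uses exactly this scaling and search box is an assumption.

def three_hump_camel(x):
    # Standard three-hump back camel function; global minimum f = 0 at (0, 0).
    x1, x2 = x
    return 2.0 * x1 ** 2 - 1.05 * x1 ** 4 + x1 ** 6 / 6.0 + x1 * x2 + x2 ** 2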

Problem 3. (Six-hump back camel function)

The global minimum solution is x* = (0.0898, -0.7126)^T or x* = (-0.0898, 0.7126)^T, and f(x*) = -1.0316.
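Similarly, the standard six-hump back camel function is given below as a sketch; the search box used in the paper is not assumed here.

def six_hump_camel(x):
    # Standard six-hump back camel function; global minima f ≈ -1.0316 at
    # approximately (0.0898, -0.7126) and (-0.0898, 0.7126).
    x1, x2 = x
    return (4.0 - 2.1 * x1 ** 2 + x1 ** 4 / 3.0) * x1 ** 2 + x1 * x2 + (-4.0 + 4.0 * x2 ** 2) * x2 ** 2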

Problem 4. (Treccani function)

The global minimum solutions are x* = (0, 0)^T and x* = (-2, 0)^T, and f(x*) = 0.
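The standard Treccani function is sketched below; both global minimizers give the value 0.

def treccani(x):
    # Standard Treccani function; global minima f = 0 at (0, 0) and (-2, 0).
    x1, x2 = x
    return x1 ** 4 + 4.0 * x1 ** 3 + 4.0 * x1 ** 2 + x2 ** 2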

Problem 5. (Goldstein and Price function function)

where the two factors of f(x) are those of the standard Goldstein and Price function (a sketch is given below). The global minimum solution is x* = (0, -1)^T and f(x*) = 3.
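The Goldstein and Price function in its usual form is sketched below; the paper's version is assumed, not confirmed, to coincide with this standard definition.

def goldstein_price(x):
    # Standard Goldstein and Price function; global minimum f = 3 at (0, -1).
    x1, x2 = x
    g = 1.0 + (x1 + x2 + 1.0) ** 2 * (19.0 - 14.0 * x1 + 3.0 * x1 ** 2 - 14.0 * x2 + 6.0 * x1 * x2 + 3.0 * x2 ** 2)
    h = 30.0 + (2.0 * x1 - 3.0 * x2) ** 2 * (18.0 - 32.0 * x1 + 12.0 * x1 ** 2 + 48.0 * x2 - 36.0 * x1 * x2 + 27.0 * x2 ** 2)
    return g * h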

Problem 6. (Two-dimensional Shubert function)

This function has 760 minimizers in total. The global minimum value is approximately -186.7309.
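The standard two-dimensional Shubert function is sketched below; on the usual box [-10, 10]^2 it has 760 local minimizers, consistent with the count quoted above, and its global minimum value is about -186.7309.

import numpy as np

def shubert_2d(x):
    # Standard two-dimensional Shubert function: a product of two cosine sums.
    i = np.arange(1, 6)
    s1 = np.sum(i * np.cos((i + 1) * x[0] + i))
    s2 = np.sum(i * np.cos((i + 1) * x[1] + i))
    return s1 * s2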

Problem 7. (Hartman function)

where c_i is the i-th element of the vector c, and a_ij and p_ij are the elements at the i-th row and the j-th column of the matrices A and P, respectively (see the sketch below for commonly used data).

For n = 3, the known global minimizer so far is approximately x* = (0.1146, 0.5556, 0.8525)^T.

For n = 6, the known global minimizer so far is approximately x* = (0.2017, 0.1500, 0.4769, 0.2753, 0.3117, 0.6573)^T.
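The Hartman family has the standard form f(x) = -sum_{i=1}^{4} c_i exp(-sum_{j=1}^{n} a_ij (x_j - p_ij)^2). The sketch below uses the widely tabulated data for n = 3; these values (and the analogous 4 x 6 matrices for n = 6, not reproduced here) are assumed, not confirmed, to be the data the paper refers to.

import numpy as np

# Widely used data for the three-dimensional Hartman function (n = 3);
# assumed, not confirmed, to match the paper's matrices.
C3 = np.array([1.0, 1.2, 3.0, 3.2])
A3 = np.array([[3.0, 10.0, 30.0],
               [0.1, 10.0, 35.0],
               [3.0, 10.0, 30.0],
               [0.1, 10.0, 35.0]])
P3 = np.array([[0.36890, 0.11700, 0.26730],
               [0.46990, 0.43870, 0.74700],
               [0.10910, 0.87320, 0.55470],
               [0.03815, 0.57430, 0.88280]])

def hartman3(x):
    # Global minimum approximately -3.8628 near (0.1146, 0.5556, 0.8525) on [0, 1]^3.
    x = np.asarray(x, dtype=float)
    return -np.sum(C3 * np.exp(-np.sum(A3 * (x - P3) ** 2, axis=1)))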

Generally, in order to illustrate the performance of the filled function method, it is necessary to record the total number of function evaluations of f(x) and P_p(x, x_1*, A) until the algorithm terminates. The numerical results of the proposed algorithm for the above 7 problems are summarized in Table 1.

Additionally, the proposed algorithm is compared with the algorithm presented in [11]. The minimizers obtained by the two algorithms are recorded in Tables 2-14 for all test problems.

Some symbols used in the following tables are given first.

x_0: the initial point, which satisfies x_0 ∈ Ω;

x*: an approximate global minimizer obtained by the proposed algorithm;

Nf: the total number of function evaluations of f(x) and P_p(x, x_1*, A) until the algorithm terminates.

The initial value of A is taken as 1 for all problems.

k: the iteration number in finding the k-th local minimizer of the objective function;

x_k*: the k-th local minimizer;

f(x_k*): the function value of x_k*;

: The algorithm proposed in this paper;

Table 1. Numerical results of all testing problems.

Table 2. Computational results for problem 1 with.

Table 3. Computational results for problem 1 with.

Table 4. Computational results for problem 1 with.

Table 5. Computational results for problem 2 with initial point.

Table 6. Computational results for problem 2 with initial point.

Table 7. Computational results for problem 3 with initial point.

: The algorithm proposed in reference [11] .

According to Tables 2-13, we find that our algorithm is effective, and that it is affected by the initial value of A and the selection of Lba. The larger the initial value of A, the fewer local minimizers will be found and

Table 8. Computational results for problem 3 with initial point.

Table 9. Computational results for problem 3 with initial point.

Table 10. Computational results for problem 4.

Table 11. Computational results for problem 5.

Table 12. Computational results for problem 6.

Table 13. Computational results for problem 7.

Table 14. Computational results for problem 7 with.

also the lower the computation cost will be; meanwhile, if the function value of the current local minimizer is close to that of the global minimizer, then a sufficiently small A is necessary, while a relatively large initial value of A will increase the number of iterations. Therefore, the initial value of A and the lower bound Lba need to be selected carefully. The selection of Lba ensures the accuracy of the global minimizer, so a sufficiently small Lba and an appropriately small initial value of A should be selected, or the factor by which A is decreased in the algorithm should be taken as small as possible.

5. Concluding Remarks

The filled function method is an efficient approach to global optimization. The existing filled functions have some drawbacks; for example, some are non-differentiable, some contain more than one adjustable parameter, and some contain ill-conditioned terms. These drawbacks may result in failure or difficulty of the algorithm in searching for the global optimal solution. In order to overcome these shortcomings, a new filled function with only one parameter is proposed in this paper. Although the proposed filled function is non-differentiable at some points, it can be approximated uniformly by a continuously differentiable function, and the inherent shortcomings of the approximate function can be eliminated by a simple treatment. The effectiveness of the new filled function method is demonstrated by numerical experiments on some test optimization problems.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11161001, No. 61203372, No. 61402350, No. 61375121), the Research Foundation of Jinling Institute of Technology (No. jit-b-201314 and No. jit-n-201309) and the Ningxia Foundation for Key Disciplines of Computational Mathematics.

References

  1. Ge, R.P. (1990) A Filled Function Method for Finding a Global Minimizer of a Function of Several Variables. Mathematical Programming, 46, 191-204. http://dx.doi.org/10.1007/BF01585737
  2. Ge, R.P. (1987) The Theory of Filled Function Method for Finding Global Minimizers of Nonlinearly Constrained Minimization Problems. Journal of Computational Mathematics, 5, 1-9.
  3. Wang, X.L. and Zhou, G.B. (2006) A New Filled Function for Unconstrained Global Optimization. Applied Mathematics and Computation, 174, 419-429. http://dx.doi.org/10.1016/j.amc.2005.05.009
  4. Wang, Y.-J. and Zhang, J.-S. (2008) A New Constructing Auxiliary Function Method for Global Optimization. Mathematical and Computer Modelling, 47, 1396-1410. http://dx.doi.org/10.1016/j.mcm.2007.08.007
  5. Liu, X. and Xu, W. (2004) A New Filled Function Applied to Global Optimization. Computers & Operations Research, 31, 61-80. http://dx.doi.org/10.1016/S0305-0548(02)00154-5
  6. Liu, X. (2004) The Barrier Attribute of Filled Functions. Applied Mathematics and Computation, 149, 641-649. http://dx.doi.org/10.1016/S0096-3003(03)00168-1
  7. Dixon, L.C.W., Gomulka, J. and Herson, S.E. (1976) Reflections on Global Optimization Problem, Optimization in Action. Academic Press, New York, 398-435.
  8. Liang, Y.M., Zhang, L.S., Li, M.M. and Han, B.S. (2007) A Filled Function Method for Global Optimization. Journal of Computational and Applied Mathematics, 205, 16-31. http://dx.doi.org/10.1016/j.cam.2006.04.038
  9. Ling, A.-F., Xu, C.-X. and Xu, F.-M. (2008) A Discrete Filled Function Algorithm for Approximate Global Solutions of Max-Cut Problems. Journal of Computational and Applied Mathematics, 220, 643-660. http://dx.doi.org/10.1016/j.cam.2007.09.012
  10. Polak, E., Royset, J.O. and Womersley, R.S. (2003) Algorithms with Adaptive Smoothing for Finite Minimax Problems. Journal of Optimization Theory and Applications, 119, 459-484. http://dx.doi.org/10.1023/B:JOTA.0000006685.60019.3e
  11. Zhang, L.-S., Ng, C.-K., Li, D. and Tian, W.-W. (2004) A New Filled Function Method for Global Optimization. Journal of Global Optimization, 28, 17-43. http://dx.doi.org/10.1023/B:JOGO.0000006653.60256.f6