Open Access Library Journal
Vol. 04, No. 05 (2017), Article ID: 76017, 18 pages
10.4236/oalib.1103511

Global Optimization of Multivariate Holderian Functions Using Overestimators

Amine Yahyaoui1, Hamadi Ammar2

1Faculty of Sciences of Bizerte, Carthage University, Tunis, Tunisia

2Faculty of Economics and Management of Nabeul, Carthage University, Tunis, Tunisia

Copyright © 2017 by authors and Open Access Library Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 9, 2017; Accepted: May 2, 2017; Published: May 5, 2017

ABSTRACT

This paper deals with the global optimization of Hölderian functions of several variables. An algorithm using a sequence of overestimators of a single-variable objective function is developed, converging to the maximum. Then, by the use of α-dense curves, we show how to implement this algorithm in a multidimensional optimization problem. Finally, we validate the algorithm by testing it on some test functions.

Subject Areas:

Numerical Mathematics, Operational Research

Keywords:

Global Optimization, Branch and Bound, Holderian Functions, Alienor Method

1. Introduction

When modeling economic, biological, and other systems, we often meet situations where we must minimize or maximize multivariate objective functions [1] . Generally, we seek global optima. It is well known that global optimization algorithms are scarce compared to local optimization ones [2] , and when they exist, their implementation is not obvious. This difficulty increases as the number of decision variables grows.

In this paper, the objective function is deterministic and available, and the variables are bounded, but derivative information is either unavailable or expensive to exploit.

When derivative information is not required, many authors have used the regularity of the objective function to develop algorithms yielding the optimum [3] [4] .

Shubert [5] , Ammar and Cherruault [6] [7] , Evtushenko, Malkova and Stanevichyus [8] , Gergel and Sergeyev [9] , and Sergeyev and Kvasov [10] considered the case where the objective function is Lipschitzian. They developed methods generating sequences converging to the optimum. Other authors, Gourdin, Jaumard and Ellaia [11] , Lera and Sergeyev [12] , and Rahal and Ziadi [13] , treated the case of Hölderian functions, where constructing a sequence converging to the optimum is far less straightforward.

In this paper, we are also interested in Hölderian objective functions. We develop a technique to solve a multidimensional optimization problem.

In the first part of this paper we define a sequence of overestimators of a single-variable function. We then describe a global optimization algorithm suited to such functions that converges to the global maximum. After that, we show how to approximate the maximum of a Hölderian function of several variables. To do this, we introduce, in the second part, the Lissajous α-dense curve: the tool that allows one to pass from a multidimensional optimization problem to a one-dimensional one. We end the paper by validating our algorithm on some test functions [14] .

2. Optimization of a Single-Variable Hölderian Function

Let us consider a single-variable Hölderian function $f$ defined on an interval $[a, b]$.

Let us denote by (P) the following one-dimensional optimization problem:

$$(P)\quad \begin{cases} \text{Maximize } f(x) \\ x \in [a, b] \end{cases}$$

In fact, we will not search for the exact solution $x_{opt}$ of this problem; we only want an approximate value. To achieve this, we develop a global optimization algorithm suited to Hölderian functions that gives an approximation $x^*$ such that $|f(x_{opt}) - f(x^*)| \le \varepsilon_0$, where $\varepsilon_0 > 0$ is the required accuracy, chosen a priori. This algorithm is based on a sequence of overestimators.

2.1. Overestimator of a Hölderian Function

Definition 1. A real multivariate function $f$ is said to be Hölderian on a set $X \subset \mathbb{R}^n$ if there exist $k > 0$ and $\beta > 1$ such that for all $x \in X$ and $y \in X$:

$$|f(x) - f(y)| \le k\,\|x - y\|^{1/\beta}.$$

Definition 2. A function $F$ is said to be an overestimator of a function $f$ on a set $X$ if:

$$\forall x \in X, \quad F(x) \ge f(x).$$

Proposition 1. Let $f$ be a Hölderian univariate function defined on the interval $[a, b]$ and let $y \in [a, b]$. The function $H$ defined on $[a, b]$ by:

$$H(x) = f(y) + k\,|x - y|^{1/\beta}, \quad x \in [a, b],$$

is an overestimator of $f$ on $[a, b]$.

Proof. Fix $y \in [a, b]$. As $f$ is Hölderian, for all $x \in [a, b]$:

$$|f(x) - f(y)| \le k\,|x - y|^{1/\beta}.$$

This yields $f(x) - f(y) \le k\,|x - y|^{1/\beta}$.

Hence, $f(x) \le f(y) + k\,|x - y|^{1/\beta} = H(x)$.
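As an illustration, here is a minimal Python sketch of the overestimator of Proposition 1; the helper name and the numerical check are ours, not part of the paper:

```python
import math

# Sketch of the overestimator of Proposition 1: H(x) = f(y) + k*|x - y|**(1/beta),
# where k and beta are the Hölder constants of f from Definition 1.
def overestimator(f, y, k, beta):
    fy = f(y)  # f is evaluated once, at the anchor point y
    return lambda x: fy + k * abs(x - y) ** (1.0 / beta)

# Quick check: f(x) = sqrt(1 - x^2) is (2, 1/2)-Hölderian on [0.25, 0.5],
# so H must dominate f on the whole interval, whatever anchor y we pick.
f = lambda x: math.sqrt(1.0 - x * x)
H = overestimator(f, y=0.3, k=2.0, beta=2.0)
assert all(H(x) >= f(x) for x in [0.25 + 0.01 * i for i in range(26)])
```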

2.2. Sequence of Overestimators

Let $x_0 = a$ be the left bound of $[a, b]$ and set:

$$F_0(x) = f(x_0) + k\,|x - x_0|^{1/\beta} = G_0(x),$$

an overestimator of $f$ whose representative curve is given in Figure 1.

The curve has one vertex $V_1(u_1 = b, H_1)$ such that:

$$H_1 = f(x_0) + k\,|b - x_0|^{1/\beta} = \max_{x \in [a, b]} G_0(x).$$

Let $x_1 = \arg\max G_0(x)$; here, $x_1 = b$. From the point with coordinates $(x_1, f(x_1))$, we plot the curve of the overestimator:

$$F_1(x) = f(x_1) + k\,|x - x_1|^{1/\beta},$$

as shown in Figure 2. We set:

Figure 1. Curve of F0.

Figure 2. Curve of G1.

$$G_1(x) = \min(F_1(x), G_0(x))$$

and $x_2 = \arg\max G_1(x)$.

The vertex $V_1$ is no longer a vertex of the curve of $G_1$. It is replaced by a new vertex $V'_1(x_2, G_1(x_2))$, given by the intersection of the curves of $G_0$ and $F_1$.

The real $x_2$ is the solution of the following equation:

$$f(x_0) + k\,|x_2 - x_0|^{1/\beta} = f(x_1) + k\,|x_2 - x_1|^{1/\beta}.$$

In general, it is not easy to obtain the exact solution of the above equation. For this reason, we introduce an auxiliary function $O_1$ that provides a value close to $x_2$, which we also denote by $x_2$.

The point $V'_1(x_2, G_1(x_2))$ lies between two neighbouring points on the curve of $G_1$: one on its left, $L(x_0, f(x_0))$, and one on its right, $R(x_1, f(x_1))$. We denote by:

$$M_1 = \max(f(x_0), f(x_1)), \quad m_1 = \min(f(x_0), f(x_1)),$$

$$\mu_1 = \arg\max(f(x_0), f(x_1)), \quad \rho_1 = \arg\min(f(x_0), f(x_1)).$$

According to Figure 2, in this case $M_1 = f(x_1)$ and $m_1 = f(x_0)$. Let $z_1 \in [x_0, x_1]$ be such that $G_1(z_1) = M_1$ and $z_1 \ne \mu_1$. This yields:

$$z_1 = x_0 + \left(\frac{M_1 - m_1}{k}\right)^{\beta}.$$

From the point $L_1(z_1, M_1)$, we plot the representative curve of:

$$O_1(x) = \min\left(M_1 + k\,|x - z_1|^{1/\beta},\; M_1 + k\,|x - \mu_1|^{1/\beta}\right)\mathbf{1}_{J_1}(x),$$

where $J_1 = [\min(z_1, \mu_1), \max(z_1, \mu_1)]$.

The new function:

$$\tilde G_1(x) = G_1(x)\,\mathbf{1}_{x \notin J_1} + O_1(x)$$

is also an overestimator.

The curve of $\tilde G_1$ has a new vertex, given by the curve of $O_1$ and denoted $V''(u_1, H_1)$, such that:

$$\begin{cases} H_1 = M_1 + k\,|u_1 - z_1|^{1/\beta} \\ u_1 = \dfrac{z_1 + x_1}{2} \end{cases}$$

as indicated in Figure 3.

Hence, the vertex $V'_1$ is replaced by $V''$. The new vertex of the curve of $\tilde G_1$, now denoted $V_1$, is identified by $(u_1, L_1, R_1, H_1)$ with $H_1 = M_1 + k\,|u_1 - z_1|^{1/\beta}$, where $L_1(z_1, M_1)$ and $R_1(x_1, M_1)$ are, respectively, the left and the right neighbours of $V_1$.

Let $x_2 = \arg\max \tilde G_1(x)$; here, $x_2 = u_1$. Let:

$$F_2(x) = f(x_2) + k\,|x - x_2|^{1/\beta},$$

$$G_2(x) = \min(F_2(x), \tilde G_1(x)).$$

Suppose $f$ has been evaluated at $x_0, x_1, \dots, x_n$ and denote by:

$$\varphi_n = \max(f(x_0), f(x_1), \dots, f(x_n)).$$

The curve of $G_n$ has $n$ vertexes $V_0, V_1, \dots, V_n$, each identified by:

$$\begin{cases} \text{its left neighbour } L_i(l_i, M_i) \\ \text{its right neighbour } R_i(r_i, M_i) \\ \text{its abscissa } u_i = \dfrac{l_i + r_i}{2} \\ \text{its ordinate } H_i = M_i + k\,|u_i - r_i|^{1/\beta} \end{cases} \quad \text{for } i \text{ from } 1 \text{ to } n. \tag{1}$$
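In code, a vertex record can be built directly from its two neighbours following (1); this is a sketch with illustrative names of ours:

```python
# Sketch: build the vertex record (1) of the curve of G_n from its neighbours.
# Both neighbours L_i(l_i, M_i) and R_i(r_i, M_i) sit at the common height M_i.
def vertex_from_neighbours(l_i, r_i, M_i, k, beta):
    u_i = 0.5 * (l_i + r_i)                         # abscissa: midpoint
    H_i = M_i + k * abs(u_i - r_i) ** (1.0 / beta)  # ordinate of the vertex
    return {"u": u_i, "H": H_i, "l": l_i, "r": r_i, "M": M_i}
```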

Let $x_{n+1} = \arg\max G_n(x) = u_n$, $y_{n+1} = f(x_{n+1})$, $\varphi_{n+1} = \max(\varphi_n, f(x_{n+1}))$ and:

$$F_{n+1}(x) = f(x_{n+1}) + k\,|x - x_{n+1}|^{1/\beta},$$

$$G_{n+1}(x) = \min(G_n(x), F_{n+1}(x)).$$

On the curve of the overestimator $G_{n+1}$, $V_n$ is no longer a summit, but two new vertexes appear on either side of $V_n$, denoted $V_L$ (on the left) and $V_R$ (on the right), as indicated in Figure 4. Set:

Figure 3. Curve of $\tilde G_1$.

Figure 4. Curve of $G_{n+1}$.

$$M_{n+1} = \max(f(x_{n+1}), M_n), \quad m_{n+1} = \min(f(x_{n+1}), M_n).$$

For both vertexes $V_L$ and $V_R$, it is not obvious to calculate the coordinates exactly. Each of them will be replaced, respectively, by $V'_L$ and $V'_R$, proceeding as for $G_1$.

Let us determine the coordinates of the vertex $V'_L$.

Set $\mu^L_{n+1} = \arg(M_{n+1})$ and $\rho^L_{n+1} = \arg(m_{n+1})$, which belong to the set $\{x_{n+1}, l_n\}$, where $l_n$ is the abscissa of the left neighbour $L_n$ of $V_n$, as mentioned in (1).

Let $z_L \in [x_{n+1}, \mu^L_{n+1}]$ be such that $G_{n+1}(z_L) = M_{n+1}$ and $z_L \ne \mu^L_{n+1}$. This involves:

$$z_L = \rho^L_{n+1} + \operatorname{sign}(\mu^L_{n+1} - \rho^L_{n+1})\left(\frac{M_{n+1} - m_{n+1}}{k}\right)^{\beta}$$

where $\operatorname{sign}(x) = \begin{cases} +1, & \text{if } x \ge 0 \\ -1, & \text{if } x < 0 \end{cases}$. Let:

$$O^L_{n+1}(x) = \min\left(M_{n+1} + k\,|x - z_L|^{1/\beta},\; M_{n+1} + k\,|x - \mu^L_{n+1}|^{1/\beta}\right)\mathbf{1}_{J^L_{n+1}}(x),$$

where $J^L_{n+1} = [\min(z_L, \mu^L_{n+1}), \max(z_L, \mu^L_{n+1})]$.

The part of the curve of $G_{n+1}$ relative to the interval $[\min(z_L, \mu^L_{n+1}), \max(z_L, \mu^L_{n+1})]$ is replaced by that of $O^L_{n+1}$. This makes a new vertex $V'_L$ appear, replacing $V_L$, such that:

・ Its abscissa is $u_L = \frac{1}{2}(z_L + \mu^L_{n+1})$

・ Its ordinate is $H_L = M_{n+1} + k\,|\mu^L_{n+1} - u_L|^{1/\beta}$

Furthermore, $V'_L$ will be identified by its neighbours:

・ The left neighbour $L(\min(z_L, \mu^L_{n+1}), M_{n+1})$

・ The right neighbour $R(\max(z_L, \mu^L_{n+1}), M_{n+1})$

Those values will be saved in memory.

Similarly, $V_R$ will be replaced by $V'_R$, determined as follows:

Set $\mu^R_{n+1} = \arg(M_{n+1})$ and $\rho^R_{n+1} = \arg(m_{n+1})$, which belong to the set $\{x_{n+1}, r_n\}$, where $r_n$ is the abscissa of the right neighbour of $V_n$. Let:

$$z_R = \rho^R_{n+1} + \operatorname{sign}(\mu^R_{n+1} - \rho^R_{n+1})\left(\frac{M_{n+1} - m_{n+1}}{k}\right)^{\beta}$$

$$O^R_{n+1}(x) = \min\left(M_{n+1} + k\,|x - z_R|^{1/\beta},\; M_{n+1} + k\,|x - \mu^R_{n+1}|^{1/\beta}\right)\mathbf{1}_{J^R_{n+1}}(x)$$

where $J^R_{n+1} = [\min(z_R, \mu^R_{n+1}), \max(z_R, \mu^R_{n+1})]$.

The vertex $V'_R$ that will replace $V_R$ has the following coordinates:

・ Its abscissa is $u_R = \frac{1}{2}(z_R + \mu^R_{n+1})$

・ Its ordinate is $H_R = M_{n+1} + k\,|\mu^R_{n+1} - u_R|^{1/\beta}$

$V'_R$ will also be identified by its neighbours:

・ The left neighbour $L(\min(z_R, \mu^R_{n+1}), M_{n+1})$

・ The right neighbour $R(\max(z_R, \mu^R_{n+1}), M_{n+1})$

Let us set:

$$\tilde G_{n+1}(x) = O^L_{n+1}(x) + O^R_{n+1}(x) + G_{n+1}(x)\,\mathbf{1}_{x \notin J_{n+1}}(x),$$

where $J_{n+1} = [\min(z_L, \mu^L_{n+1}), \max(z_R, \mu^R_{n+1})] = J^L_{n+1} \cup J^R_{n+1}$.

Furthermore, the vertex $V_n$ is eliminated and replaced by $V'_R$ and $V'_L$. Hence, we have $n + 1$ vertexes, which we organize in increasing order, yielding:

$$V_1 \le V_2 \le \dots \le V_{n+1}$$

2.3. Convergence Theorem

Theorem 1. Let $f$ be a $(C, \frac{1}{\beta})$-Hölderian function defined on the interval $[a, b]$. The sequence $(H_n)_n$ defined above decreases to the maximum of $f$.

Proof. Denote by $\varphi = \max_{x \in [a, b]} f(x)$ and $\Phi = \{x \in [a, b];\, f(x) = \varphi\}$.

For all $x \in [a, b]$, $G_n(x) \ge f(x)$ for all $n$. This involves that:

$$H_n = \max_{x \in [a, b]} G_n(x) \ge \max_{x \in [a, b]} f(x) = \varphi.$$

As $\varphi_{n+1} = \max(f(x_0), \dots, f(x_{n+1}))$ and by the construction of $M_n$, we deduce that:

$$M_{n+1} \le \varphi_{n+1} \le H_{n+1}.$$

Hence:

$$|H_{n+1} - \varphi_{n+1}| \le |H_{n+1} - M_{n+1}| \le k\,|x_{L_{n+1}} - x_{n+1}|^{1/\beta} \le k\left|\tfrac{1}{2}(x_{L_{n+1}} - x_{R_{n+1}})\right|^{1/\beta} \le k\left|\tfrac{1}{2^2}(x_{L_n} - x_{R_n})\right|^{1/\beta} \le \dots \le k\left|\tfrac{1}{2^n}(x_{L_1} - x_{R_1})\right|^{1/\beta},$$

which tends to 0.

As $(\varphi_n)$ is an increasing bounded sequence, it converges. Suppose that $\varphi_n$ converges to $\mu \ne \varphi$. As $f([a, b])$ is compact, $\mu \in f([a, b])$. Let $y_n \in [a, b]$ be such that $\varphi_n = f(y_n)$. As $[a, b]$ is compact, the sequence $(y_n)$ admits a subsequence $(y_{n_k})$ converging to some $z \in [a, b]$. The continuity of $f$ implies that $f(z) = \mu$.

Let $\varepsilon = \varphi - \mu$. Since $(y_{n_k})$ converges to $z$, $\exists K, \forall k \ge K, |y_{n_k} - z| < \left(\frac{\varepsilon}{2C}\right)^{\beta}$. The Hölder property implies that $|f(y_{n_k}) - f(z)| \le C\,|y_{n_k} - z|^{1/\beta} \le \frac{\varepsilon}{2}$. This means that for $k \ge K$:

$$\varphi_{n_k} = f(y_{n_k}) \le f(z) + \frac{\varepsilon}{2} = \mu + \frac{\varepsilon}{2}.$$

On the other hand, for $n \ge n_k$:

$$G_n(x) \le G_{n_k}(x) \le f(y_{n_k}) + C\,|x - y_{n_k}|^{1/\beta} = \varphi_{n_k} + C\,|x - y_{n_k}|^{1/\beta}.$$

Hence, for $x \in [a, b]$ and $n \ge n_k$ such that $|x - y_{n_k}| < \left(\frac{\varepsilon}{2C}\right)^{\beta}$, we have:

$$G_n(x) \le \varphi_{n_k} + \frac{\varepsilon}{2} \le \mu + \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varphi.$$

The real $z$ belongs to $\left]y_{n_K} - \left(\frac{\varepsilon}{2C}\right)^{\beta},\, y_{n_K} + \left(\frac{\varepsilon}{2C}\right)^{\beta}\right[$; then there exists $j > K$ such that:

$$|y_{n_j} - y_{n_K}| < \left(\frac{\varepsilon}{2C}\right)^{\beta}.$$

This means that $H_{n_j} = G_{n_j}(y_{n_j}) < \varphi$, which contradicts $H_n \ge \varphi$. This is absurd; hence $(\varphi_n)$ converges to $\varphi$.

2.4. Description of the Algorithm

2.4.1. Initialization

$x_0 = a$, $x_1 = b$, $n = 1$.

$M_1 = \max(f(x_0), f(x_1))$, $m_1 = \min(f(x_0), f(x_1))$,

$\mu_1 = \arg\max(f(x_0), f(x_1))$, $\rho_1 = \arg\min(f(x_0), f(x_1))$,

$z_1 = \rho_1 + \operatorname{sign}(\mu_1 - \rho_1)\left(\dfrac{M_1 - m_1}{k}\right)^{\beta}$,

$V_1\left(u_1 = \dfrac{z_1 + x_1}{2},\; H_1 = M_1 + k\,|u_1 - z_1|^{1/\beta}\right)$,

$L_1(z_1, M_1)$ and $R_1(x_1, M_1)$.

2.4.2. Iterative Steps

$\varphi_n = \max(f(x_0), f(x_1), \dots, f(x_n))$

$x_{n+1} = u_n$

$M_{n+1} = \max(f(x_{n+1}), M_n)$, $m_{n+1} = \min(f(x_{n+1}), M_n)$

With $z_L$, $z_R$, $\mu^L_{n+1}$ and $\mu^R_{n+1}$ computed as in Section 2.2:

$$V'_L \begin{cases} u_L = \frac{1}{2}(z_L + \mu^L_{n+1}) \\ H_L = M_{n+1} + k\,|\mu^L_{n+1} - u_L|^{1/\beta} \\ L(\min(z_L, \mu^L_{n+1}), M_{n+1}) \\ R(\max(z_L, \mu^L_{n+1}), M_{n+1}) \end{cases}$$

$$V'_R \begin{cases} u_R = \frac{1}{2}(z_R + \mu^R_{n+1}) \\ H_R = M_{n+1} + k\,|\mu^R_{n+1} - u_R|^{1/\beta} \\ L(\min(z_R, \mu^R_{n+1}), M_{n+1}) \\ R(\max(z_R, \mu^R_{n+1}), M_{n+1}) \end{cases}$$

Organize $V_1, V_2, \dots, V_{n-1}, V'_L, V'_R$ in increasing order: $V_1 \le V_2 \le \dots \le V_n \le V_{n+1}$.

2.4.3. Stopping Criterion

If $|H_{n+1} - \varphi_{n+1}| \le \varepsilon_0$, then stop; otherwise, set $n = n + 1$ and return to the iterative steps.
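The following Python sketch assembles the whole procedure (initialization, iterative step and stopping criterion). It keeps the vertexes in a max-heap keyed by their ordinates $H_i$; all names are ours, and this is a sketch of the method under the stated Hölder assumptions, not a reference implementation:

```python
import heapq
import math

def _vertex(p, fp, q, fq, k, beta):
    """Auxiliary vertex between abscissae p < q with values fp, fq, following
    the z-construction of Section 2.2: both ends are raised to the common
    height M, and the part of the interval below M is discarded."""
    M, m = max(fp, fq), min(fp, fq)
    mu, rho = (p, q) if fp >= fq else (q, p)         # arg max / arg min
    z = rho + math.copysign(((M - m) / k) ** beta, mu - rho)
    lo, hi = min(z, mu), max(z, mu)
    if hi <= lo:                                     # numerical edge case
        return None
    u = 0.5 * (lo + hi)                              # abscissa of the vertex
    H = M + k * (0.5 * (hi - lo)) ** (1.0 / beta)    # its ordinate
    return (H, lo, hi, u, M)

def maximize_holder(f, a, b, k, beta, eps0, max_iter=100000):
    """Sketch of the overestimator algorithm: returns (x*, f(x*)) with
    max f - f(x*) <= eps0 (up to the auxiliary O-approximation)."""
    fa, fb = f(a), f(b)
    phi, x_best = (fa, a) if fa >= fb else (fb, b)   # phi_n = best value so far
    heap = []
    v = _vertex(a, fa, b, fb, k, beta)               # initialization, n = 1
    if v:
        heapq.heappush(heap, (-v[0],) + v[1:])
    for _ in range(max_iter):
        if not heap:
            break
        negH, lo, hi, u, M = heapq.heappop(heap)     # vertex with largest H
        if -negH - phi <= eps0:                      # stopping criterion
            break
        y = f(u)                                     # evaluate at x_{n+1} = u_n
        if y > phi:
            phi, x_best = y, u
        # split into the left and right sub-vertexes V'_L and V'_R
        for p, fp, q, fq in ((lo, M, u, y), (u, y, hi, M)):
            v = _vertex(p, fp, q, fq, k, beta)
            if v:
                heapq.heappush(heap, (-v[0],) + v[1:])
    return x_best, phi

# Demo: maximize sin on [0, 2]; |sin x - sin y| <= sqrt(2)*|x - y|**(1/2) there.
x_star, f_star = maximize_holder(math.sin, 0.0, 2.0, math.sqrt(2.0), 2.0, 1e-4)
# x_star should be close to pi/2 and f_star close to 1
```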

3. α-Dense Curves

The principal tool that enables one to apply the above algorithm to a multivariate function is the α-dense curve [15] [16] [17] .

3.1. The α-Dense Curves

Definition 3. Let $X$ be a non-empty set and $S$ a subset of $X$. $S$ is said to be α-dense in $X$ if:

$$\forall M \in X, \exists M' \in S : d(M, M') \le \alpha.$$

Among the α-dense curves, we have chosen the Lissajous curves.

3.2. Lissajous Curve

In mathematics, a Lissajous curve, also known as Lissajous figure or Bowditch curve, is the graph of a system of parametric equations which describe complex harmonic motion.

3.2.1. Bidimensional Case

In the bidimensional case, a Lissajous figure can be defined by the following parametric equations:

$$\begin{cases} x(t) = a\sin(t) \\ y(t) = b\sin(nt + \phi) \end{cases} \quad \text{where } 0 \le \phi \le \frac{\pi}{2} \text{ and } n \ge 1.$$

The number $n$ is named the parameter of the curve. If $n$ is rational, it can be expressed in the form $n = \frac{p}{q}$. Hence, the parametric equations describing the curve become:

$$\begin{cases} x(t) = a\sin(pt) \\ y(t) = b\sin(qt + \phi) \end{cases} \quad 0 \le t \le 2\pi, \text{ where } 0 \le \phi \le \frac{\pi}{2p}.$$

In what follows, we set $\phi = 0$ and consider the function $\Gamma$ defined by:

$$\Gamma : [0, 2\pi] \to [0, 2\pi] \times [0, 2\pi], \quad t \mapsto \Gamma(t) = (\Gamma_1(t), \Gamma_2(t)) \tag{2}$$

where $\begin{cases} \Gamma_1(t) = \pi\sin(pt) + \pi \\ \Gamma_2(t) = \pi\sin(qt) + \pi \end{cases}$ with $p$ an even number and $q = p + 1$; its representative curve is given in Figure 5.

Theorem 2. If $\alpha = \pi\sin\left(\frac{\pi}{p}\right)$, the Lissajous curve $\Gamma$, given by (2), is α-dense in $[0, 2\pi]^2$.

Proof. Let $M_0(x, y)$ be any point in $[0, 2\pi]^2$. Let us show that there exists $t \in [0, 2\pi]$ such that:

$$d(M_0, \Gamma(t)) \le \pi\sin\left(\frac{\pi}{p}\right).$$

We set $\Gamma_1(t) = \pi\sin(pt) + \pi$ and $\Gamma_2(t) = \pi\sin(qt) + \pi$, where $p$ is an even number and $q = p + 1$.

Fix $t \in [0, 2\pi]$. Notice that the function $\Gamma_1$ is $\frac{2\pi}{p}$-periodic. Let $t' = t + \frac{2\pi}{p}$.

Consider the points $M(\Gamma_1(t), \Gamma_2(t))$ and $M'(\Gamma_1(t'), \Gamma_2(t'))$. The points $M$ and $M'$ have the same abscissa.

$$\begin{aligned} d(M, M')^2 &= (\Gamma_2(t) - \Gamma_2(t'))^2 = \pi^2(\sin(qt) - \sin(qt'))^2 \\ &= 4\pi^2\sin^2\left(\frac{q}{2}(t - t')\right)\cos^2\left(\frac{q}{2}(t + t')\right) = 4\pi^2\sin^2\left(\frac{q\pi}{p}\right)\cos^2\left(qt + \frac{q\pi}{p}\right) \\ &= 4\pi^2\sin^2\left(\frac{\pi}{p}\right)\cos^2\left(qt + \frac{\pi}{p}\right) \end{aligned}$$

Figure 5. Lissajous curve in the bidimensional case.

This distance reaches its maximum value when $\cos^2\left(qt + \frac{\pi}{p}\right) = 1$, i.e. for $t = \frac{1}{q}\left(k\pi - \frac{\pi}{p}\right)$ where $k \in \{1, 2, \dots, 2q\}$.

Hence, $d(M, M') \le 2\pi\sin\left(\frac{\pi}{p}\right)$.

As $\Gamma_1$ is surjective from $\left[0, \frac{2\pi}{p}\right]$ onto $[0, 2\pi]$, there exists $t_1 \in \left[0, \frac{2\pi}{p}\right]$ such that $x = \Gamma_1(t_1)$.

As $\Gamma_2$ is surjective from $\left[0, \frac{2\pi}{q}\right]$ onto $[0, 2\pi]$, there exists $t_2 \in \left[0, \frac{2\pi}{p}\right]$ such that $y = \Gamma_2(t_2)$.

There exists $k \in \{0, 1, \dots, p - 1\}$ such that either

$$\Gamma_2\left(t_1 + \frac{2k\pi}{p}\right) \le \Gamma_2(t_2) \le \Gamma_2\left(t_1 + \frac{2(k+1)\pi}{p}\right)$$

or

$$\Gamma_2\left(t_1 + \frac{2k\pi}{p}\right) \ge \Gamma_2(t_2) \ge \Gamma_2\left(t_1 + \frac{2(k+1)\pi}{p}\right).$$

This fails only when $y = 0$ or $y = 2\pi$, that is, when the point $M_0$ is on the boundary.

This yields that $M_0$ is in the segment:

$$\left[M_k\left(\Gamma\left(t_1 + \frac{2k\pi}{p}\right)\right),\; M_{k+1}\left(\Gamma\left(t_1 + \frac{2(k+1)\pi}{p}\right)\right)\right].$$

So any point $M_0$ can be framed between two points of type $M_k$ and $M_{k+1}$.

Hence, we can approximate any point of $[0, 2\pi]^2$ by a point of the Lissajous curve.

When α-densifying $[0, 2\pi]^2$ using the parametric curve $\Gamma(t)$, we choose the coefficient $p$ such that:

$$\alpha = \pi\sin\left(\frac{\pi}{p}\right).$$
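In practice, given a target density $\alpha$, one can simply scan for the smallest admissible even $p$; a sketch (function name ours):

```python
import math

def even_p_for_alpha(alpha, scale=math.pi):
    """Smallest even p with scale*sin(pi/p) <= alpha. Use scale = pi for
    [0, 2*pi]^2 (Theorem 2) and scale = a for [-a, a]^2 (Corollary 1 below)."""
    p = 2
    while scale * math.sin(math.pi / p) > alpha:
        p += 2
    return p

# Example: densifying [0, 2*pi]^2 to alpha = 0.1 gives p = 100,
# since pi*sin(pi/100) ≈ 0.0987 <= 0.1.
```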

More generally, let $a > 0$ and consider the curve $h$ whose parametric equations are:

$$h : [0, 2\pi] \to [-a, a] \times [-a, a], \quad t \mapsto \begin{cases} h_1(t) = \frac{a}{\pi}\Gamma_1(t) - a = a\sin(pt) \\ h_2(t) = \frac{a}{\pi}\Gamma_2(t) - a = a\sin(qt) \end{cases}$$

Corollary 1. For $\alpha = a\sin\left(\frac{\pi}{p}\right)$, any point in $[-a, a]^2$ can be approximated, with a precision $\alpha$, by at least one point of $h$:

$$\forall M \in [-a, a]^2, \text{ there exists } t \in [0, 2\pi] \text{ such that } d(M, h(t)) \le \alpha.$$
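A sketch of the curve of Corollary 1, reusing `even_p_for_alpha` above (names ours):

```python
def lissajous_2d(a, p):
    """h(t) = (a*sin(p*t), a*sin((p+1)*t)), alpha-dense in [-a, a]^2
    with alpha = a*sin(pi/p), per Corollary 1."""
    q = p + 1
    return lambda t: (a * math.sin(p * t), a * math.sin(q * t))

# Example: for [-1, 1]^2 and alpha = 0.05, the scan gives p = 64.
h = lissajous_2d(1.0, even_p_for_alpha(0.05, scale=1.0))
```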

3.2.2. Multidimensional Case

・ In dimension two, we defined the Lissajous curve by:

$$\Gamma : [0, 2\pi] \to [0, 2\pi] \times [0, 2\pi], \quad t \mapsto \begin{cases} \Gamma_1(t) = \pi\sin(pt) + \pi \\ \Gamma_2(t) = \pi\sin(qt) + \pi \end{cases}$$

with $p$ a given even number and $q = p + 1$.

・ In dimension three, consider $(x_1, x_2, x_3) \in [0, 2\pi]^3$. We first link $x_1$ and $x_2$ as in the bidimensional case, that is:

$$x_1 = \Gamma_1(t^*) \quad \text{and} \quad x_2 = \Gamma_2(t^*)$$

with $t^* \in [0, 2\pi]$; then we link $t^*$ and $x_3$ similarly, by setting:

$$t^* = \Gamma_1(t) \quad \text{and} \quad x_3 = \Gamma_2(t)$$

with $t \in [0, 2\pi]$. This involves:

$$\begin{cases} x_1 = \Gamma_1(t^*) = \Gamma_1(\Gamma_1(t)) \\ x_2 = \Gamma_2(t^*) = \Gamma_2(\Gamma_1(t)) \\ x_3 = \Gamma_2(t) \end{cases}$$

Hence, we obtain a parametric curve $H(t) = (H_1(t), H_2(t), H_3(t))$ defined by the following expression:

$$\begin{cases} H_1(t) = \Gamma_1 \circ \Gamma_1(t) \\ H_2(t) = \Gamma_2 \circ \Gamma_1(t) \\ H_3(t) = \Gamma_2(t) \end{cases}$$

with $t \in [0, 2\pi]$.

・ We can generalize this process to $n$ variables $(x_1, x_2, \dots, x_n)$ by linking them two by two in the same manner (see the code sketch below). At the end of the process, we get a new variable $t$ belonging to $[0, 2\pi]$ through which all variables are expressed:

$$x_i = H_i(t), \quad i = 1, \dots, n$$

where the $H_i(t)$ are defined as follows:

$$\begin{cases} H_1(t) = \Gamma_1^{n-1}(t) = \underbrace{\Gamma_1 \circ \Gamma_1 \circ \dots \circ \Gamma_1}_{n-1 \text{ times}}(t) \\ H_i(t) = \Gamma_2 \circ \Gamma_1^{n-i}(t), \quad i = 2, \dots, n \end{cases}$$

Then, let us consider the parametric curve $H$ defined by:

$$H : [0, 2\pi] \to [0, 2\pi]^n, \quad t \mapsto (H_1(t), H_2(t), \dots, H_n(t)).$$
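The recursive construction translates directly into code; the following sketch builds $H(t)$ for any $n$ by peeling the coordinates off one at a time ($x_n = \Gamma_2(t)$, then recursing on $\Gamma_1(t)$). All names are ours:

```python
import math

def make_gamma(p):
    """Gamma_1, Gamma_2 of (2), mapping [0, 2*pi] into [0, 2*pi]; q = p + 1."""
    q = p + 1
    return (lambda t: math.pi * math.sin(p * t) + math.pi,
            lambda t: math.pi * math.sin(q * t) + math.pi)

def curve_H(n, p):
    """H(t) = (H_1(t), ..., H_n(t)) with H_1 = Gamma_1^(n-1) and
    H_i = Gamma_2 o Gamma_1^(n-i) for i = 2, ..., n."""
    g1, g2 = make_gamma(p)
    def H(t):
        coords, s = [], t
        for _ in range(n - 1):
            coords.append(g2(s))   # x_n, then x_{n-1}, ... down to x_2
            s = g1(s)              # recurse on Gamma_1(t)
        coords.append(s)           # x_1 = Gamma_1^(n-1)(t)
        coords.reverse()
        return tuple(coords)
    return H
```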

Theorem 3. Let $\alpha = \pi\sin\left(\frac{\pi}{p}\right)$. The parametric curve defined by $H(t) = (H_1(t), H_2(t), \dots, H_n(t))$ such that:

$$\begin{cases} H_1(t) = \Gamma_1^{n-1}(t) \\ H_i(t) = \Gamma_2 \circ \Gamma_1^{n-i}(t), \quad i = 2, \dots, n \end{cases}$$

for $t \in [0, 2\pi]$ is α-dense in $[0, 2\pi]^n$.

Proof. Let $H(t)$ and $H\left(t + \frac{2\pi}{p}\right)$ be two points of the curve $H$. As the function $\Gamma_1$ is $\frac{2\pi}{p}$-periodic, the first $n - 1$ coordinates of these two points are equal. Proceeding as in the second part of the proof of the previous theorem, we show that:

$$d\left(H(t), H\left(t + \frac{2\pi}{p}\right)\right) \le 2\pi\sin\left(\frac{\pi}{p}\right).$$

Therefore, any point $M_0(x_1, x_2, \dots, x_n)$ can be framed between two points of the curve of type:

$$H\left(t + \frac{2k\pi}{p}\right) \quad \text{and} \quad H\left(t + \frac{2(k+1)\pi}{p}\right), \quad t \in [0, 2\pi].$$

More generally, let $a > 0$ and consider the curve $h$ whose parametric equations are:

$$h : [0, 2\pi] \to [-a, a]^n, \quad t \mapsto (h_1(t), h_2(t), \dots, h_n(t))$$

$$\begin{cases} h_1(t) = \frac{a}{\pi}\Gamma_1^{n-1}(t) - a \\ h_j(t) = \frac{a}{\pi}\Gamma_2 \circ \Gamma_1^{n-j}(t) - a, \quad j = 2, \dots, n \end{cases}$$

Corollary 2. For $\alpha = a\sin\left(\frac{\pi}{p}\right)$, any point in $[-a, a]^n$ can be approximated, with a precision $\alpha$, by at least one point of the parametric curve given by $h$:

$$\forall M \in [-a, a]^n, \text{ there exists } t \in [0, 2\pi] \text{ such that } d(M, h(t)) \le \alpha.$$
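A sketch of the scaled curve of Corollary 2, reusing `curve_H` above, together with a crude numerical spot-check of the α-density claim on a coarse grid (names and tolerances are ours):

```python
import math

def curve_h(n, p, a):
    """Affine rescaling of curve_H: maps [0, 2*pi] into [-a, a]^n,
    alpha-dense with alpha = a*sin(pi/p), per Corollary 2."""
    H = curve_H(n, p)
    return lambda t: tuple(a / math.pi * x - a for x in H(t))

# Spot-check in dimension 2: every grid point of [-1, 1]^2 should be within
# alpha of the sampled curve (plus a small discretisation slack).
a, p = 1.0, 16
h, alpha = curve_h(2, p, a), a * math.sin(math.pi / p)
pts = [h(2.0 * math.pi * i / 20000) for i in range(20000)]
worst = max(min(math.dist((x / 5.0 - 1.0, y / 5.0 - 1.0), c) for c in pts)
            for x in range(11) for y in range(11))
assert worst <= alpha + 1e-2
```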

4. Optimization of a Multivariate Hölderian Function

Let $f$ be a multivariate Hölderian function with Hölder constants $k > 0$ and $\beta > 1$.

Let us consider the following multidimensional optimization problem:

$$(P_n) \quad \min_{x \in [-a, a]^n} f(x)$$

In fact, we do not look for the exact minimum value of $f$; we just want an approximation of that value with a given accuracy $\varepsilon$.

By means of an α-dense Lissajous curve on $[-a, a]^n$, we convert the initial multidimensional problem $(P_n)$ into a one-dimensional one as follows:

$$(P_1) \quad \min_{t \in [0, 2\pi]} f^*(t)$$

where $f^* = f \circ h$ is the single-variable function approximating the multivariate function $f$ (with $h = (h_1, h_2, \dots, h_n)$ defined above).

Proposition 2. If $f$ is $(k, \frac{1}{\beta})$-Hölderian and each $h_i$, $i = 1, \dots, n$, is $(k_i, \frac{1}{\beta})$-Hölderian, then $f^* = f \circ h$ is Hölderian with constant $k\left(\sum_{i=1}^{n} k_i^2\right)^{\frac{1}{2\beta}}$ and exponent $\frac{1}{\beta^2}$.

Proof. For all $t, t' \in [0, 2\pi]$:

$$\begin{aligned} |f^*(t) - f^*(t')| &= |f(h(t)) - f(h(t'))| \le k\,\|h(t) - h(t')\|^{1/\beta} = k\left(\sum_{i=1}^{n}(h_i(t) - h_i(t'))^2\right)^{\frac{1}{2\beta}} \\ &\le k\left(\sum_{i=1}^{n}\left(k_i\,|t - t'|^{1/\beta}\right)^2\right)^{\frac{1}{2\beta}} = k\left(\sum_{i=1}^{n} k_i^2\right)^{\frac{1}{2\beta}}\,|t - t'|^{\frac{1}{\beta^2}}. \end{aligned}$$
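Proposition 2 in code: a sketch computing $f^*$ and its Hölder data, assuming the constants $k_i$ of the curve components are known (all names ours):

```python
def compose_holder(f, h, k, beta, ks):
    """f* = f o h with its Hölder constant and parameter per Proposition 2:
    if f is (k, 1/beta)-Hölderian and each h_i is (k_i, 1/beta)-Hölderian,
    then f* has constant k*(sum k_i^2)^(1/(2*beta)) and exponent 1/beta^2."""
    f_star = lambda t: f(*h(t))
    k_star = k * sum(ki * ki for ki in ks) ** (1.0 / (2.0 * beta))
    beta_star = beta * beta  # exponent 1/beta^2 means the parameter becomes beta^2
    return f_star, k_star, beta_star
```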

Let $x_{opt} = \arg\min(f)$ and $t_{opt} = \arg\min(f^*)$.

Theorem 4. If $\alpha = \left(\frac{\varepsilon}{k}\right)^{\beta}$, then $|f(x_{opt}) - f^*(t_{opt})| \le \varepsilon$.

Remark 1. The knowledge of the minimum of $f^*$ allows us to bracket the minimum value of $f$ in the interval $[f^*(t_{opt}) - \varepsilon,\, f^*(t_{opt}) + \varepsilon]$.

Proof. We set $x_{opt} = \arg\min_{x \in [-a, a]^n} f(x)$.

As $x_{opt} \in [-a, a]^n$, the α-density guarantees the existence of $t^* \in [0, 2\pi]$ such that $\|x_{opt} - h(t^*)\| \le \alpha$, so:

$$|f(x_{opt}) - f^*(t^*)| = |f(x_{opt}) - f(h(t^*))| \le k\,\|x_{opt} - h(t^*)\|^{1/\beta} \le k\,\alpha^{1/\beta} = \varepsilon.$$

Hence, if we want to estimate the optimum with an accuracy $\varepsilon$, we just have to take $\alpha = \left(\frac{\varepsilon}{k}\right)^{\beta}$.

Now suppose that there exists $x_0 \in [-a, a]^n$ such that:

$$f(x_0) < f^*(t_{opt}) - \varepsilon,$$

so that:

$$f(x_0) + \varepsilon < f^*(t_{opt}). \tag{*}$$

The α-density implies that there exists $t_0 \in [0, 2\pi]$ such that $\|x_0 - h(t_0)\| \le \alpha$, hence:

$$|f(x_0) - f(h(t_0))| \le k\,\|x_0 - h(t_0)\|^{1/\beta} \le \varepsilon,$$

$$f(x_0) - \varepsilon \le f(h(t_0)) \le f(x_0) + \varepsilon.$$

Considering (*) yields:

$$f^*(t_0) = f(h(t_0)) \le f(x_0) + \varepsilon < f^*(t_{opt}),$$

which is absurd.
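Putting everything together, here is a minimal end-to-end sketch of Section 4; it reuses `even_p_for_alpha`, `curve_h` and `maximize_holder` from the sketches above, treats the Hölder constants `ks` of the curve components as given (an assumption on our part), and minimizes by maximizing $-f^*$:

```python
import math

def minimize_multivariate(f, n, a, k, beta, eps, ks):
    """Sketch: alpha-densify [-a, a]^n with alpha = (eps/k)**beta (Theorem 4),
    reduce to one variable via f* = f o h, and run the Section 2 algorithm
    on -f*. By Remark 1, min f lies in [value - eps, value + eps]."""
    alpha = (eps / k) ** beta
    p = even_p_for_alpha(alpha, scale=a)       # ensures a*sin(pi/p) <= alpha
    h = curve_h(n, p, a)
    f_star = lambda t: f(*h(t))
    k_star = k * sum(ki * ki for ki in ks) ** (1.0 / (2.0 * beta))
    beta_star = beta * beta                    # Hölder data of f* (Prop. 2)
    t_opt, neg_value = maximize_holder(lambda t: -f_star(t), 0.0, 2.0 * math.pi,
                                       k_star, beta_star, eps)
    return h(t_opt), -neg_value                # approximate minimizer and value
```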

5. Numerical Tests (Figures 6-9)

1) $f_1(x) = \sqrt{1 - x^2}$, $x \in [0.25, 0.5]$

The Hölder constants are $k = 2$, $\beta = 2$. The accuracy is $\varepsilon = 10^{-5}$.

The result is:

$$\begin{cases} x^* = 0.5 \\ f_1(x^*) = 0.866 \\ f_{1,\mathrm{opt}} \in [f_1(x^*) - \varepsilon,\, f_1(x^*)] \end{cases}$$

2) $f_2(x) = \sum_{k=1}^{5} k\,\left|\sin((3k+1)x + k)\right|\,|x - k|^{1/5}$, $x \in [0, 10]$

$k = 77$, $\beta = 5$, $\varepsilon = 0.003$

$$x^* = 2.829917922$$

Figure 6. Curve of f2.

Figure 7. Curve of f5.

Figure 8. Curve of $f^* = f \circ h$.

Figure 9. Curves of f1, f2 and f3.

3) $f_3(x, y) = |x + y - 0.25|^{2/3} - 3\cos(x^2)$, $(x, y) \in \left[-\frac{1}{2}, \frac{1}{2}\right]^2$

$k = 2.42$, $\beta = \frac{3}{2}$, $\varepsilon = 0.01$

$$\begin{cases} x^* = (0.004, 0.253) \\ f_3(x^*) = -2.99 \\ f_{3,\mathrm{opt}} \in [f_3(x^*) - \varepsilon,\, f_3(x^*)] \end{cases}$$

4) $f_4(x, y) = \sum_{k=1}^{3} \frac{1}{k}\left|\cos\left((3k+1)(x+5) + \frac{1}{k}\right)\right|\,|x - y|^{1/3}$, $(x, y) \in [-5, 5]^2$

$k = 14.77$, $\beta = 3$, $\varepsilon = 0.1$

$$\begin{cases} x^* = (4.499796, 4.500100) \\ f_4(x^*) = 0.067788 \end{cases}$$

5) $f_5(x, y) = -\cos(x)\cos(y)\exp\left(1 - \frac{\sqrt{x^2 + y^2}}{\pi}\right)$, $(x, y) \in [-6, 6]^2$

$k = 45.265$, $\beta = \frac{1}{2}$, $\varepsilon = 0.03$

$$\begin{cases} x^* = (0.023391875, 0.01321677) \\ f_5(x^*) = -2.694161027 \end{cases}$$

6) $f_6(x_1, x_2, x_3) = \frac{1}{2}\sum_{i=1}^{3}\left(x_i^4 - 16x_i^2 + 5x_i\right)$, $(x_1, x_2, x_3) \in [-5, 5]^3$

$k = 180$, $\beta = 7$, $\varepsilon = 0.02$

$$\begin{cases} x^* = (-2.899891, -3.000102, -2.923504) \\ f_6(x^*) = -117.3248028 \end{cases}$$

7) Consider the following test functions; in [14] , the RPS method was used to optimize them.

$$\begin{cases} f_1(x_1, x_2) = -4\left|\sin(x_1)\cos(x_2)\exp\left(\left|\cos\left(\frac{x_1^2 + x_2^2}{200}\right)\right|\right)\right|, & (x_1, x_2) \in [-10, 10]^2 \\ f_2(x_1, x_2) = -\left|\cos(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|, & (x_1, x_2) \in [-10, 10]^2 \\ f_3(x_1, x_2) = -\exp\left(-\left|\cos(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|^{-1}\right), & (x_1, x_2) \in [-11, 11]^2 \end{cases}$$

In what follows, we compare our method with the RPS one (the Repulsive Particle Swarm method of global optimization).

Cite this paper

Yahyaoui, A. and Ammar, H. (2017) Global Optimization of Multivariate Holderian Functions Using Overestimators. Open Access Library Journal, 4: e3511. https://doi.org/10.4236/oalib.1103511

References

1. Ammar, H., Abbaoui, K. and Ndour, M. (1996) An Example of an Interaction Model between Two Species. Kybernetes, 25, 106-118.

2. Rios, L.M. and Sahinidis, N.V. (2013) Derivative-Free Optimization: A Review of Algorithms and Comparison of Software Implementations. Journal of Global Optimization, 56, 1247-1293. https://doi.org/10.1007/s10898-012-9951-y

3. Lavigne, D. and Cherruault, Y. (1991) Alienor-Gabriel Global Optimization of a Function of Several Variables. Mathematical and Computer Modelling, 15, 125-134. https://doi.org/10.1016/0895-7177(91)90097-Q

4. Mora, G., Cherruault, Y. and Ziadi, A. (2001) Global Optimization. A New Variant of the Alienor Method. Computers and Mathematics with Applications, 41, 63-71. https://doi.org/10.1016/S0898-1221(01)85006-9

5. Shubert, B.O. (1972) A Sequential Method Seeking the Global Maximum of a Function. SIAM Journal on Numerical Analysis, 9, 379-388. https://doi.org/10.1137/0709036

6. Ammar, H. and Cherruault, Y. (1993) Approximation of Several Variables Function by a One Variable Function and Application to Global Optimization. Mathematical and Computer Modelling, 18, 17-21. https://doi.org/10.1016/0895-7177(93)90003-H

7. Ammar, H. and Cherruault, Y. (1995) Implementation of Alienor Technique in the Multidimensional Bissection Method. Application to Global Optimization. A New Accelerated Algorithm. Kybernetes, 24, 31-40. https://doi.org/10.1108/03684929510147272

8. Evtushenko, Ya.G., Malkova, V.U. and Stanevichyus, A.A. (2009) Parallel Global Optimization of Functions of Several Variables. Computational Mathematics and Mathematical Physics, 49, 246-260. https://doi.org/10.1134/S0965542509020055

9. Gergel, V.P. and Sergeyev, Ya.D. (1999) Sequential and Parallel Algorithms for Global Minimizing Functions with Lipschitzian Derivatives. Computers and Mathematics with Applications, 37, 163-179. https://doi.org/10.1016/S0898-1221(99)00067-X

10. Sergeyev, Y.D. and Kvasov, D.E. (2013) Lipschitz Global Optimization Methods in Control Problems. Automation and Remote Control, 74, 1435-1448. https://doi.org/10.1134/S0005117913090014

11. Gourdin, E., Jaumard, B. and Ellaia, R. (1996) Global Optimization of Hölder Functions. Journal of Global Optimization, 8, 323-348. https://doi.org/10.1007/BF02403997

12. Lera, D. and Sergeyev, Ya.D. (2002) Global Minimization Algorithms for Hölder Functions. BIT Numerical Mathematics, 42, 119-133. https://doi.org/10.1023/A:1021926320198

13. Rahal, M. and Ziadi, A. (2008) A New Extension of Piyavskii's Method to Hölder Functions of Several Variables. Applied Mathematics and Computation, 197, 478-488. https://doi.org/10.1016/j.amc.2007.07.067

14. Mishra, S.K. (2007) Some New Test Functions for Global Optimization and Performance of Repulsive Particle Swarm Method. University Library of Munich, Germany, MPRA Paper 2718.

15. Mora, G. and Cherruault, Y. (1997) Characterization and Generation of α-Dense Curves. Computers & Mathematics with Applications, 33, 83-91. https://doi.org/10.1016/S0898-1221(97)00067-9

16. Mora, G., Cherruault, Y. and Ziadi, A. (2000) Functional Equations Generating Space-Densifying Curves. Computers and Mathematics with Applications, 39, 45-55. https://doi.org/10.1016/S0898-1221(00)00085-7

17. Sergeyev, Y.D., Strongin, R.G. and Lera, D. (2013) Introduction to Global Optimization Exploiting Space-Filling Curves. SpringerBriefs in Optimization, Springer, New York. https://doi.org/10.1007/978-1-4614-8042-6