d has been found, i.e., when the error is not larger than ε. The number of iterations needed to satisfy each of these conditions is studied below.
4. Convergence of the Algorithm
We now study the error and the gap of Φn in the modified Piyavskii’s algorithm as a function of the number n of iterations. The following proposition shows how they decrease as the number of iterations increases and also provides a relationship between them.
Proposition 2. Let and denote, respectively, the error and the gap after n iterations of the modified Piyavskii's algorithm. Then, for 1) 2) 3)
Proof. First, notice that and are nonincreasing sequences and is nondecreasing. After n iterations of the modified Piyavskii's algorithm, the upper bounding piece-wise concave function contains partial upper bounding concave functions, which we call first-generation functions. We call the partial upper bounding concave functions obtained by splitting these second-generation functions. must belong to the second generation for the first time after no more than another iterations of the modified Piyavskii’s algorithm, i.e., at some iteration m (m ≤ 2n − 1).
Let k be the iteration after which the maximum of has been obtained, by splitting the maximum of Then, in the case where
is in and from proposition 1, we get:
and is one of the endpoints of the interval over which the function is defined.
1) We have
If therefore, we have
If we will consider two cases:
• case 1
If then and if then and the algorithm stops.
• case 2
then we have
We deduce from these two inequalities that
2) We have
We follow the same steps as in the proof of the second point of 1) and show that
• case 1
If or then and the algorithm stops.
• case 2 First, we have
This proves the proposition.
Proposition 3. Modified Piyavskii’s algorithm (with) is convergent, i.e. either it terminates in a finite number of iterations or
Proof. This is an immediate consequence of the definition of and of 1) of proposition 2.
5. Reference Algorithm
Since the number of function evaluations needed to solve problem (P) measures the efficiency of a method, Danilin and Hansen, Jaumard and Lu suggested studying the minimum number of evaluation points required to obtain a guaranteed solution of problem (P) where the functions involved are Lipschitz-continuous on
In the same way, we propose to study the minimum number of evaluation points required to obtain a guaranteed solution of problem (P) where the functions involved satisfy the condition (1).
For a given tolerance and constant M, this can be done by constructing a reference bounding piece-wise concave function such that
with the smallest possible number of evaluation points in
Such a reference bounding piece-wise concave function is constructed with which is assumed to be known. It is, of course, designed not to solve problem (P) from the outset, but rather to provide a reference number of necessary evaluation points with which the efficiency of other algorithms can be studied. It is easy to see that a reference bounding function can be obtained in the following way:
Description of the Reference Algorithm
• root of the equation
2). Reference bounding function
• root of the equation
3). Minimum number of evaluation points
A reference bounding function is then defined as follows (see Figure 2)
6. Estimation of the Efficiency of the Modified Piyavskii’s Algorithm
Proposition 4. For a given tolerance let be a reference bounding function with evaluation points and let be a bounding function obtained by the modified Piyavskii’s algorithm with evaluation points, where nC is the smallest number of iterations such that
Then we have
and the bound is sharp for all values of
Proof. The proof is by induction on We initialize the induction at
• case 1 y1 = b (see Figure 3)
We have to show that We have
Figure 2. Reference bounding function.
Figure 3. Case y1 = b.
hence the result holds.
• case 2 y1 < b. (see Figure 4)
If nC = 2, (4) holds. In this case, we have
Assume that and (the proof is similar for). We denote by and the partial bounding functions defined respectively on and and by a maximum value of We show that
and that Indeed, we have
is in then
Figure 4. Case y1 < b.
(see the proof of proposition 2), we have
If ≤ x3 then and (4) holds.
Assume now that (4) holds for and consider nref = k + 1. The relation (4) holds for nC = 2; therefore we may assume that
If we let
the modified Piyavskii’s algorithm has to be implemented on the two subintervals [x1, x3] and [x3, x2]. There are two cases which need to be discussed separately:
• case 1 (see Figure 5)
There is a subinterval containing all the nref evaluation points of the best bounding function. Without loss of generality, we may assume that it contains all nref evaluation points of. Let and be the solution of By symmetry, we have
Let be the abscissa of the maximum of the last partial bounding function of Due to our assumption, we have
since otherwise there is another evaluation point of f on the interval
Let be the abscissa of the maximum of the partial bounding function of preceding In this case, we have
But implies that nref = 1, which contradicts the assumption nref ≥ 2. Therefore, this case cannot occur.
Figure 5. The subinterval contains all the nref evaluation points.
• case 2 None of the subintervals contains all the nref evaluation points of Φref(x). We can then apply induction, reasoning on the subintervals [x1, x3] and [x3, x2] obtained by modified Piyavskii’s algorithm. If they contain and evaluation points respectively, [x1, x2] contains evaluation points as x3 belongs to both subintervals. Then (4) follows by induction on nref.
Consider the function f(x) = x defined on [0, 1], where the constant M is taken equal to 4. For ε = 0 we have nC = nref + 1 (see Figure 6), and the bound (4) is sharp.
As noticed in remark 3, the modified Piyavskii’s algorithm does not necessarily stop as soon as a cover is found as described in proposition 4, but only when the error does not exceed ε. We now study the number of evaluation points necessary for this to happen.
Proposition 5. For a given tolerance ε, let nref be the number of evaluation points in a reference bounding function of f and nB the number of iterations after which the modified Piyavskii’s algorithm stops. We then have
Proof. Let Due to proposition 4, after iterations, we have Due to 3) of proposition 2, after iterations, we have
which shows that the termination criterion of the modified Piyavskii’s algorithm is satisfied. This proves (5).
7. Bounds on the Number of Evaluation Points
In the previous section, we compared the number of function evaluations in the modified Piyavskii’s algorithm and in the reference algorithm. We now evaluate the number nB of function evaluations of the modified Piyavskii’s algorithm itself. To achieve this, we derive bounds on nref, from which bounds on nB are readily obtained using the relation nref ≤ nB ≤ 4nref + 1.
Proposition 6. Let f be a C2 function defined on the interval [a, b]. Set
Then the number nref of evaluation points in a reference cover Φref using the constant M1 ≥ M2 is bounded by the following inequalities:
Figure 6. None of the subintervals contains all the nref evaluation points.
Proof. We suppose that the reference cover Φref has nref − 1 partial upper bounding functions defined by the evaluation points
We consider an arbitrary partial upper bounding function and the corresponding subinterval [yi, yi + 1] for i = 1, ···, nref − 1. To simplify, we move the origin to the point (yi, f(yi)). Let d = yi + 1 − yi, z = f(yi + 1) − f(yi) and h = f* + ε. We assume z ≥ 0 (see Figure 7).
Let be the partial upper bounding function defined on [yi, yi + 1] and the point where
is reached, then
We deduce that
Let g1 be the function defined on [yi, yi + 1] by:
Thus we have
Consider the function
Figure 7. The reference cover Φref has nref − 1 partial upper bounding functions.
Let x1 and x2 be the roots of the equation F1(x) = 0; they are given by
Then F1 can be written in the following way
Since θ ≥ 0 and g1 reaches its minimum at the point θ, then
This proves (6).
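As an illustrative recap of the mechanism behind (6) — under our reading that the partial upper bound on the shifted interval $[0,d]$ is the concave quadratic $q(x)=\frac{z}{d}\,x+\frac{M}{2}\,x(d-x)$ (a hypothesized form; the paper's exact formulas are elided in this copy) — the stationarity condition gives

\[
q'(x)=\frac{z}{d}+\frac{M}{2}(d-2x)=0
\quad\Longrightarrow\quad
\bar{x}=\frac{d}{2}+\frac{z}{Md},
\]

and substituting $\bar{x}$ back,

\[
q(\bar{x})=\frac{z}{2}+\frac{Md^{2}}{8}+\frac{z^{2}}{2Md^{2}}.
\]

The covering condition $q(\bar{x})\le h=f^{*}+\varepsilon$ then bounds the subinterval length $d$ from below, and summing these minimal lengths over $[a,b]$ yields a bound on $n_{\mathrm{ref}}$ of the type stated in (6).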
Now let us consider the function G defined and continuous on [yi, yi + 1] such that
Two cases are considered. Case 1: If
then, the function G is given by
its derivative is
Thus F2 is expressed as follows
Moreover, the inequality
implies (8); this proves the first inequality of (7).
Case 2: Suppose that:
Let X1 and X2 be the roots of the equation
X1 and X2 are given by
In this case, G is given by
Moreover, M0 = M1 implies the condition (9) and we have
Table 1. Computational results with .
Table 2. The number of function evaluations of modified Piyavskii's algorithm for different values of ε and M.
8. Computational Experiments
In this section, we report the results of computational experiments performed on fourteen test functions (see Tables 1 and 2). Most of these functions are test functions drawn from the Lipschitz optimization literature (see Hansen and Jaumard).
The performance of the modified Piyavskii’s algorithm is measured in terms of NC, the number of function evaluations. The number NC is compared with nref, the number of function evaluations required by the reference sequential algorithm. We observe that NC is on average only 1.35 times nref. More precisely, we have the following estimation
For the first three test functions, we observe that the influence of the parameter M is not very important, since the number of function evaluations does not increase appreciably for the same precision ε.