Int. J. Communications, Network and System Sciences, 2011, 4, 549-561
doi:10.4236/ijcns.2011.49066 Published Online September 2011 (http://www.SciRP.org/journal/ijcns)
Copyright © 2011 SciRes. IJCNS
Parallel Minimax Searching Algorithm for Extremum of
Unimodal Unbounded Function
Boris S. Verkhovsky
Department of C om put er Science, New Jersey Institute of Technology, Newark, USA
E-mail: verb73@gmail.com
Received June 27, 20 1 1; revised August 1, 2011; accepted Augu st 25, 2011
Abstract
In this paper we consider a parallel algorithm th at detects the maximizer of unimo dal function ()
f
x comput-
able at every point on unbounded interval (0, )
. The algorithm consists of two modes: scanning and detecting.
Search diagrams are introduced as a way to describe parallel searching algorithms on unbounded intervals.
Dynamic programming equations, combined with a series of liner programming problems, describe relations
between results for every pair of successive evaluations of function f in parallel. Properties of optimal search
strategies are derived from these equations. The worst-case complexity analysis shows that, if the maximizer
is located on a priori unknown interval (n
1, ]
n
, then it can be detected after
parallel evaluations of
/2 1(
p


()2log1) 1
p
cn n



( )
f
x, where p is the number of processors.
Keywords: Adversarial Minimax Analysis, Design Parameters, Dynamic Programming, Function Evaluation,
Optimal Algorithm, Parallel Algorithm, System Design, Statistical Experiments, Time
Complexity, Unbounded Search, Unimodal Function
1. Introduction and Problem Statement
Design of modern systems like planes, submarines, cars,
aircraft carriers, drugs, space crafts, communication net-
works etc. is an expensive and time consuming process.
In many cases the designers must run and repeat expen-
sive experiments, while each run is a lasting process. The
goals of these experiments can be either to maximize
performance parameters (speed, carried load, degree of
survivability, healing effect, reliability etc.) or to mini-
mize fuel consumption, undesirable side effects in drugs,
system’s cost etc.
Performance parameters that are to be maximized (or
minimized) are functions of other design parameters
(wing span in aircrafts, aerodynamic characteristics of
cars, planes, helicopters, raising cars, antennas or hy-
drodynamic profiles of submarines, ships etc.). At the
best, these functions are computable if CAD is applicable.
Otherwise, multitude of wind tunnel experiments is re-
quired for aero- or hydrodynamic evaluations. Analo-
gously, numerous statistical experiments on animals and
later on different groups of people are necessary if a new
drug is a subject of design. Such experiments may last
months or even years. For instance, in the USA devel-
opment of a new drug takes in average ten-twelve years.
In view of all factors listed above, it is natural to mini-
mize the number of experiments in attempts to design a
system with the best parameters. In most of the cases, we
can estimate an upper bound on the design parameter
under consideration ; yet this is not always the case espe-
cially if the cost of experiments is increasing or even can
be fatal in design of new drugs.
The unbounded search problem, as d escribed in [1], is
a search for a key in a sorted unbounded sequence. The
goal of the optimal unbounded search is to find this key
for a minimal number of comparisons in the worst case.
The authors describe an infinite series of sequential algo-
rithms (i.e., using a single processor), where each algo-
rithm is more a ccurate, than the p revious algorith m. In [2]
the unbounded search problem is interpreted as the fol-
lowing two-player game. Player
A
chooses an ar b itrar y
positive integer . Player may ask whether the in-
teger nB
x
is less than . The “cost” of the searching
algorithm is the number of guesses that must use in
order to determine . The goal of the player is to
use a minimal number of these guesses in the worst case.
nB
nB
B. VERKHOVSKY
550
ue o
n [4].
This number is a function . The author of that pa-
per provides lower and upper bounds on the valf
()cn . More results on nearly-optimal algorithms are
provided in [3 ], and then these resu lts are generalized for
a transfinite case i
()cn
As pointed out in [5], the problem formulated in [2] is
equivalent to the search for a maximizer of an unimodal
function ()
f
x, where . The goal of the
search is to minimize the number of required evaluations
of a function
(0,x)
()
f
x (probes, for short) in the worst case,
if the maximizer is located on a priori unspecified inter-
val , and where is a positive integer number.
In [5], the authors consider the unbounded discrete uni-
modal sequential search for a maximizer. Employing an
elaborate apparatus of Kraft’s inequality, [5], inverse
Fibonacci Ackermann’s function and, finally, a repeated
diagonalization, they construct a series of algorithms that
eventually approach lower bounds on the function .
(1n,]nn
()cn
The general theory of optimal algorithms is provided
in [6,7]. The problems where
f
is a unimodal function,
defined on a finite interval, are analyzed by many authors,
and the optimal algorithms are provided and analyzed in
[8-12]. Optimal parallel algorithms, searching for a maxi-
mum of a unimodal function on a finite interval, are dis-
cussed in [13-15]. The case where
f
is a bimodal
function is discussed and analyzed in [16-18]. The case
where additional information is available is studied in
[19,20]. The optimal search algorithm for the maximum
of a unimodal function on a finite interval is generalized
for a case of multi-extremal function in [21-23]. In all
these papers, the optimal algorithms are based on the
mere fact that a maximizer (minimizer) is located on a
priori known finite interval (called the interval of
uncertainty). The algorithms employ a procedure that
shortens the interval of uncertainty after every prob e.
Complexities of related problems are functions of the
size of interval K. Search algorithms for two-dimensional
and multidimensional unimodal functions are considered
respectively in [24,25].
K
K
In this paper we consider a parallel algor ithm find ing a
maximizer (or a minimizer) of a function
f
defined on
an unbounded interval
of R and computable at every
point
x
I
, ). Without loss of generality, we assume that
. It is easy to see that an algorithm, that detects
a maximizer, cannot employ the same or analogous
strategies as in the finite case, since the interval of un-
certainty is infinite.
(0I
Definition 1.1. A unimodal function has th e following
properties: 1) there exists a positive number
s
, such that
12
()()()
f
xfxfs for all 12
0
x
xs and
12
() () ()
f
sfx fx for all 12 ; 2) sx x ()
f
x
is not constant on any subinterval of
. The point
s
is
called a maximizer of function ()
f
x. It is not required
that
f
be a smooth or even a continuous function.
The goal of this paper is to describe and analyze an
algorithm that 1) detects an interv al of length t (t-interval,
for short) within which a maximizer of
f
is located; 2)
uses a minimal number of parallel probes (p-probes, for
short) in the worst case for the t-detection.
Definition 1.2. An algorithm is called balanced if it
requires an equal number of probes for both stages
(scanning and detection).
Definition 1.3. The algorithm that is described in this
paper is minimax (optimal) in the following sense. Let F
be a set of all unimodal functions
f
F defined on I;
t be a set of all possible strategies t
S
s
detecting a
t-interval that contains a maximizer s of function f; and
let be the number of p-probes that are re-
quired for detection of the maximizer on t-interval using
strategy t
(,)
t
sNf
s
. Then a minimax strategy t
s
detects the
maximizer for a minimal number of p-probes in the
worst case of the unbound ed function f, [17].
The Definition 1.3 implies that
min max,
tt
sS fF
tt
NsN fs
(1.1)
Remark 1.1. Although s is a priori unknown to the
algorithm designer, it is assumed in this paper that its
value is fixed. Otherwise, the algorithm designer will not
be able to provide any algorithm for t-detection of s. In-
deed, the adversary can generate a function f that is in-
creasing on any finite subinterval .
(0,)vI
2. Choice of Next Evaluation Point
2.1. Sequential Search: Single-Processor Case
Proposition 2.1. Let us consider two arbitrary points, L
and R, that satisfy inequalities 0. If LR
() ()fL fR
, then maximizer s is greater than L; if
then maximizer s is smaller than R; if ()
() ()fLfR
()fLfR
, then s is greater than L and smaller than R,
i.e., (,)
s
LR
, [10,26].
Proof follows immediately from unimodality of the
function f.
If , then a maximizer s is detected on a
finite interval, i.e.,
() ()fL fR(0, )
s
R
. Therefore for t-detection
of the maximizer s we can employ Kiefer’s algorithm for
sequential search [10,26] or the algo rithm [25] for paral-
lel search.
Suppose that f is evaluated at two points :
i
qL
and
:
j
qR
, where 0LR
 and let () ()
f
LfR
.
Two strategies are possible in this case: to evaluate f ei-
ther at a point (, )
M
LR
or at a point
M
R; {there
is no reason to evaluate f at , since the Proposition
2.1 guarantees that if q
() L
()
f
LfR, then maximizer s is
greater than L}. Let assume that function f is increasing
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
551
on interval (, )LR
.
Therefore, s is either on finite interval (,RR)
or
on infinite interval . Keeping in mind that we
consider the worst case complexity, it is reasonable to
evaluate f at a point
Rs
M
R that is on the infinite inter-
val .
,R
2.2. Multiprocessor Case
Let us consider the same function ()
f
x as in the single
processor case. Let us simultaneously evaluate it at
points 1,,
p
M
M, where p is the number of available
processors.
Consider four scenarios:
A: All or a part of the points 1,,
p
M
M are inside
interval ;
(, )LR
1
B: All points ,,
p
M
M
,,
1
are outside interval ,
i.e., every point (, )LR
p
M
M
[,] is larger than R;
C:
s
RR

[,]
;
D:
s
RR
 .
2.3. Possible Outputs in the Worst Case
For both AC and AD scenarios ()
f
x and
can be
selected in such a way that for all , for which
i
1ip
(, )
M
LM, the inequality i
() ()
f
MfR holds.
Hence, taking into account that we are dealing with the
worst case, all evaluations must be done outside interval
if
(, )LR ()()
f
Lf
qq
R.
3. Optimal Unbounded Search Algorithm as
a Two-Player Game with Referee
3.1. Sequential Search: (p = 1)
Let us consider two players A and B and a referee. Their
game consists of two stages.
At the beginning, player A selects a value 0s and
informs the referee about it. Player B selects a value
t > 0 and informs player A and the referee about his
choice.
At the first stage, B sequentially selects positive and
distinct points 12
,,,
i
q. The referee terminates the
first stage if there are points
j
q and k
q such that
j
s
q and k
s
q.
The second stage begins from state 11 1
(,, )uvw where
:
11
j
uq; :
1
j
vq; :
11
j
wq.
At the second stage, B selects points and A selects in-
tervals. Let the game be in state (,,)
j
jj
uvw of the sec-
ond stage. This stage terminates if jj
wu t. Other-
wise B selects a point
j
j
x
v, such that
j
jj
uxw
.
Then A eliminates the leftmost or the rightmost subin-
terval, [16]. The goal of player B is to minimize the
number of points required to terminate the game. The
goal of player A is to maximize the number of these
points. The adversarial approach for interpretation of the
optimal search algorithms is also considered in [2,27].
Remark 3.1. It is easy to see that B is an algorithm
designer and A is a user that selects function f and re-
quired accuracy t.
3.2. Multiple-Processor Search: (p 2)
An optimal parallel search algorithm with p processors
has an analogous interpretation. In this case, at the first
phase of the game, player B on his move selects p dis-
tinct positive points. The referee terminates the first
phase if there are at least two points to the right from s.
At the beginning of the second phase, the player A se-
lects any two adjacent subintervals and eliminates all
other subintervals. In general, at the second phase, B
selects points and A selects intervals. More specifically,
player B on his move selects p distinct points on the in-
terval and player A on her move selects any two adjacent
subintervals and eliminates all other subintervals. The
goal of the game is the same as in the single-processor
case. It is obvious that on the first stage of the game
player B must select all points in an increasing order
from one p-probe to another.
Remark 3.2. At first, we will describe the optimal
unbounded searching algorithm with one processing
element, (PE, for short). Subsequently, we will describe
and discuss the parallel minimax unbounded search with
p PEs. As it will be demonstrated, the case where p is an
even integer is simpler than the case where p is odd.
4. Structure of Unbounded Sequential
Search
Consider a finite interval K, i.e., its length K
. In
the case, if it is known a priori that
s
K, then we can
t-detect maximizer s for at most
log 1
K
toKt
probes, where
152
 , [16,17]. (4.1)
However, the situation is more complicated if f is de-
fined on unbounded interval In this case we
divide entire interval I into an infinite set of finite subin-
tervals of uncertainty11
(0, ).I
0, ]:(
I
q
; 212
:(, ]
I
qq; ;
1
:(,]
kkk
I
qq
; where
1k
k
I
I
. (4.2)
In this paper it is demonstrated that a minimax search
algorithm consists of two major modes: a scanning (ex-
panding) mode and a detecting (contracting) mode. Let
us assume for simplicity of notations that for all integer k
Copyright © 2011 SciRes. IJCNS
552
)
B. VERKHOVSKY
:(
k
ffqkk
Definition 4.1. A search algorithm is in the scanning
mode while for all function f satisfies
inequalities .
, i.e., is a result of the k-th probe. f
12 k
qq q
f
12 k
In the scanning mode we probe intervals
until this mode terminates.
ff
12
,,,,
k
II I
Definition 4.2. We say that a search algorithm is in
l-th state 1 of the scanning mode if l-th in-
terval of uncertainty 1]
lll
{,}
lll
prr
(,
I
qq
is to be eliminated
and is the next evaluation point. Here
1l
q
11
:
ll
rq
l
qI

l
; 11ll l
rq q
ffl
Remark 4.1. If 1ll, then the search switches
into the detecting mode with initial state 1, [16].
However, if 1ll
, then the search moves to the next
1l state of the scanning mode. As a result, the interval
of uncertainty 1l
I

.
{,}
ll
rr
ff
p
I
is eliminated (since1l
s
I
) and
the inter2l
val
I
is the next to be examined. Since at
any state the search can switch into the detecting mode,
the dilemma is whether to select interv2l
al
I
as small
as possible (in preparation for this switch) or, if the
search continues to stay in the scanning mode, to select
2l
I
as large as possible. The dilemma indicates that
there must be an optimalic choe of 2l
I
.
In the detecting mode, we can use an optimal strategy,
[10,16,26], which locates s on t-interval. To design a
minimax search algorithm, we must select all
12 1k in such a way, that the total number
of required probes on bo th modes is minimal in the worst
case.
qqq

Definition 4.3. We say that a set of points
is a detecting triplet if ( ,,)
ijk
qqq
ij
fff
kk
, where . (4.3)
ij
qqq
If is a detecting triplet and f is a unimodal
function, then maximizer s satisf ies inequality k
(, , )
ijk
qqq
i
qsq
,
[17].
Definition 4.4. In the following consideration,
means the minimal total number of required probes for
both modes in order to detect maximizer s in the worst
case if
(,)Ubc
(, ]
s
bc.
In the following discussion, we assume that t = 1,
unless it is specified otherwise.
Proposition 4.1. If f is an unimodal function and
, but this is a priori unknown, then
for all s is detectable after probes in
the worst case, i.e.,
1
(1,
mm
sF F

3m1]
2( 2)m

11,1 22
mm
UF Fm
 (4.4)
where probes are used in the scanning mode and
probes in the detecting mode. Here
1m
3mm
F
is m-th
Fibonacci number: ;
12
1FF 21mm m
F
F
F
for
m > 2.
Proof {by induction}: We will demonstrate that in the
scanning mode the optimal evaluation points
12 1
,,, ,
oooo
k
qqq q
k
must satisfy these properties: 1:1,
o
q
1:
oo
kk k
qq F
, for all , i.e., 2k
k
2
1
:
o
ki
k
i
qFF
2

. (4.5)
Remark 4.3. First, we demonstrate how to find an ap-
proximation a of s that satisfies inequality
s
at
, i.e.,
ats at
. Then we will show how to adjust the
algorithm, if (1,]
s
nn
(2,
F
.
1). Let 34
2](0,1sF
 ]. Consider 1
1q
22q
. Then 12
. Hence, Proposition 4.1 implies
that ff
2
, )(0
s
q
. Thus, 22aq
implies that (0,1]a
;
therefore 1
s
at

). It is easy to verify that, if
(0,1s
and 0
, then, in the worst case, two
probes are not sufficient for detection of s on t-interval.
Indeed, the adversary can select such s and f that satisfy
the following inequalities: 12
1 and 12
qsq ff
.
Hence, in that case, s is not t-detectable on inter-
val after two probes. (0,1]
2) Let now 1
(2,2
mm
sF F
)
 and . Then s
can be t-detected for mk
1
s(2, 2)2
mm
FF m
1

probes. If , then k
probes were used in the scanning mode and
mk3k
probes in the detecting mode.
3) Let 12
(1,
kk
sF F

1]

1, 2,,ik
, but it is a priori un-
known. Let for all 1, 2k

:1
ii
qF
the probes are
taken in the points 2
.
In this case the following inequalities hold
12 1kk k2
f
ffff
 .
Since 12kk k
f
ff
, then 12
is a de-
tecting triplet, and, as a result, the search is in the detect-
ing state 12
{, . Then from [8,10,26], using the
optimal search algorithm, we can detect s with accuracy
t = 1 for additional k evaluations of f. Hence the minima l
total number of required probes for both modes is equal
(, ,)
kk k
qqq

}
kk
FF

12
(2, 2)
k
F
 2
tk
UF k1
 . Q.E.T.
5. Optimal Balanced Sequential Search
5.1. The Algorithm
Assign a required accuracy t; {the scanning mode of the
algorithm begins};
:Lt
; ; :2Rt
while () ()
f
LfR
do begin
:temp L
; :LR
; ; (5.1) :RRtempt 
{(5.1) generates a sequen ce of probing states ,
12
{, },FF
1
{,
kk
}
F
F
}; end;
{the maximizer s is detected: (,)
s
temp R; the algo-
rithm is in the detecting state
{Ltemp
,RL
};
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
553
{the following steps describe the optimal detecting algo-
rithm-see [16]};
assign
:;
A
temp (5.2) :;BR:RABL;
repeat
if () ()
f
LfR then


:;:2 ;:
:; ,;
tempL LRB BR
RtempsAR


;
;
2
}
(5.3)
else


:;:2;:
:; ,;
tempR RLA AL
LtempsLB

 (5.4)
{(5.3) and/or (5.4) generate a sequence of detecting
states 11
{,},,{,
kk
F
FFF
};
until ()2;BA t
assign :( )2aAB ; {a is the approximation of the
maximi zer:
s
at}; stop.
The algorithm described above is called V-algorithm.
5.2. Optimality of Sequential Search
Proposition 5.1. The number of required probes for
t-detection of a maximizer described in Proposition 3.1 is
minimal in the worst case.
Proof. The algorithm consists of the scanning and de-
tecting modes. In the scanning mode (SM) the search is
sequentially in the probing states where
12
,,,
m
pp p

112 1
:,,,: ,
mmm
pFFp FF

.
(5.5)
On the other hand, in the detecting mode (DM) the
algorithm is in the detecting states , ,
1
1
{,}
mm
FF
2
{, }
F
F
{,
. It is known that all detecting states 1,
, 12
{,FF}
mm
}
F
F are optimal (there are no other strategies
that can t-detect s for a smaller number of probes). At the
same time the entire SM is a mirror image of the DM.
Indeed, from the beginning to the end of the SM the
search goes from scanning state 12
{, }
F
F to scanning
state 1, while in the DM the search goes from
detecting state 1
{, to detecting state 12
{,
{,
mm
FF}}
mm
FF
}
F
F.
Thus, both modes (scanning and detecting) are optimal;
therefore, the entire algorithm is optimal.
6. Complexity of Minimax Sequential Search
Let us compare the optimal search algorithms for two
cases:
1) Maximizer , but this is a priori un-
known; here b is a positive integer;
(, 1]
m
sbF
2) It is known a priori that (0,)
m
s
F
(,
. Let
be the minimal total number of required probes for
t-detection of s in the worst case if
(,)Bbc
)
s
bc.
From Proposition 4.1, if and ,
then 11
m
bF
 2m
11,1 22
mm
UF Fm
 . (6.1)
However, if 0b
, then the following inequality
holds:
0,1 22
m
UF m
. (6.2)
From [11] it follows that, if , then
4m
(0,)2
m
BFm
, otherwise
0, 0.
m
BF (6.3)
(6.1) and (6.3) imply that for all
4m

0,12 0,22
mm
UFBF m
. (6.4)
In general, if 1
(,] (1,1]
mm
sab FF
, b ut this is a
priori unknown, then

 
,2log 51logUabb obb



 (6.5)
where
is defined in (4.1).
Equality (6.5) follo ws from the fact that
5
mm
m
F

 , [13,14]. (6.6)
where
152
1
 m
, therefore lim 0
if
.
m
Thus,

51
m
m
Fo

. (6.7)
If 11
mm
FPF
, then
11,22
m
UFP m
. (6.8)
The complexities (6.1) and (6.5) can be further re-
duced if any prior information is available, [6,17], or if a
searching algorithm is based on a randomized approach,
[19,30].
Proposition 6.1. Let be the minimal number in
the worst case of the required probes to detect s on a pri-
ori unknown interval
()cn
1,
s
nn. If
12
m
FnF
1
m
, (6.9)
then

22cn m
(6.10)
Proof. First of all, the relations (6.1), (6.2), (6.5) and
(6.8) are based in the previously made assumption that
t := 1. From this assumption it follows that maximizer is
detectable on an interval of length two, .
In order to find the complexity of the algorithm if
{1 1asa  }
1,
s
nn , the scale of the search must be decreased
twice, i.e., we must select t := 1/2. Two cases must be
considered:
Copyright © 2011 SciRes. IJCNS
554 B. VERKHOVSKY
Case 1: If

111
m
F
tn
 and ; (6.11)
1
m
nF
t
then (6.10) is implied by (6.5) if t :=1/2.
Case 2: If
 
11
mm
F
tn Ft
 and (6.12)

11
m
Ftn
1,
i.e., the case where
11
m
F
t
is in the middle of the
interval
1,nnm.
It occurs if In this case the left half of
the interval mod3 0.
1,nn is out of interval

11,
m
F
t

1
m
F
t
, i.e.,

1 ,1
mm
tF t
 
1
2F
1,1/nn
and, as a result, fewer probes are required for t-detection
of s. However, in the worst case, the maximizer may be
on the right half of interval
1,nn, hen ce
for both cases. For illustration see Ta-
ble 6.1 below.
() 2(cn m2)
Thus, (6.8) implies that



2log2514.cnn on





(6.13)
7. Estimated Interval of Uncertainty
In many applications, an upper bound value Q on maxi-
mizer s can be estimated from a feasibility study. Let
5TQ d
.Q Here 451.082.d

Proposition 7.1. If
s
T
, then V-algorithm re-
quires fewer probes than Kiefer’s algorithm, [14]; if
TsT
 , then Kiefer’s algorithm and V-algorithm
require the same number of probes; if , then
Kiefer’s algorithm requires fewer probes than V-algo-
rithm.
TsQ
Proof. Let and . Then in the
worst case :
m
TF1
m
22
:m
QF



11
22
0,11, 1
0,2 2
mm
m
UFUF F
BF m

 
. (7.1)
It is easy to check that the proof follows from (6.7)
and from the fact that


2
22 51
mm
F
Fo

m
. (7.2)
Preliminary results on analysis of the optimal algo-
rithm searching for the maximum of a multimodal func-
tion on the infinite interval are provided in [28].
Table 6.1. Total number of probes as function of n.
46n 494798n 60,697 98,208n

10cn
30cn

50cn
8. Parallel Search: Basic Properties
If several processors are available, then, as it is indicated
in [29], the algorithm can be executed in a parallel mode.
[13,15] are the earliest papers on a parallel search for a
maximum of a unimodal function of one variable on a
finite interval, that are known to the author of this paper.
Although the optimal search strategies in both papers are
in essence identical, the formulation of the problem is
different. The proof of optimality of the search is more
detailed in [15]. [2] provides an idea of a parallel algo-
rithm searching for a maximum of a unimodal function
on a unbounded interval. This idea is based on an appli-
cation of the Kraft’s inequality formalism, is provided in
[2]. The authors indicate that the approach they used to
construct an infinite series of near-optimal algorithms for
the unbounded search with a single processor can be ex-
panded for a multiprocessor case. However, no details
are provi d ed.
The search algorithm described in this paper is based
on the following properties.
Proposition 8.1: Let us consider p arbitrary points
1,,
p
qq that satisfy inequalities
1
0p
qq
 . (8.1)
If
12 1
p
p
f
ff
f
 , (8.2)
then maximizer s is greater than if
1;
p
q
12 1
p
p
f
ff
 f
, (8.3)
then s is smaller than if
2;q
11 1
,
j
jj p
f
fff f

 (8.4)
then s is greater than 1
j
q
and does not exceed 1
j
q
,
i.e.,
11
,
jj
sqq

. (8.5)
Proof follows immediately from unimodality of func-
tion f.
9. Search on Finite Interval: Principle of
Optimality
Definition 9.1. (,)
p
m
I
uv
{,}.uv
is a minimal in the worst case
interval of uncertainty containing maximizer s that can
be detected after m p-probes if the search starts from the
detecting state
9.1. Properties of
,
p
m
Iuv
1)
p
l
,
p
m,
I
uv l; )
(Effectiveness of p-probes);
I uv if (9.1m
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
Copyright © 2011 SciRes. IJCNS
555
2)
1122
,
p
ml
 
,
p5)
,
pp
mm
I
uv
(Monot
I uv if 12
uu; 12
vv
,
I
cu cvcIuv for every ; (9.5) 0c; (9.2)
3)
onicity of uncertainty); (Homogeneity).
p
mm
 
,,
p
I
uvI vu; (Symmetricity); (9.3)
9.2. Properties of (,)
p
m
I
uv if p = 2
4
)
,
 
,
pq
mm
I
uv I
Efficiency of paral
uv if pq; (9.4)
(leliza tion);Proposition 9.1: Let . Then uv
(9.6)
(9.6) is a functional equation of dynamic programming; it implies the following property. Proofs of this and the follow-

 

 

13
12
22
11 131313
,
2
22
1211 211 22
min max,max,,,max,;
,min
min max,max,,,max,.
mm
yuyv
m
mm
yyu
IuyyyIy uyvy
IuvIy yuyyIuyy yv







 


ing Propositions 9.2-9.5 are provided in the Appendix.
Proposition 9.2: Let uv; then


21
2
21
, if ;
2
,
,; 0 if .
m
m
m
uv
Iv uv
Iuv
I
uzzzu uv



 
(9.7)
.3. Odd Number of Processors
roposition 9.3: If ; and then
9
P21pruv,
   
 
21 1
r1
21
21
1
,, 1;
,,1 , 1;
m
r
mr
m
I
rk vk
uku rkvkkrkvukrk
Iu
vIrkvkukurkvrk krkvuk

 
  

(9.8)
here for )
2
w1/kr


er branch of
2
(1) (rkvkuku rkv
(9.8) and for 0/kr
in the upp
wer branch of
() (1)()rkvkuku rkv in the lo
(9.8).
9.4. Even Number of Processors
roposition 9.4: If
P2pr
and then for all uv,
12kr
the fg dynamprogramming
: ollowinic
equation holds
 
 
21
2
21
1,11, 11
(,) 1,, 1, 01.
r
m
r
mr
m
;
I
rkvkukurkvrkrkvukrk
Iuv Iuvrzzkurkvzuvk
 
 
 

(9.9)
.5. Defining Rules of Optimal Detecting States
efinition 9.2. If there exists a pair of positive numbers
and
9

2121
1
,,
rr
mmmm mmm
IcdIdcrd

 , (9.11)
whic
h means that
D
c and d such that c>d and for all non-negative numbers u
and v
uv
1:;
mm
cd
1:
mm m
dcrd
 ;
p is even, then
(9.12)
Proposition 9.6. if
cd (,)(, ),
pp
mm
I
uvI cd (9.10)
then is an opimal detecting
st
two propositions can be proved by in-
du

22
1
,1,,d
rr mm
IcdIcd rd
{, }cd
finitio
timal detecting state.
Den 9.3. Let {, }
kk
cd be the opt
ate starting from whin be located after k addi-
tional p-probes.
The following
ch s ca
ction:
Proposition 9.5. if p is odd, then
mm
mmmm

which means that

11
:; :1
mm m mmm
ddzc cdrd
1
.

 (9.13)
Remark 9.1. t
m
dcons
in (9.13).
1) and 2) im (9.12) and (9
op states ;
ply defining rules.13) for
timal detecting,}
k
d; here 01
{
k
cz
556 B. VERKHOVSKY
00
:1; :; :; :2.tdzctzrp


(9.14)
Proposition (1) and (2) imdefining rulesply the
optimal detecting states : if p is odd, t
if p is even, then for all
.
(9.16)
Both these rules for p odd and even can be generalized
as: assign
for
hen for all {,}
kk
cd
1

111
:;:;
kk kk
crc c

  (9.15)
k
k
d d
0k

:1crc
 
111
11
:;
11:
k
kkkk
k
kk
dz
d d
rc rdtrz
 

 

:1mod2;p
 (9.17)
then


11
12;
kk
p cd

 

11
:
:mod2.
k
kkk
c
dpcd


(9.18)
The following two examples and Table 9.1
{with t = 1;
00
12; 12; 12zc d} (9.19)
show how steps k
ar
for vf processors
optimal search e changing
arious number o and
k
cd
.
Example 9.1. Let p = 3; the n :22;
rp

1:12122;cr 1:12d;

11
:22125rcd;
2

322
:252crcd 
Example 9.2: Let p = 4; then for
c21
:2dc
14;:5c;
;
32
d
all 1k :12
k
d
; and

1000
:3 312;ccdd

2
21
:33 ;ccdd
1112

3
3222
:33 12;ccdd
10. Search on Infinite Interval wi Even
Number of Processors {p = 2r}
bes
th
10.1. Scanning Mode
Let us select the first p pro11 211
,,,
p
qq q.
and qq
If maximizer
1,1
0, p
sq
t12,1
1
ii

on a fite inte for
ected rval
-probe.
general
he )th detecting state. It
e are required for
alternating
all 2ip , then s will be deni
after the very first p
In, if
1, 1,
,
kpk
sqq
, then s will be detected
on a finite interval after k p-p r o bes.
10.2. Detecting Mode
Suppose the search is in t
eans that at most k (1k-
+ 1 p-prob
s. p-pr
m
t-detection of the maximizer obes will divide the
larger interval into p + 1 sub-intervals
,,,,,,
kk kkk k
dcdc cd. In the k-th detecting state, the
search will be either in {,}
kk
dc state or in the {, }
kk
cd
state. Both of these states are equivalent, i.e., they re-
er of p-probes for the t-detection
and the symmetrical chprobes. Schematical
can be described in the following diagram:
1k
c
quire the same numboice of ly it
11
,;,,,,,,, ,
k kkkkkkkkkkk
d ccdcdcdcddc


(10.1)
Here a semico
1k
dlon separates the “leftover” interval
of the prev
,,,,,,
kk kkk k
dcdc cd
peIt
detecting states descri
in
ious state from
, where t
p =
the new sub-i
he pair
nique.
6 and let the search
ntervals
)
Ibe
(,dc
kk
ated r times.
is important to notice for further application that the
bed above are not undeed,
let us consider the search with
is re-
the detecting state {,8 }
x
x
, where x is an integer vari-
able, and 17.x
Then for any x, the maximizer s is
t-detectable after one p-probe only: divide the left inter-
val into x equal subs using 1
interval
x
probes and
divide the rval into 8
right inte
x
equal intervals using
81
x
probes. Let x = 5. Applying the schematic de-
scription (10.1) of search we h av e
{5, 3}[1,1,1,1,1;1,1,1]{1,1}1x, then
[1;1,1,1,1,1,1,1] {1,1}. In general, for any even
p we consider a detecting state {1, k
. If
onto
1)k
r
{1,7}
divide the right interval
nating lengths equal
2( 1) 1}r
ubintervals with alter-
. Let us
p s
112(
and one. Then from
the diagram (10.1) one can see tha

t
 
11 1
1,1 1
1;21 1,1,21 1,1,
k
kk k
r
rr
 


1 1r

1
1, 2( 1)1
{1,1}1;1,1, ,
k
p



1, 21
,1 1,1
rr

,1, 2
1
1 .
 
} is the
can be


Therefore, the
optimal detecting state starting f
t-detected after k p-pr
detecting state
s.

k
{1, 2(1)1r
rom which s
obe
Table 9.1. Search intervals as functions of number of proc-
essors p.
p 2 3 4
111
,,cdz 1.5; 0.5; 2 2; 0.5; 2.5 2.5; 0.5; 3
222
;;cdz 3.5; 0.5; 4 5; 2; 7 8.5; 0.5; 9
26.5; 0.5; 27
p
333
;;cdz 7.5; 0.5; 8 14; 5; 19
5 6 7
111
,,cdz 3; 0.5; 3.5 3.5; 0.5; 4 4; 0.5; 4.5
222
;;cdz 10.5; 3; 13.5;18; 4; 22
4
5 15.
6
0.5; 16
333
;;cdz 0.5; 10.5; 513 .5; 0.5; 6488; 18; 106
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
557
11. Interomon k
ort), i.e., PE(i) and PE(i + 1) are connected with MEM( i)
. (1
Then for large m(p) and m(1) holds that
ed probes in
parallel and sequential modes are respectiv
-Prcessor ComunicatiNetwor
1) All PEs are connected with
2) Two adjacent PEs share a memory unit (MEM, for
a linear bus;
sh
and can read from it concurrently.
12. Speed-up and Efficiency of
Parallelization
Let ()1 ()mp mp
bnb
 and (1) 1
bnb
 (1)mm
2.1)
 
log1 log1mpumu .

p (12.2)
On the other hand, the numbers of requir
ely equal
21
p
cn mp and
1211cn m. (12.3)
Let’s define the speed-up of parallelization as
 
1pp
s
nc cn. n (12.4)
Then (12.2) and (12.3) imply that
 

log
log 1
pu
up
sn.
If efficiency of parallelization is define
(12.5)
d as

:
pp
en snp, (
th
12.6)
en
 

1log 1
pp
en cn pcnpu



Since for the large number p of processors
logup. (12.7)

21up p


;
therefore for a large n
(see (A.15) and Table A.1 in Appendix) (12.8)
 


log2 1log
p
log
p15 2
s
np
; (12.9)


and finally

log
p
en pp
. (12.10)
13. Acknowledgements
Willard Miranker and
J. Watson Research Ce
nd Henryk Wozniakowski of Columbia University for
] J. L. Bentley and A. C.-C. Yao, “An Almost Optimal
Information Pro-
Vol. 5, No. 1, 1976, pp. 82-87.
I express my appreciation to
Shmuel Winograd of Thomas nter
a
their comments and discussions. I also thank my former
students Swetha Medicherla and Aleksey Koval for sug-
gestions that improved the style of this paper.
14. References
[1 Algorithm for Unbounded Searching,”
cessing Letters,
doi:10.1016/0020-0190(76)90071-5
[2] R. Beigel, “Unbounded Searching Algorithm,” SIAM
Journal of Computing, Vol. 19, No. 3, 1990, pp. 522-537.
doi:10.1137/0219035
[3] E. M. Reingold and X. Shen, “More Nearly-Optimal Al-
gorithms for Unbounded Searching, Part I, the Finite
Case,” SIAM Journal of Computing, Vol. 20, No. 1, 1991,
pp. 156-183. doi:10.1137/0220010
[4] E. M. Reingold and X. Shen, “More Nearly-Optimal Al-
gorithms for Unbounded Searching, Part II, the Transfi-
nite Case,” SIAM Journal of Computing, Vol. 20, No. 1,
1991, pp. 184-208. doi:10.1137/0220011
[5] A. S. Goldstein and E. M. Reingold, “A Fibonacci-Kraft
Inequality and Discrete Unimodal Search,” SIAM Journal
of Computing, Vol. 22, No. 4, 1993, pp. 751-777.
doi:10.1137/0222049
[6] A. S. Nemirovsky and D. B. Yudin, “Problems Complex-
ity and Method Efficiency in Optimization,” W
terscience, New York, iley-In-
1983.
er, “Minimax Optimization
, 1976, pp.
[7] J. F. Traub and H. Wozniakowski, “A General Theory of
Optimal Algorithms,” Academic Press, San Diego, 1980.
[8] J. H. Beamer and D. J. Wild
of Unimodal Function by Variable Block Search,” Man-
agement Science, Vol. 16, 1970, pp. 629-641.
[9] D. Chasan and S. Gal, “On the Optimality of the Expo-
nential Function for Some Minimax Problems,” SIAM
Journal of Applied Mathematics, Vol. 30, No. 2
324-348. doi:10.1137/0130032
[10] J. C. Kiefer, “Sequential Minimax Search for a Maxi-
mum,” Proceedings of American Mathematical Society,
Vol. 4, No. 3, 1953, pp. 502-506.
doi:10.1090/S0002-9939-1953-0055639-3
[11] L. T. Oliver and D. J. Wilde, “Symmetric Sequential
Minimax Search for a Maximum,
Vol. 2, No. 3, 1964, pp. 169-175. Fibonacci Quarterly,
ement Science, Vol. 12, No. 9, 1966, pp.
[12] C. Witzgall, “Fibonacci Search with Arbitrary First
Evaluation,” Fibonacci Quaterly, Vol. 10, No. 2, 1972,
pp. 113-134.
[13] M. Avriel and D. J. Wilde, “Optimal Search for a Maxi-
mum with Sequences of Simultaneous Function Evalua-
tions,” Manag
722-731. doi:10.1287/mnsc.12.9.722
[14] S. Gal and W. L. Miranker, “Optimal Sequential and
Parallel Search for Finding a Root,” Journal of Combi-
natorial Theory, Series A, Vol. 23, No. 1, 1977, pp. 1-14.
doi:10.1016/0097-3165(77)90074-7
[15] R. Karp and W. L. Miranker, “Parallel Minimax Search
for a Maximum,” Journal of Combinatorial Theory, Vol.
4, 1968, pp. 59-90.
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
Copyright © 2011 SciRes. IJCNS
558
No. 2, 1989, pp. 238-250.
[16] B. Verkhovsky, “Optimal Search Algorithm for Extrema
of a Discrete Periodic Bimodal Function,” Journal of
Complexity, Vol. 5,
doi:10.1016/0885-064X(89)90006-X
[17] B. Veroy (Verkhovsky), “Optimal Algorithm for Search
of Extrema of a Bimodal Function,” Journal o
ity, Vol. 2, No. 4, 1986, pp. 323-332. f Complex-
doi:10.1016/0885-064X(86)90010-5
[18] B. Veroy, “Optimal Search Algorithm for a Minimum of
a Discrete Periodic Bimodal Func
Processing Letters, Vol. 29, No. 5, 19tion,” Information
88, pp. 233-239.
doi:10.1016/0020-0190(88)90115-9
[19] S. Gal, “Sequential Minimax Search for a Maximum
When Prior Information Is Available,” SIAM Journal
Applied Mathematics, Vol. 21, No. 4,of
1971, pp. 590-595.
doi:10.1137/0121063
[20] Yu. I. Neymark and R. T. Strongin, “Informative Ap-
proach to a Problem of Search for an Extremum of a
Function,” Technicheska
ya Kibernetika, Vol. 1, 1966, pp
ubert, “A Sequential Method Seeking the Glob
a Global Extre-
. Hoffman and P. Wolfe, “Minimizing a Unimodal
eneralization of Fibonacci Search to
proxima-
.
7-26.
[21] R. Horst and P. M. Pardalos, “Handbook of Global Opti-
mization,” Kluwer Academic Publishers, Norwell, 1994.
[22] B. O. Shal
Maximum of a Function,” SIAM Journal of Numerical
Analysis, Vol. 3, No. 1, 1972, pp. 43-51.
[23] L. N. Timonov, “Search Algorithm for
mum,” Technicheskaya Kibernetika, Vol. 3, 1977, pp. 53-
60.
[24] A. J
Function of Two Integer Variables,” Mathematical pro-
gramming, II. Mathematical Programming Studies, Vol.
25, 1985, pp. 76-87.
[25] A. I. Kuzovkin, “A G
the Multidimensional Case,” Èkonomka i Matematiches-
kie Metody, Vol. 21, No. 4, 1968, pp. 931-940.
[26] J. C. Kiefer, “Optimal Sequential Search and Ap
tion Methods under Minimum Regularity Assumptions,”
SIAM Journal of Applied Mathematics, Vol. 5, 1957, pp.
105-136. doi:10.1137/0105009
[27] S. Gal, “A Discrete Search Game,” SIAM Journal of Ap-
plied Mathematics, Vol. 27, No. 4, 1974, pp. 641-648.
doi:10.1137/0127054
[28] B. Verkhovsky, “Optimal Algorithm for Search of a
g
Maximum of n-Modal Function on Infinite Interval,” In:
D. Du and P. M. Pardalos, Eds., Minimax and Applica-
tions, Academic Publisher, Kluwer, 1997, pp. 245-261.
[29] B. Verkhovsky, “Parallel Minimax Unbounded Searchin
for a Maximum of a Unimodal Function,” Research Re-
port, CIS-95-03, New Jersey Institute of Technology,
Newark, 1995, pp. 1-24.
B. VERKHOVSKY
559
Appendix
A1. Complexity Analysis
A1.1. Basic Parameters
k
a interval added before k-th p-probe is computed;
k
g
smaller interval on the k-th scanning state of the
search;
k larger interval on the k-th scanning state of the
search;
h
k interval of uncertainty eliminated from the search
after the k-th p-probe;
w
k total interval added before the k-th p-probe is per-
formed;
t
k total interval eliminated from the search as a result
of k p-pr o bes .
b
A1.2. Basic Relations: {odd p}


2
22 1
;
:;
kkkk
kkk
kkk
aphpgrhrg
rg hg
ahh
 
 
 


k
(A.1)
k and k
b satisfy the following recursive relations for
all
w3:k
1
; :;
kkkkkkk
wahgwhh
 (A.2)

11
:
kk
thhr
;
;
1.
k
z
(A.3)
11
; :.
kk kkkk
bb wbth

  (A.4)
A1.3. Basic Relations: {even p}


1
1
:; :1; :2
:1;
k
kk
k
k
gzh rzwr
wrr z

  (A.5)

1
11
:1
i
kk
ki
ii
bwrrzr

 
 (A.6)
Let us consider for all sequences
1k, ,
kkk
hv
with the following defining rules:
12
:;
kkkkk k
hvzhrh h


;
(A.7)
where
10 10
:0; :1; :1; :1 and 01vv z

 . (A.8)
(A.7) implies that

12 12
:; :
kkkkkk
vrv vr

 
 
.
2
(A.9)
It is easy to demonstrate by induction that for all
k
1 .
k
krv
 (A.10)
Therefore
2.
kk k
hvrvz
 (A.11)
If p = 1, then k
vF
k
, where all k
F
are the Fibo-
nacci numbers.
Thus, all can be computed using the following
formula: k
v
;
k
k
vuw


k
(A.12)
where u and w are roots of the equation
210xrx
: (A.13)
and
and
satisfy the equations:
0
1; .vuwv
1
r

  (A.14)
From (A.13)





42 1
42; 12
ur rrrp
wr rrr p
 
 ;
(A.15)
{see Table A.1 for values of u(p)}.
For every p 1212
x
1; ; xuxwx. Indeed , from
Viete theorem
.wru
(A.16)
Since , then
ur1
0
w
. From (A.15) and (A.16)

.
k
k
vuou
 (A.17)
From (A.14)

42 42rrrrrrr



4.
(A.18)
Therefore, lim()1 2;
pp

lim( )0;
pp
 i.e.,
 
lim11.
pupr
  (A.19)
The latter limit in (A.19) means that for a large num-
ber of processors
32up p.
Examples A1: Table A.1 shows relationship between
odd p and u(p).
A2. Maximal Intervals Analyzed after m
Parallel Probes
Proposition A.1: If
1, 1
mm
sv v
1
, then m p-probes
are required in the worst case to detect the maximizer on
a final interval.
Proof is implied by (A.14) and (A.17).
Proposition A.2: If p is odd, then
Table A.1. u(p) as function of odd number of processors p.
p p = 1 p = 5 p = 9
u(p)
1521.618
321 2 3.791

54525.854
Copyright © 2011 SciRes. IJCNS
B. VERKHOVSKY
Copyright © 2011 SciRes. IJCNS
560




()
()
2log 11
2log 11;
pupm
up
cnv p
np







(A.20) () 21cn m
. Then the proof follows from (A.14) and
(A.17) respectively.
A3. Proof of Proposition A 9.1
and, if p is even, then



11
2log1 12log1 1
prm r
cn vn



 .
(A.21)
Proof: There are only three ways to decompose the in-
tervals u and v using two evaluations (semicolons indi-
cate the previous points that separate u and v):
Proof: If
1
(1,)1, 1
mm
sn nvv
 
, then
1
)

1234232 3
,;,,;,,uvyy yyuy yvyy; where 1;yu
234
yyyv
; (A.21)
2)

12341 13 3
,,;,,;,uvyyyyyu yyv y; where 12 ;yyu
(A.22)
34;yyv
3
)

1234121 2
,,,; ,,;uvyyyyyyu yyv. where 123 ;yyyu
 (A.23)
4.yv
The following recursive equation is derived from the decomposition (A.21)-(A.23):





 
23
13
12
222
12 1231323
,
2222
111 113133
,
222
11212 12112
,
min max,,,,,;
,minminmax,,,,,;
min max,,,,,.
mm m
yy
mmmm
yy
mm m
yy
IuyIyyIyvyy
IuvI yuyIuyyI yvy
I
yyIyu yyIu yyv
 

 







 

(A.24)
Taking into account that 2223
uy vyy y
 and
that 22
we can eliminate the second and
the third terms in the upper branch of the functional
Equation (A.24).
uy vvy such a way that every two adjacent intervals have equal
sums. It implies that the alternating intervals must have
equal lengths: 13;yyw
24
.yyz
yu
y 
Hence (A.26) implies that
112
;yw
2.yv
On the other hand the first term of the upper branch in
(A.24) can itself be eliminated since

21
22
12111
min,min,
mm
yv yu
I
uyIuy y


. (A.25)
In addition,


13
211 13 3
00
min,min ,
m
yu yv
I
yu yyv y
 
 .
Hence Equation (A.24) is reduced to Equation (9.1).
Q.E.D.
A4. Proof of Proposition A9.2


21
2
21
, if ;
2
,
,; 0 if .
m
m
m
uv
Iv uv
Iuv
I
uzzzu uv




(A.26)
Proof: Since, in the worst case, the adversary can se-
lect two adjacent intervals with the largest sum, from
optimality point of view, one must select the intervals in
Thus, 1()yuv2
if . uv
However, if uv
, then
13;zyy
13
uy vy
 .uz 
Let us consider for every k = 1, , p + 1 a pair of
equations:
1;
k
i
iyu
(A.27)
2
1;
p
i
ik yv

where for all i = 2, , p + 2 .
0
i
y
A5. Proof of Proposition A9.3
111
11,
11
,minmax min,.
ii
pp
mm
kp yy
ip
IuvI yy

  
ii
pp
(A.28)
Proof: There are p + 1 ways to represent the intervals
u and v as sums in (A.27). The following dynamic pro-
gramming equation describes recursive relations between
the detecting states

 

21121231 111112
111,,
,minminmax,,,,,,, ,,,,
p
pppppp
mmmmkkmkkm
kp yy
IuvIyyIyy IyyIyyIyy
 


A6. Proof of Proposition 9.4
where all control variables 12
must satisfy
constraints (A.27). Considering the worst case, a user
can select such a function f, that the algorithm must se-
lect a pair of adjacent intervals with the largest sum.
,,
p
yy
If 21pr
; and , then uv
B. VERKHOVSKY
Copyright © 2011 SciRes. IJCNS
561
   
 
21
1
21
21
1
1,, 1
,,1, 1
r
m
r
mr
m
;
;
I
rkvkuku rkvkkrkvukrk
IuvIrkvkukurkvrkkrkvuk

 
  

(A.29)
Proof: Since an algorithm designer’s goal is for a
given number of p-probes to maximize an interval on
which s can be located within a unit interval, it is obvi-
ous to select all the intervals in such a way that any two
adjacent intervals would have the same sum. This search
strategy means that intervals 1231 2
,,,,,,
pp p
yyyy yy
13 ;yyw 24;yy
must have alternate values: z
, i.e., in general, for all 12ip
21 ;
i
yw
2i.yz
A7. Proof of Proposition A9.5
If 2pr
and , then for all uv12kr
the
following dynamic programming equations holds:
 
 
 
21
2
21
1,11, 11
,1,, 1, 01.
r
m
r
mr
m
;
I
rkvkuku rkvrkrkvukrk
Iuv Iuvrzzkurkvzuvk
 
 
 

(9.9)
Proof is analogous to the proof of the Theorem 9.4.