Applied Mathematics, 2011, 2, 1372-1377
doi:10.4236/am.2011.211193 Published Online November 2011 (http://www.SciRP.org/journal/am)
A Nonmonotone Filter Method for Minimax Problems*
Qi Zhao, Nan Guo
Department of Basic, Jiangsu University of Science and Technology, Zhangjiagang, China
Department of Basic, Nanjing Institute of Technology, Nanjing, China
E-mail: johnzqzq@163.com
Received September 18, 2011; revised October 21, 2011; accepted October 28, 2011
*This work was supported by Chinese NSF Grant 10201026.
Abstract
In this paper, we propose a modified trust-region filter method for Minimax problems, which is based on the framework of the SQP-filter method and combined with a nonmonotone technique. We use an SQP subproblem to obtain an attempt step, and use the filter to judge whether the attempt step should be accepted, so as to avoid using a penalty function. The algorithm uses the Lagrange function as a merit function and a nonmonotone filter to improve the performance of the algorithm. Under some mild conditions, we prove global convergence.
Keywords: Minimax Problem, Nonmonotone, Global Convergence, Filter Methods
1. Introduction
Consider the following Minimax problem:

min_{x ∈ R^n} max_{1 ≤ i ≤ m} f_i(x)    (1)

where f_i(x): R^n → R, i = 1, …, m, are twice continuously differentiable functions.
The problem (1) can be transformed into the following equivalent problem:

min t
s.t. f_i(x) − t ≤ 0, i = 1, …, m    (2)
The Minimax problem is one of the most important non-differentiable optimization problems. It not only has broad applications in engineering design, electronic microcircuit programming, game theory and so on, but also has a very close relationship with nonlinear equations, multi-objective programming, nonlinear programming, etc. Several methods are available for solving Minimax problems, e.g., line search methods, SQP methods, trust-region methods and active-set methods. C. Charalambous and A. R. Conn [1] proposed a line search method. A. Vardi [2] presented a trust-region method combined with an active-set strategy. There are many other effective algorithms; see Z. B. Zhu [3], L. Gao [4], J. L. Zhou [5] and Y. Xue [6].
Recently, filter methods for nonlinear programming have found broad application and show good numerical behavior; see [7-12]. Filter methods are mainly of two kinds: line search methods and trust-region methods.
R. Fletcher proposed the globally convergent SQP-filter trust-region method [9]; based on this idea, Huang [13] proposed a filter method for Minimax problems. In [14], S. Ulbrich used the Lagrange function in place of the objective function and gave a local superlinear convergence proof for the SQP-filter trust-region method.
The nonmonotone technique can improve the performance of an algorithm by relaxing the acceptance criteria for the attempt step. Recently, Su [15] and Shen [16] presented nonmonotone filter methods for nonlinear programming. Motivated by their idea, we present a modified filter method for Minimax problems. The algorithm uses the Lagrangian function instead of the objective itself as a merit function, and combines it with a nonmonotone filter technique to improve the performance of the algorithm.
Consider the SQP subproblem of problem (2):

min q_k(s) = s_t + (1/2) s^T H_k s
s.t. f_i(x_k) + ∇f_i(x_k)^T s_x − t_k − s_t ≤ 0, i ∈ I,
||s|| ≤ ρ    (3)

where ρ > 0 is the trust-region radius.
We use the following notations:

s = (s_x, s_t) ∈ R^{n+1}, I = {1, 2, …, m}, e = (1, 1, …, 1)^T,
h(x, t) = f(x) − t e, f(x) = (f_1(x), f_2(x), …, f_m(x))^T,

and H_k ∈ R^{(n+1)×(n+1)} is a symmetric matrix, the approximate Hessian matrix of the subproblem (3).
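Subproblem (3) is a quadratic program in s = (s_x, s_t) with linear inequality constraints. The following minimal Python sketch shows one way the attempt step could be computed with scipy; it assumes an infinity-norm trust region (so that ||s|| ≤ ρ becomes simple bounds) and plain-array inputs, both of which are illustrative choices rather than requirements of the method.

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(fvals, grads, Hk, rho):
    """Attempt step for (3): min s_t + 0.5 s^T H_k s
       s.t. f_i(x_k) + grad f_i(x_k)^T s_x - t_k - s_t <= 0, ||s||_inf <= rho.
       fvals[i] = f_i(x_k) - t_k; grads is an (m, n) array of gradients."""
    n = grads.shape[1]
    q = lambda s: s[-1] + 0.5 * s @ Hk @ s
    jac = lambda s: np.append(np.zeros(n), 1.0) + Hk @ s
    cons = [{'type': 'ineq',
             'fun': (lambda s, i=i: -(fvals[i] + grads[i] @ s[:-1] - s[-1]))}
            for i in range(len(fvals))]
    bounds = [(-rho, rho)] * (n + 1)          # infinity-norm trust region
    res = minimize(q, np.zeros(n + 1), jac=jac, bounds=bounds,
                   constraints=cons, method='SLSQP')
    return res.x if res.success else None     # None is used here to signal failure
```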
Remark: H_k is updated by Powell's safeguarded BFGS update formula.
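A minimal sketch of Powell's damped (safeguarded) BFGS update is given below, written for a generic accepted step s and the corresponding difference y of Lagrangian gradients; the threshold 0.2 is the value customarily used with this safeguard and is assumed here.

```python
import numpy as np

def damped_bfgs_update(H, s, y, thresh=0.2):
    """Powell's safeguarded BFGS: keep H positive definite even if s^T y <= 0."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    # Damping: replace y by a convex combination of y and H s when curvature is poor.
    if sy >= thresh * sHs:
        theta = 1.0
    else:
        theta = (1.0 - thresh) * sHs / (sHs - sy)
    ybar = theta * y + (1.0 - theta) * Hs
    # Standard BFGS formula applied with the damped vector ybar.
    return H - np.outer(Hs, Hs) / sHs + np.outer(ybar, ybar) / (s @ ybar)
```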
This paper is organized as follows. The new algorithm is described in Section 2. Basic assumptions and some important lemmas are given in Section 3. The analysis of global convergence is given in Sections 4 and 5.
2. Algorithm
Now we introduce some definitions about the filter used
in this paper.
Definition 1: [14]
Lagrange function:

l(x, t, λ) = t + λ^T h(x, t),
Constraint violation function:

θ(x, t, λ) = ||h^+(x, t)||_2^2 + Σ_{i=1}^{m} (λ_i h_i(x, t))^2,

where h^+(x, t) = max{0, h(x, t)} componentwise.
For simplicity, we use the following notations:

θ = θ(x, t, λ), l = l(x, t, λ), θ_j = θ(x_j, t_j, λ_j), l_j = l(x_j, t_j, λ_j).
Definition 2:
A pair (θ_k, l_k) obtained at iteration k is said to dominate another pair (θ_l, l_l) if and only if θ_k ≤ θ_l and l_k ≤ l_l.
A filter set is a list of pairs (θ_j, l_j) such that no pair dominates any other. We denote the filter set at iteration k by F_k.
Similar to the definition of Fletcher and Leyffer [9], a point ((x, t) + s, λ), with measures θ = θ((x, t) + s, λ) and l = l((x, t) + s, λ), can be accepted by F_k ∪ {(θ_k, l_k)} if

θ ≤ β θ_j or l ≤ l_j − γ θ, for all (θ_j, l_j) ∈ F_k ∪ {(θ_k, l_k)},

where β, γ ∈ (0, 1).
Here we use the nonmonotone filter idea in [14]: a point ((x, t) + s, λ) can be accepted by F_k ∪ {(θ_k, l_k)} if

θ ≤ β max_{0 ≤ r ≤ m(k)−1} θ_{k−r}

or

l ≤ max{ l_k, Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} } − γ θ,    (4)

where (θ_{k−r}, l_{k−r}) are the pairs of the previous iterates, β, γ ∈ (0, 1),

m(0) = 0, 0 ≤ m(k) ≤ min{m(k−1) + 1, M},

Σ_{r=0}^{m(k)−1} λ_{k,r} = 1, λ_{k,r} ≥ 0, λ_{k,0} ∈ (0, 1],

and M ≥ 1 is a given integer.
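Condition (4) can be checked in a few lines of code. The following minimal sketch assumes that theta_hist and l_hist hold the values θ_{k−r} and l_{k−r} for r = 0, …, m(k)−1 (most recent last), that lam holds the weights λ_{k,r} indexed by r, and that the values of beta and gamma shown are only example settings.

```python
import numpy as np

def accept_nonmonotone(theta, l, theta_hist, l_hist, lam, beta=0.99, gamma=0.01):
    """Nonmonotone filter test (4) against the last m(k) pairs."""
    theta_hist = np.asarray(theta_hist)
    l_hist = np.asarray(l_hist)
    lam = np.asarray(lam)
    cond_theta = theta <= beta * theta_hist.max()
    ref = max(l_hist[-1], lam @ l_hist[::-1])   # max{ l_k, sum_r lam_{k,r} l_{k-r} }
    cond_l = l <= ref - gamma * theta
    return cond_theta or cond_l
```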
Update of the filter set
If (θ_k, l_k) is added to the filter, then the new filter set is updated as follows:

F_{k+1} := F_k ∪ {(θ_k, l_k)} \ { (θ_j, l_j) ∈ F_k : θ_j ≥ θ_k and l_j ≥ l_k }.
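The update amounts to discarding every filter entry that the new pair dominates. A minimal sketch, representing the filter as a Python list of (θ, l) tuples, is:

```python
def add_to_filter(filter_set, theta_k, l_k):
    """Return F_{k+1}: add (theta_k, l_k) and delete every dominated pair,
       i.e. every (theta_j, l_j) with theta_j >= theta_k and l_j >= l_k."""
    kept = [(th, l) for (th, l) in filter_set
            if not (th >= theta_k and l >= l_k)]
    kept.append((theta_k, l_k))
    return kept
```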
Definition 3: [14]

pred_k(s) = −q_k(s) = q_k(0) − q_k(s),

rared_k^l(s) = max{ l_k, Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} } − l((x_k, t_k) + s, λ(s)),

ared_k^l(s) = l_k − l((x_k, t_k) + s, λ(s)),

where λ(s) is the Lagrange multiplier of the subproblem (3).
In order to improve both feasibility and optimality, if

pred_k^l(s) := pred_k(s) − λ_k^T h(x_k, t_k) ≥ κ θ_k^{1/2}, κ > 0,    (5)

then we require

rared_k^l(s) ≥ σ pred_k(s),    (6)

where σ ∈ (0, 1) is a given constant, and call iteration k an f-type iteration.
If

pred_k^l(s) < κ θ_k^{1/2},    (7)

then we add (θ_k, l_k) to the filter set and update the filter set, calling iteration k an h-type iteration.
If the subproblem (3) is not compatible, or if

θ_k^{1/2} > κ_1 ρ, κ_1 > 0,

we also call iteration k an h-type iteration.
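The quantities of Definition 3 and the switching tests (5)-(7) translate directly into code. In the sketch below, q_s is the value q_k(s) of the quadratic model in (3), l_k and l_new are Lagrangian values at the current and trial points, l_hist and lam are as in the acceptance test above, lam_k and h_k are the current multiplier estimate and constraint vector, and kappa and sigma are example values, not constants prescribed here.

```python
import numpy as np

def reductions(q_s, l_k, l_new, l_hist, lam):
    """pred_k(s), ared_k^l(s) and the nonmonotone rared_k^l(s) of Definition 3."""
    pred = -q_s                                       # pred_k(s) = -q_k(s), q_k(0) = 0
    ared = l_k - l_new                                # ared_k^l(s)
    ref = max(l_hist[-1], np.dot(lam, l_hist[::-1]))  # nonmonotone reference value
    rared = ref - l_new                               # rared_k^l(s)
    return pred, ared, rared

def iteration_type(pred, rared, lam_k, h_k, theta_k, kappa=1.0, sigma=0.1):
    """'f': (5) and (6) hold; 'shrink': (5) holds but (6) fails (halve rho);
       'h': (5) fails, i.e. (7) holds, so (theta_k, l_k) enters the filter."""
    pred_l = pred - np.dot(lam_k, h_k)                # pred_k^l(s) of (5)
    if pred_l >= kappa * np.sqrt(theta_k):
        return 'f' if rared >= sigma * pred else 'shrink'
    return 'h'
```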
Now we describe the detailed algorithm below.

Step 0 (Initialization): Choose an initial point (x_0, t_0), constants β, γ, σ ∈ (0, 1), κ, κ_1 > 0, ρ_min > 0, an integer M ≥ 1 and an initial trust-region radius ρ ≥ ρ_min; set k := 0 and the initial filter set F_0 := ∅.

Step 1: Solve subproblem (3) to get an attempt step s.

Step 2: If subproblem (3) is not compatible or θ_k^{1/2} > κ_1 ρ, then add (θ_k, l_k) to the filter set, update the filter set, let

x_{k+1} := x_k, t_{k+1} := max_{1 ≤ i ≤ m} f_i(x_{k+1}),

set k := k + 1 and go to Step 1. If s = 0, stop; else go to Step 3.

Step 3: If (4) fails, then ρ := 0.5 ρ and go to Step 1; else go to Step 4.

Step 4: If (5) holds but (6) fails, then ρ := 0.5 ρ and go to Step 1; else go to Step 5.

Step 5: If (5) fails, add (θ_k, l_k) to the filter set and update the filter to F_{k+1}. Go to Step 6.

Step 6: Set s_k := s, (x_{k+1}, t_{k+1}) := (x_k + s_x, t_k + s_t), update H_k to H_{k+1}, set ρ := max{ρ, ρ_min}, k := k + 1, and go to Step 1.

Remark: Steps 2 to 5 are called inner circle iteration steps, while Step 1 and Step 6 are called outer circle steps.
3. Basic Assumptions and Lemmas

First we make the following assumptions:
A1: All iterates (x_k, t_k) remain in a closed and bounded convex set Ω.
A2: The functions f_i(x) are twice continuously differentiable.
A3: The matrices H_k are uniformly bounded: ||H_k|| ≤ M_H.
Lemma 1: If the solution of the subproblem (3) is s = 0, then (x_k, t_k) is a KKT point of the problem (2).
Proof: It is not difficult to get the result by using the first order necessary optimality conditions.
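For reference, the first order conditions invoked here are the standard KKT conditions of problem (2), which can be written as follows:

```latex
% KKT conditions of problem (2) at (x, t) with multiplier vector \lambda \in R^m
\sum_{i=1}^{m} \lambda_i \nabla f_i(x) = 0, \qquad \sum_{i=1}^{m} \lambda_i = 1,
\qquad \lambda_i \ge 0, \quad f_i(x) - t \le 0, \quad \lambda_i \bigl(f_i(x) - t\bigr) = 0,
\quad i = 1, \dots, m.
```

In particular, if s = 0 solves (3) with ρ > 0, the multipliers of (3) satisfy exactly these relations at (x_k, t_k).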
Lemma 2: All the elements (θ_j, l_j) in the filter set F_k satisfy θ_j > 0.
Proof. Suppose, by contradiction, that the result is not true; then there exists a pair (θ_j, l_j) ∈ F_k with θ_j = 0, which means (x_j, t_j) is a feasible solution of the problem (2). Since (x_j, t_j) is feasible, h(x_j, t_j) ≤ 0 and hence λ_j^T h(x_j, t_j) ≤ 0; moreover s = 0 is feasible for the subproblem (3), so pred_j(s) = −q_j(s) ≥ 0. Therefore pred_j^l(s) ≥ 0 = κ θ_j^{1/2}, so (7) cannot hold, and θ_j^{1/2} = 0 ≤ κ_1 ρ, so the condition of Step 2 cannot hold either. Based on the update mechanism of the filter set, (θ_j, l_j) cannot be added to the filter set, which is a contradiction. Thus, the Lemma is proven.
Lemma 3: If {l_k} is bounded below, and for all k sufficiently large one of the following holds:

θ_{k+1} ≤ β max_{0 ≤ r ≤ m(k)−1} θ_{k−r},

l_{k+1} ≤ max{ l_k, Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} } − γ θ_k,

then lim_{k→∞} θ_k = 0.
Proof. We consider the following two cases.

Case 1: θ_{k+1} ≤ β max_{0 ≤ r ≤ m(k)−1} θ_{k−r} for all k sufficiently large.
Let θ_{l(k)} = max_{0 ≤ r ≤ m(k)−1} θ_{k−r}, where k − m(k) + 1 ≤ l(k) ≤ k. Since m(k+1) ≤ m(k) + 1,

θ_{l(k+1)} = max_{0 ≤ r ≤ m(k+1)−1} θ_{k+1−r} ≤ max{ θ_{k+1}, θ_{l(k)} } ≤ max{ β θ_{l(k)}, θ_{l(k)} } = θ_{l(k)},

which implies that {θ_{l(k)}} is non-increasing and hence converges. Moreover, writing l(k+1) = j + 1 with j ≥ k + 1 − M, we have θ_{l(k+1)} ≤ β θ_{l(j)} ≤ β θ_{l(k+1−M)}; letting k → ∞ and using the convergence of {θ_{l(k)}} together with β ∈ (0, 1) gives θ_{l(k)} → 0, and therefore θ_{k+1} ≤ β θ_{l(k)} → 0.

Case 2: l_{k+1} ≤ max{ l_k, Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} } − γ θ_k for all k sufficiently large.
Since λ_{k,r} ≥ 0 and Σ_{r=0}^{m(k)−1} λ_{k,r} = 1, we have Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} ≤ max_{0 ≤ r ≤ m(k)−1} l_{k−r}, and therefore

l_{k+1} ≤ max_{0 ≤ r ≤ m(k)−1} l_{k−r} − γ θ_k.    (8)

Let l_{j(k)} = max_{0 ≤ r ≤ m(k)−1} l_{k−r}. By (8), l_{j(k+1)} ≤ max{ l_{k+1}, l_{j(k)} } = l_{j(k)}, so {l_{j(k)}} is non-increasing; since {l_k} is bounded below, {l_{j(k)}} converges. Applying (8) at iteration j(k) − 1 gives

γ θ_{j(k)−1} ≤ l_{j(j(k)−1)} − l_{j(k)} → 0,

and a standard induction over the window 0 ≤ r ≤ M − 1 (as in the analysis of nonmonotone line search methods, using m(k) ≤ M) shows that lim_{k→∞} l_k = lim_{k→∞} l_{j(k)}. Then (8) gives γ θ_k ≤ l_{j(k)} − l_{k+1} → 0, hence θ_k → 0.

Thus, the Lemma is proven.

Some assumptions are needed for the Lagrange multiplier estimates λ(s).
A4: There exist constants M_y, M_L > 0 such that all of the Lagrange multiplier estimates λ(s) satisfy

0 ≤ λ_i(s) ≤ M_y, i = 1, 2, …, m,

|λ_i(s) h_i((x_k, t_k) + s)| ≤ M_L ||s||^2, i = 1, 2, …, m.
Lemma 4: Under the assumptions A1-A4 we have

||h^+((x_k, t_k) + s)||^2 ≤ (1/4) m (n+1)^2 M_f^2 ||s||^4,    (9)

θ((x_k, t_k) + s, λ(s)) ≤ M_4 ||s||^4, M_4 := m M_L^2 + (1/4) m (n+1)^2 M_f^2,    (10)

where M_f is an upper bound of ||∇^2 f_i(x)|| on Ω (such a bound exists by A1 and A2).
Proof. Using a Taylor expansion together with the constraints of the subproblem (3),

f_i(x_k + s_x) − (t_k + s_t) = f_i(x_k) + ∇f_i(x_k)^T s_x − t_k − s_t + (1/2) s_x^T ∇^2 f_i(y_i) s_x ≤ (1/2) (n+1) M_f ||s||^2,

where y_i lies between x_k and x_k + s_x. Hence

h_i^+((x_k, t_k) + s) ≤ (1/2) (n+1) M_f ||s||^2, i = 1, …, m,

which implies that (9) holds. Moreover, by A4,

θ((x_k, t_k) + s, λ(s)) = ||h^+((x_k, t_k) + s)||^2 + Σ_{i=1}^{m} (λ_i(s) h_i((x_k, t_k) + s))^2 ≤ (1/4) m (n+1)^2 M_f^2 ||s||^4 + m M_L^2 ||s||^4 = M_4 ||s||^4,

which implies that (10) holds. Thus, the Lemma is proven.
4. The Well-Definedness of the Algorithm
Lemma 5: Let τ_k = min{ θ_j : (θ_j, l_j) ∈ F_k }. When ||s|| ≤ (β τ_k / M_4)^{1/4}, the point ((x_k, t_k) + s, λ(s)) can be accepted by F_k.
Proof. From the definition of τ_k and the result of (10), when ||s|| ≤ (β τ_k / M_4)^{1/4} we have

θ((x_k, t_k) + s, λ(s)) ≤ M_4 ||s||^4 ≤ β τ_k ≤ β θ_j for all (θ_j, l_j) ∈ F_k,

so ((x_k, t_k) + s, λ(s)) can be accepted by F_k. Thus, the Lemma is proven.
Lemma 6: Under the assumptions A1-A4, if (x*, t*) is a feasible point but not a KKT point of the problem (2), then there exist a neighborhood N of (x*, t*) and constants ε̂, ρ̂_1, ρ̂_2 > 0 such that, for all (x_k, t_k) ∈ N and all trust-region radii ρ satisfying

θ_k^{1/2} ≤ ρ̂_1 ρ and ρ ≤ ρ̂_2,    (11)

the subproblem (3) has a solution s and

pred_k^l(s) ≥ (1/3) ε̂ ρ,    (12)

rared_k^l(s) ≥ σ pred_k(s),    (13)

rared_k^l(s) ≥ γ θ((x_k, t_k) + s, λ(s)).    (14)

Proof. From the results of Lemma 7 in [13], there exist a neighborhood N of (x*, t*) and constants ε, μ > 0 such that, for all (x_k, t_k) ∈ N with θ_k^{1/2} ≤ μ ρ, the subproblem (3) has a solution s and

pred_k(s) ≥ (1/2) ε ρ.

Next we prove that there exist ε̂, ρ̂_1, ρ̂_2 > 0 such that (12)-(14) hold whenever (11) holds. Since Σ_{i=1}^{m} (λ_{k,i} h_i(x_k, t_k))^2 ≤ θ_k, we have |λ_k^T h(x_k, t_k)| ≤ √m θ_k^{1/2} ≤ √m ρ̂_1 ρ. Hence, if ρ̂_1 ≤ min{ μ, ε/(6√m) }, then

pred_k^l(s) = pred_k(s) − λ_k^T h(x_k, t_k) ≥ (1/2) ε ρ − (1/6) ε ρ = (1/3) ε ρ,

so (12) holds with ε̂ = ε. Furthermore, rared_k^l(s) ≥ ared_k^l(s), and by a Taylor expansion together with the assumptions A1-A4 there is a constant c_1 > 0 with

|ared_k^l(s) − pred_k^l(s)| ≤ c_1 ||s||^2 ≤ c_1 ρ^2,

while θ((x_k, t_k) + s, λ(s)) ≤ M_4 ρ^4 by Lemma 4. Combining these bounds with (12), a simple calculation shows that (13) and (14) hold as soon as ρ̂_2 is small enough. From the discussion above, the Lemma is proven.

Lemma 7: Under the assumptions A1-A4, the inner iteration terminates finitely.
Proof. If s = 0, the algorithm terminates at a KKT point. Otherwise, if the inner iteration did not terminate finitely, the rule for decreasing ρ would ensure ρ → 0. We consider the following two cases.
Case 1: θ_k ≠ 0. When ρ → 0, the condition θ_k^{1/2} ≤ κ_1 ρ cannot continue to hold, so the algorithm enters the restoration phase at Step 2 and the inner iteration terminates finitely.
Case 2: θ_k = 0. This means (x_k, t_k) is a feasible point and, since the algorithm did not stop, not a KKT point. When

0 < ρ ≤ min{ ρ̂_2, (β τ_k / M_4)^{1/4} },

condition (11) holds, so from the result of Lemma 6 the subproblem (3) must be compatible and (12)-(14) hold. From Lemma 5 and (14) we know that ((x_k, t_k) + s, λ(s)) can be accepted by F_k ∪ {(θ_k, l_k)}. So all the conditions for an f-type step are satisfied and the inner iteration terminates successfully, which contradicts ρ → 0.
5. Global Convergence
Lemma 8: Under the assumptions A1-A4, if the algorithm does not terminate finitely, and there are infinitely many points added to the filter set, then lim_{k→∞} θ_k = 0.
Proof. Since each point added to the filter set has been accepted according to (4), at every such iteration one of the two inequalities of Lemma 3 holds. We consider the following two cases.
Case 1: θ_{k+1} ≤ β max_{0 ≤ r ≤ m(k)−1} θ_{k−r} for all k sufficiently large. Then, from the results of Lemma 3, lim_{k→∞} θ_k = 0.
Case 2: if Case 1 is not true, define a subsequence K_1 as follows: the first element of K_1 is the first index for which

l_{k+1} ≤ max{ l_k, Σ_{r=0}^{m(k)−1} λ_{k,r} l_{k−r} } − γ θ_k

holds, and each subsequent element of K_1 is the next such index. From the definition of the filter set we can deduce that l_{k+1} ≤ l_k − γ θ_k for k ∈ K_1, and from the proof of Lemma 3, Case 2, we know that θ_k → 0 as k → ∞ in K_1. For the indices k between two consecutive elements of K_1 the first inequality of (4) holds, so, arguing as in Case 1, θ_k → 0 for these indices as well. Thus, the Lemma is proven.
Theorem 1: Under the assumptions A1-A4, if the algorithm does not terminate finitely, then there must exist an accumulation point which is a KKT point.
Proof. We discuss it in two cases.
1) Case 1: there are infinitely many h-type iterations.
From Lemma 8 we know lim_{k→∞} θ_k = 0, and from the update mechanism of the filter set there must exist an infinite subset S of h-type iterations. Without loss of generality, we can assume that lim_{k∈S} (x_k, t_k) = (x*, t*), from which we know that (x*, t*) is a feasible point. Suppose, by contradiction, that (x*, t*) is not a KKT point. Then, for k ∈ S sufficiently large, (x_k, t_k) lies in the neighborhood N of Lemma 6, and from Lemmas 6 and 7 we know that if

θ_k^{1/2} / ρ̂_1 ≤ ρ ≤ min{ ρ̂_2, (β τ_k / M_4)^{1/4} },    (15)

then all the conditions for an f-type step are satisfied and the inner iteration terminates successfully. Because θ_k → 0 (k ∈ S), the upper bound of (15) will eventually be greater than twice the lower bound θ_k^{1/2} / ρ̂_1. Initially a value ρ ≥ ρ_min is chosen; successive halving will eventually (a) locate a value of ρ in the interval (15), or (b) locate a value to the right of this interval. It is obvious that pred_k(s) > 0 under case (a). From the optimality of s, pred_k(s) is nondecreasing as ρ increases, so if (b) is true we also know pred_k(s) > 0, which means that an f-type iteration will occur, and this is a contradiction.
2) Case 2: there are only finitely many h-type iterations.
That means that, for all k ≥ K with K sufficiently large, no filter entries are made and every accepted step is an f-type step, so that

rared_k^l(s) ≥ σ pred_k(s) and pred_k^l(s) = pred_k(s) − λ_k^T h(x_k, t_k) ≥ κ θ_k^{1/2}.

Similar to the proof of Lemma 3, since {l_k} is bounded below we can deduce that

pred_k^l(s) → 0 and θ_k → 0, k → ∞.

Without loss of generality, we can assume that lim_{k→∞} (x_k, t_k) = (x*, t*), from which we know that (x*, t*) is a feasible point. Suppose, on the contrary, that (x*, t*) is not a KKT point. Note that for k ≥ K the filter set, and hence τ_K, no longer changes. From Lemma 6 and Lemma 7 we know that when

θ_k^{1/2} / ρ̂_1 ≤ ρ ≤ min{ ρ̂_2, (β τ_K / M_4)^{1/4} },

the iteration is f-type and terminates successfully. Because the right side is a constant and the left side tends to zero, the upper bound will eventually be greater than twice the lower bound θ_k^{1/2} / ρ̂_1, and when the inner iteration terminates we must have

ρ ≥ min{ ρ_min, (1/2) min{ ρ̂_2, (β τ_K / M_4)^{1/4} } },

so that, by (12),

pred_k^l(s) ≥ (1/3) ε̂ min{ ρ_min, (1/2) min{ ρ̂_2, (β τ_K / M_4)^{1/4} } } > 0,

which is a contradiction to pred_k^l(s) → 0. Thus, the theorem is proven.

6. References

[1] C. Charalambous and A. R. Conn, "An Efficient Method to Solve the Minimax Problem Directly," SIAM Journal on Numerical Analysis, Vol. 15, No. 1, 1978, pp. 162-187. doi:10.1137/0715011
[2] A. Vardi, "New Minimax Algorithm," Journal of Optimization Theory and Applications, Vol. 75, No. 3, 1992, pp. 613-634. doi:10.1007/BF00940496
[3] Z. B. Zhu, "An Improved SQP Algorithm for Solving Minimax Problems," Applied Mathematics Letters, Vol. 22, No. 1, 2009, pp. 464-469. doi:10.1016/j.aml.2008.06.017
[4] Y. H. Yu and L. Gao, "Nonmonotone Line Search Algorithm for Constrained Minimax Problems," Journal of Optimization Theory and Applications, Vol. 115, No. 2, 2002, pp. 419-446. doi:10.1023/A:1020896407415
[5] J. L. Zhou and A. L. Tits, "Nonmonotone Line Search Method for Minimax Problems," Journal of Optimization Theory and Applications, Vol. 76, No. 3, 1993, pp. 455-476. doi:10.1007/BF00939377
[6] Y. Xue, "A SQP Method for Minimax Problems," in Chinese, Journal of System Science and Math Science, Vol. 22, No. 3, 2002, pp. 355-364.
[7] R. Fletcher and S. Leyffer, "Nonlinear Programming without a Penalty Function," Mathematical Programming, Vol. 91, No. 2, 2002, pp. 239-269. doi:10.1007/s101070100244
[8] R. Fletcher, N. I. M. Gould, S. Leyffer, P. L. Toint and A. Wächter, "Global Convergence of a Trust-Region SQP-Filter Algorithm for General Nonlinear Programming," SIAM Journal on Optimization, Vol. 13, No. 3, 2002, pp. 635-659. doi:10.1137/S1052623499357258
[9] R. Fletcher, S. Leyffer and P. L. Toint, "On the Global Convergence of a Filter-SQP Algorithm," SIAM Journal on Optimization, Vol. 13, No. 1, 2002, pp. 44-59. doi:10.1137/S105262340038081X
[10] A. Wächter and L. T. Biegler, "Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence," SIAM Journal on Optimization, Vol. 16, No. 1, 2005, pp. 1-31. doi:10.1137/S1052623403426556
[11] A. Wächter and L. T. Biegler, "Line Search Filter Methods for Nonlinear Programming: Local Convergence," SIAM Journal on Optimization, Vol. 16, No. 1, 2005, pp. 32-48. doi:10.1137/S1052623403426544
[12] R. Fletcher, S. Leyffer and P. L. Toint, "A Brief History of Filter Methods," SIAG/OPT Views and News, Vol. 18, No. 1, 2006, pp. 2-12.
[13] L. P. Huang, "A Filter Method for Minimax Problems," in Chinese, Post Graduate Thesis, Suzhou University, Suzhou, 2009.
[14] S. Ulbrich, "On the Superlinear Convergence of Trust Region SQP-Filter Algorithm," Mathematical Programming, Vol. 100, Series B, 2004, pp. 217-245.
[15] K. Su and D. G. Pu, "A Nonmonotone Filter Trust Region Method for Nonlinear Constrained Optimization," Journal of Computational and Applied Mathematics, Vol. 223, No. 1, 2009, pp. 230-239. doi:10.1016/j.cam.2008.01.013
[16] R. Fletcher, S. Leyffer and C. G. Shen, "Nonmonotone Filter Method for Nonlinear Optimization," Argonne National Laboratory, Lemont, 2009.