Advances in Pure Mathematics
Vol. 8, No. 4 (2018), Article ID: 83819, 8 pages
DOI: 10.4236/apm.2018.84022

*The research was supported by the National Natural Science Foundation of China under grant 11571221.

An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem*

Siqi Tao

Department of Mathematics, College of Sciences, Shanghai University, Shanghai, China

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 12, 2018; Accepted: April 15, 2018; Published: April 18, 2018

ABSTRACT

In this paper, we first reformulate the max-min dispersion problem as a saddle-point problem. Specifically, we introduce an auxiliary problem whose optimal value gives an upper bound on that of the original problem. We then solve the saddle-point problem by an adaptive custom proximal point algorithm. Numerical results show that the proposed algorithm is efficient.

Keywords:

Maximum Weighted Dispersion Problem, Adaptive Custom Proximal Point Algorithm, NP-Hard

1. Introduction

Consider the following weighted max-min dispersion problem:

$$\max_{x \in \chi} \Big\{ f(x) := \min_{i=1,\dots,m} \omega_i \|x - x^i\|^2 \Big\}, \quad (1)$$

where $\chi = \{ y \in \mathbb{R}^n \mid (y_1^2, \dots, y_n^2, 1)^T \in K \}$, $K$ is a convex cone, $x^1, \dots, x^m \in \mathbb{R}^n$ are $m$ given points, $\omega_i > 0$ for $i = 1, \dots, m$, and $\|\cdot\|$ denotes the Euclidean norm. Let $\nu(P_\chi)$ denote the optimal value of problem (1). The problem aims to find a point $x$ in a closed set $\chi$ that is farthest, in a weighted max-min sense, from the given points $x^1, \dots, x^m \in \mathbb{R}^n$. It has wide applications in spatial management, facility location, and pattern recognition (see [1] [2] [3] [4] and the references therein). In the equal-weight case, i.e., $\omega_1 = \dots = \omega_m$, (1) has the geometric interpretation of finding the largest Euclidean sphere with center in $\chi$ that encloses no given point.
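To make the objective concrete, the following is a minimal MATLAB sketch (our own, not from the paper) that evaluates $f(x)$; it assumes the points $x^i$ are stored as the columns of a matrix X and the weights in a vector w (both names are ours).

function v = dispersion_obj(x, X, w)
% f(x) = min_i w_i*||x - x^i||^2, where x^i is the i-th column of X.
d = bsxfun(@minus, X, x);          % subtract x from every column
v = min(w(:)' .* sum(d.^2, 1));    % weighted squared distances, then the minimum
end

For instance, with equal weights w = ones(1, m), maximizing this value over $x \in \chi$ is exactly the equal-weight problem studied below.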

Without loss of generality, we assume that $\nu(P_\chi) > 0$. The weighted max-min dispersion problem is known to be NP-hard in general, even in the case of equal weights and $\chi = [-1,1]^n$ [5] or $\chi = \{x \mid \|x\| \le 1\}$ [6]. We denote these two special cases by $P_{box}$ and $P_{ball}$; they correspond to setting $K = \{ y \in \mathbb{R}^{n+1} \mid y_j \le y_{n+1}, j = 1, \dots, n \}$ and $K = \{ y \in \mathbb{R}^{n+1} \mid y_1 + \dots + y_n \le y_{n+1} \}$, respectively.

In the low-dimensional case $n \le 3$ with $\chi$ a polyhedral set, this problem is solvable in polynomial time [4] [7]. For $n \ge 4$, heuristic approaches have been proposed [1] [4].

In [5], the authors use an optimal solution of convex relaxations based on semidefinite programming (SDP) and second-order cone programming (SOCP) to construct an approximate solution of (1), and prove an approximation bound of $\frac{1 - O(\sqrt{\ln(m)\,\gamma^*})}{2}$, where $\gamma^*$ depends on $\chi$. When $\chi = \{-1,1\}^n$ or $\chi = [-1,1]^n$, $\gamma^* = O(1/n)$. This is the first nontrivial approximation bound for a convex relaxation of (1). Wang and Xia [6] then focus on $P_{ball}$ and show that the approximation bound of their algorithm, based on a linear programming relaxation, is $\frac{1 - O(\sqrt{\ln(m)/n})}{2}$.

In this paper, we focus on the equal-weight max-min dispersion problem, which we call the "max-min dispersion problem" for simplicity. We first model the max-min dispersion problem as a saddle point problem, and then adopt an adaptive custom proximal point algorithm to obtain an ε-approximation scheme¹.

The remainder of the paper is organized as follows. In Section 2, we reformulate the max-min dispersion problem as a saddle point problem. In Section 3, we propose a new adaptive custom proximal point algorithm to approximately solve the saddle point problem and establish its convergence analysis. Section 4 presents numerical comparisons between our proximal point algorithm and an SDP-based algorithm. Conclusions are drawn in Section 5.

2. Saddle Point Model

Without loss of generality, we drop the weight parameters $\omega_i$ from the objective function, since all the $\omega_i$ are equal. In the remainder of this paper, we consider the problem:

$$\max_{x \in \chi} \Big\{ f(x) := \min_{i=1,\dots,m} \|x - x^i\|^2 \Big\}. \quad (2)$$

Note that this problem has been proved to be NP-hard in general [5] [6]. Let $\Delta_m$ denote the unit simplex in $\mathbb{R}^m$, that is, $\Delta_m = \{ y \in \mathbb{R}^m \mid y \ge 0, e^T y = 1 \}$ with $e$ being the all-ones vector. Since a function linear in $y$ attains its minimum over $\Delta_m$ at a vertex, (2) is equivalent to the following saddle point problem:

$$\max_{x \in \chi} \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2. \quad (3)$$

Define
$$\phi(x,y) = \sum_{i=1}^m y_i \|x - x^i\|^2 = \sum_{i=1}^m \big( y_i \|x\|^2 - 2 y_i (x^i)^T x + y_i \|x^i\|^2 \big) = 2(Ay)^T x - 2 y^T b + \sum_{i=1}^m y_i \|x\|^2 = 2(Ay)^T x - 2 y^T b + \gamma \|x\|^2,$$

where $A = [A_1, \dots, A_m] \in \mathbb{R}^{n \times m}$ with $A_i = -x^i$, $b = (-\frac{1}{2}\|x^1\|^2, \dots, -\frac{1}{2}\|x^m\|^2)^T$, and $\gamma = \sum_{i=1}^m y_i = 1$. $\phi(x,y)$ is convex in $x$ and concave in $y$ separately, although the saddle point model (3) as a whole is neither convex nor concave.
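The compact form of $\phi$ can be checked numerically. Below is a small MATLAB sanity check (our own script), assuming the sign conventions $A_i = -x^i$ and $b_i = -\frac{1}{2}\|x^i\|^2$ reconstructed above; it also illustrates that $\min_{y \in \Delta_m} \phi(x,y) = \min_i \|x - x^i\|^2$, since the minimum of a linear function over the simplex is attained at a vertex.

% Compare the direct sum with the compact bilinear form of phi(x,y).
n = 5; m = 8;
X = randn(n, m);                        % columns are the points x^i
A = -X;                                 % A_i = -x^i
b = -0.5*sum(X.^2, 1)';                 % b_i = -||x^i||^2/2
x = randn(n, 1);
y = rand(m, 1); y = y/sum(y);           % a point of the simplex, so gamma = 1
dist2 = sum(bsxfun(@minus, x, X).^2, 1);   % ||x - x^i||^2, i = 1,...,m
direct = dist2 * y;                        % sum_i y_i*||x - x^i||^2
compact = 2*(A*y)'*x - 2*y'*b + norm(x)^2;
disp(abs(direct - compact))             % ~1e-15
disp(min(dist2))                        % equals the minimum over the simplex, g(x)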

Define $g(x) = \min_{y \in \Delta_m} \phi(x, y)$, and let $(x^*, y^*)$ be the optimal saddle point of (3). Note that $x^*$ is then necessarily a maximizer of $g(x)$, and
$$g(x^*) = \max_{x \in \chi} \min_{y \in \Delta_m} \sum_{i=1}^m y_i \|x - x^i\|^2.$$
It therefore suffices to find a point $x$ such that $g(x) \ge g(x^*) - \varepsilon$, because such an $x$ is necessarily an ε-approximate solution to (3).

However, $\phi(x,y)$ is not strongly concave with respect to $y$. We therefore define the regularized saddle point problem
$$\max_{x \in \chi} \min_{y \in \Delta_m} \big\{ \phi_\lambda(x,y) := 2 y^T A^T x - 2 y^T b + \gamma \|x\|^2 - \lambda \|y\| \big\}, \quad (4)$$
so that $\phi_\lambda(x,y)$ is $\lambda$-strongly concave in $y$ and $\gamma$-strongly convex in $x$.

Denote the optimal solution of (4) by $(x', y')$. The relation between the optimal values of (3) and (4) is characterized in the following lemma.

Lemma 1. $g(x^*) - g(x') \le \varepsilon/2$ if $\lambda \le \varepsilon/2$.

Proof. Denote $\tilde y = \arg\min_{y \in \Delta_m} \phi(x', y)$. Using $\phi(x,y) = \phi_\lambda(x,y) + \lambda\|y\|$, the saddle point property of $(x', y')$ for (4), and the fact that $y^*$ minimizes $\phi(x^*, \cdot)$ over $\Delta_m$, we have
$$g(x') = \phi(x', \tilde y) = \phi_\lambda(x', \tilde y) + \lambda\|\tilde y\| \ge \phi_\lambda(x', y') + \lambda\|\tilde y\| \ge \phi_\lambda(x^*, y') + \lambda\|\tilde y\| = \phi(x^*, y') - \lambda\|y'\| + \lambda\|\tilde y\| \ge \phi(x^*, y^*) - \lambda\|y'\| + \lambda\|\tilde y\| = g(x^*) - \lambda\|y'\| + \lambda\|\tilde y\| \ge g(x^*) - \lambda\|y'\|.$$

Since $\|y\| \le 1$ for every $y \in \Delta_m$, with the maximum value 1 attained at vertices such as $y_1 = 1$, $y_2 = \dots = y_m = 0$, we then have $g(x^*) - g(x') \le \lambda \|y'\| \le \lambda \le \varepsilon/2$. $\square$

3. Adaptive Custom Proximal Point Algorithm

In this section, we adopt an adaptive custom proximal point (ACPP) algorithm to solve (4); its subproblems are quadratic and can therefore be solved approximately in a short time. From the optimality conditions of the problem and the convexity of the related functions, solving (4) amounts to solving the following variational inequality: find $(x', y')$ such that, for all $(x, y)$,
$$\lambda\|y\| - \lambda\|y'\| + \begin{pmatrix} x - x' \\ y - y' \end{pmatrix}^T \begin{pmatrix} 2\gamma x' + 2Ay' \\ -(2A^T x' - 2b) \end{pmatrix} \ge 0.$$

Denoting
$$u = \begin{pmatrix} x \\ y \end{pmatrix}, \quad u' = \begin{pmatrix} x' \\ y' \end{pmatrix}, \quad F(u) = \begin{pmatrix} 2\gamma x + 2Ay \\ -(2A^T x - 2b) \end{pmatrix}, \quad \Omega = \mathbb{R}^n \times \mathbb{R}^m,$$
the variational inequality can be written compactly as: find $u' \in \Omega$ such that
$$\lambda\|y\| - \lambda\|y'\| + (u - u')^T F(u') \ge 0 \quad \text{for all } u = (x, y) \in \Omega. \quad (5)$$

It is easy to verify that $F$ is monotone, so (5) is a monotone variational inequality and its solution set is nonempty.

We denote
$$M = \begin{pmatrix} tI_n & A \\ \theta A^T & sI_m \end{pmatrix} = \begin{pmatrix} tI_n + \frac{1-\theta}{s}AA^T & A \\ A^T & sI_m \end{pmatrix} \begin{pmatrix} I_n & 0 \\ \frac{\theta-1}{s}A^T & I_m \end{pmatrix} = H\tilde M, \quad \theta \in [-1, 1]. \quad (6)$$
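The factorization $M = H\tilde M$ in (6) can be verified directly; the following MATLAB check (our own) confirms it for a random $A$ and an admissible $\theta$.

% Verify M = H*Mtilde in (6).
n = 4; m = 6; t = 5; s = 3; theta = 0.5;
A = randn(n, m);
M  = [t*eye(n), A; theta*A', s*eye(m)];
H  = [t*eye(n) + (1-theta)/s*(A*A'), A; A', s*eye(m)];
Mt = [eye(n), zeros(n, m); (theta-1)/s*A', eye(m)];
disp(norm(H*Mt - M, 'fro'))             % ~1e-15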

We give the details of the ACPP method in Algorithm 1; the prediction step (7) and the correction step (8) referred to in the convergence analysis below are defined there.

Algorithm 1. A1: ACPP algorithm for the unweighted max-min dispersion model.
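As a complement to Algorithm 1, the following MATLAB sketch indicates one way its prediction-correction structure can be realized in the spirit of the customized PPA framework of [8] [9]. This is our own illustrative sketch under explicit assumptions, not necessarily the paper's exact steps: the nonsmooth term $\lambda\|y\|$ is dropped (with $\lambda = \varepsilon/2$ it is tiny), the prediction step (7) is taken to solve $F(\tilde u^k) + M(\tilde u^k - u^k) = 0$ exactly over $\Omega = \mathbb{R}^n \times \mathbb{R}^m$, and the correction step (8) is taken to be $u^{k+1} = u^k - \gamma \alpha_k \tilde M(u^k - \tilde u^k)$ with the adaptive step size $\alpha_k$ from the proof of Theorem 1.

% Illustrative ACPP-type prediction-correction iteration for the affine VI (5).
% Assumptions: lambda*||y|| dropped; exact prediction; relaxation gr = 1.
n = 5; m = 8;
X = 2*rand(n, m) - 1;                   % given points x^i as columns
A = -X; b = -0.5*sum(X.^2, 1)';         % as in Section 2
g = 1;                                  % gamma = sum(y_i) = 1 on the simplex
G = [2*g*eye(n), 2*A; -2*A', zeros(m)]; % F(u) = G*u + q, monotone
q = [zeros(n, 1); 2*b];
t = 5; s = 5; theta = 0.5;              % proximal parameters; (13) holds here
M  = [t*eye(n), A; theta*A', s*eye(m)];
H  = [t*eye(n) + (1-theta)/s*(A*A'), A; A', s*eye(m)];
Mt = [eye(n), zeros(n, m); (theta-1)/s*A', eye(m)];
gr = 1;                                 % relaxation factor (gamma in Lemma 4)
u = [zeros(n, 1); ones(m, 1)/m];        % initial point u^0 = (x^0, y^0)
for k = 1:500
    ut = (G + M) \ (M*u - q);           % prediction: F(ut) + M*(ut - u) = 0
    d = u - ut;
    if norm(d) < 1e-10, break; end
    c0 = (d'*M*d)/(d'*d);               % estimate of c0 with d'*M*d >= c0*||d||^2
    ak = c0*(d'*d)/((Mt*d)'*H*(Mt*d));  % adaptive step size alpha_k
    u = u - gr*ak*(Mt*d);               % correction step
end
x = u(1:n); y = u(n+1:end);             % approximate saddle point of (4)

By Lemma 4 below, each such correction is an $H$-norm contraction, which is the mechanism behind Theorem 1.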


Convergence Analysis

We present a convergence theorem for A1 in this section. In order to prove the theorem, we first collect some lemmas. The following Lemmas 2-4 are standard results from [8] [9] [10].

Lemma 2. Let $H$ and $M$ be defined in (6) with $s > 0$ and $t > 0$. If
$$ts > \frac{1}{4}(1+\theta)^2 \lambda_{\max}(A^T A), \quad (13)$$
then $H$ and $\frac{1}{2}(M + M^T)$ are positive definite matrices.
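Lemma 2 can be illustrated numerically; in the following sketch (our own), $t$ and $s$ are chosen so that (13) holds with a small margin, and both claimed matrices indeed come out positive definite.

% Numerical illustration of Lemma 2.
n = 4; m = 6; theta = 0.5;
A = randn(n, m);
lam = max(eig(A'*A));                   % lambda_max(A'*A) = lambda_max(A*A')
s = 2; t = 1.1*(1+theta)^2/4*lam/s;     % so that t*s > (1/4)(1+theta)^2*lam
M = [t*eye(n), A; theta*A', s*eye(m)];
H = [t*eye(n) + (1-theta)/s*(A*A'), A; A', s*eye(m)];
disp(min(eig(H)))                       % > 0
disp(min(eig((M + M')/2)))              % > 0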

Lemma 3. Let $\tilde u^k$ be the solution of (7) and let $M$ be defined in (6). Then, for any solution $u'$ of (5), we have
$$(u^k - u')^T M (u^k - \tilde u^k) \ge (u^k - \tilde u^k)^T M (u^k - \tilde u^k).$$

Lemma 4. For $\tilde M$ and $H$ in (6), there exists a constant $c_0 > 0$ such that the sequence $\{u^k\}$ generated by (8) satisfies
$$\|u^{k+1} - u'\|_H^2 \le \|u^k - u'\|_H^2 - \gamma(2-\gamma)\alpha_k c_0 \|u^k - \tilde u^k\|^2,$$
where $\|u\|_H^2 = u^T H u$.

We can now state the convergence theorem for the ACPP algorithm.

Theorem 1. The ACPP algorithm is a contraction method for the saddle point problem (4), and the sequence $\{u^k = (x^k, y^k)\}$ generated by the algorithm converges to a solution of (4).

Proof. Since $\frac{1}{2}(M + M^T)$ is positive definite by Lemma 2, there exists a constant $c_0 > 0$ such that $d^T M d \ge c_0 \|d\|^2$ for all $d \in \mathbb{R}^{n+m}$. When this inequality holds, $\alpha_k$ is bounded below:
$$\alpha_k = \frac{c_0 \|u^k - \tilde u^k\|^2}{\|\tilde M (u^k - \tilde u^k)\|_H^2} \ge \frac{c_0}{\|\tilde M^T M\|},$$
since $\|\tilde M d\|_H^2 = d^T \tilde M^T H \tilde M d = d^T \tilde M^T M d \le \|\tilde M^T M\| \|d\|^2$.

On the basis of Lemma 4, we have
$$\|u^{k+1} - u'\|_H^2 \le \|u^k - u'\|_H^2 - \gamma(2-\gamma)\frac{c_0^2}{\|\tilde M^T M\|} \|u^k - \tilde u^k\|^2.$$

Hence, for $\gamma = 1$ (and indeed any $\gamma \in (0,2)$), the ACPP algorithm is an $H$-norm contraction method for the saddle point problem (4), and the sequence $\{u^k\}$ converges to a solution of (4). $\square$
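The step-size bound used in the proof can also be checked numerically. The key identity is $\tilde M^T H \tilde M = \tilde M^T M$ (since $H\tilde M = M$), so that $\|\tilde M d\|_H^2 \le \lambda_{\max}(\tilde M^T M)\|d\|^2$; the following sketch (our own) confirms $\alpha_k \ge c_0/\|\tilde M^T M\|$ for a random direction $d$.

% Check alpha_k >= c0/||Mtilde'*M|| for a random direction d.
n = 4; m = 6; t = 10; s = 10; theta = 0.5;
A = randn(n, m);
M  = [t*eye(n), A; theta*A', s*eye(m)];
H  = [t*eye(n) + (1-theta)/s*(A*A'), A; A', s*eye(m)];
Mt = [eye(n), zeros(n, m); (theta-1)/s*A', eye(m)];
c0 = min(eig((M + M')/2));              % d'*M*d >= c0*||d||^2 by Lemma 2
d  = randn(n + m, 1);
ak = c0*(d'*d)/((Mt*d)'*H*(Mt*d));      % alpha_k for this d
lammax = max(eig((Mt'*M + M'*Mt)/2));   % Mt'*M = Mt'*H*Mt is symmetric
disp(ak - c0/lammax)                    % nonnegative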

4. Numerical Results

In this section, we present some simple numerical comparisons. All the numerical tests are implemented in Matlab R2014a and run on a laptop with a 2.30 GHz processor and 4 GB of RAM. We now apply Algorithm 1 to our model (4); the resulting procedure is described in detail in Algorithm 2.

We present a numerical comparison between our ACPP algorithm and the SDP-based algorithm proposed in [6] for solving $P_{ball}$ ($\chi = \{x \mid \|x\| \le 1\}$). We note that when the weights in [6] are set to $\omega_i = 1$, the two algorithms are comparable. We run numerical experiments on 24 random instances of dimension $n = 5$, where the number of input points $m$ varies from 6 to 30. The input points $x^i$ ($i = 1, \dots, m$) for $m = 6, \dots, 30$ are taken in order from the columns of an $n \times 450$ matrix. We randomly generate this matrix using the following Matlab script:

rand('state', 0); X = 2*rand(n, 450) - 1;

where rand produces pseudorandom numbers uniformly distributed between 0 and 1. We set $\varepsilon = 10^{-3}$ and $\lambda = \varepsilon/2$, and report the numerical results in Table 1.
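For completeness, the following sketch (our own; the names Xm and offset are ours) shows how the columns of X can be split into the consecutive instances $m = 6, \dots, 30$ used in Table 1.

% Split the n x 450 matrix into consecutive instances with m = 6,...,30 points.
rand('state', 0);
n = 5;
X = 2*rand(n, 450) - 1;
offset = 0;
for m = 6:30
    Xm = X(:, offset+1 : offset+m);     % the m input points of this instance
    offset = offset + m;                % 6 + 7 + ... + 30 = 450 columns in total
    % ... run the ACPP and SDP-based algorithms on Xm here ...
end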

Algorithm 2. A2: ACPP algorithm for max-min dispersion problem.

Table 1. Numerical results for n = 5, m = 6 to 30.

The column cvx_opt presents the optimal objective function values of the 20 instances of $P_{ball}$ [6]. The next two columns present the statistical results over 10 runs of the algorithm proposed in [6] and of our ACPP algorithm, respectively. The subcolumns $v_{\max}$, $v_{\min}$ and $v_{ave}$ give the best, the worst and the average objective function values found over the 10 tests, respectively. The results show that, compared with the SDP-based algorithm, our algorithm is competitive in most cases.

From the table, we can see that the objective values found by our algorithm are very close to the exact optimal values in the second column, and closer to them than those of the SDP-based algorithm.

5. Conclusion

In this paper, we reformulate the max-min dispersion problem as a saddle point problem and then adopt an adaptive custom proximal point algorithm to obtain an approximation scheme. It can be proved that the proposed algorithm produces an ε-approximate solution to the equal-weight max-min dispersion problem. Numerical results show that the proposed algorithm is efficient.

Cite this paper

Tao, S.Q. (2018) An Efficient Proximal Point Algorithm for Unweighted Max-Min Dispersion Problem. Advances in Pure Mathematics, 8, 400-407. https://doi.org/10.4236/apm.2018.84022

References

  1. Dasarathy, B. and White, L.J. (1980) A Maximin Location Problem. Operations Research, 28, 1385-1401.
     https://doi.org/10.1287/opre.28.6.1385

  2. Johnson, M.E., Moore, L.M. and Ylvisaker, D. (1990) Minimax and Maximin Distance Designs. Journal of Statistical Planning and Inference, 26, 131-148.
     https://doi.org/10.1016/0378-3758(90)90122-B

  3. Schaback, R. (1995) Multivariate Interpolation and Approximation by Translates of a Basis Function. World Scientific, Singapore, 491-514.

  4. White, D.J. (1996) A Heuristic Approach to a Weighted Maxmin Dispersion Problem. IMA Journal of Management Mathematics, 7, 219-231.
     https://doi.org/10.1093/imaman/7.3.219

  5. Haines, S., Loeppky, J., Tseng, P. and Wang, X. (2013) Convex Relaxations of the Weighted Maxmin Dispersion Problem. SIAM Journal on Optimization, 23, 2264-2294.
     https://doi.org/10.1137/120888880

  6. Wang, S. and Xia, Y. (2016) On the Ball-Constrained Weighted Maximin Dispersion Problem. SIAM Journal on Optimization, 26, 1565-1588.
     https://doi.org/10.1137/15M1047167

  7. Ravi, S.S., Rosenkrantz, D.J. and Tayi, G.K. (1994) Heuristic and Special Case Algorithms for Dispersion Problems. Operations Research, 42, 299-310.
     https://doi.org/10.1287/opre.42.2.299

  8. He, B. and Yuan, X. (2010) A Contraction Method with Implementable Proximal Regularization for Linearly Constrained Convex Programming. Optimization Online, 1-14.

  9. He, B., Fu, X. and Jiang, Z. (2009) A Proximal Point Algorithm Using a Linear Proximal Term. Journal of Optimization Theory and Applications, 141, 209-239.
     https://doi.org/10.1007/s10957-008-9493-0

  10. Martinet, B. (1970) Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle, 4, 154-159.

NOTES

¹We say that $g(x)$ is an ε-approximation of $g(x^*)$ if $g(x) \ge g(x^*) - \varepsilon$.