Advances in Pure Mathematics
Vol.09 No.04(2019), Article ID:91718,7 pages
10.4236/apm.2019.94015

An Efficient Random Algorithm for Box Constrained Weighted Maximin Dispersion Problem

Jinjin Huang

Department of Mathematics, Shanghai University, Shanghai, China

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 13, 2019; Accepted: April 8, 2019; Published: April 11, 2019

ABSTRACT

The box-constrained weighted maximin dispersion problem is to find a point in an n-dimensional box that maximizes the minimum weighted Euclidean distance from m given points. In this paper, we first reformulate the maximin dispersion problem as a non-convex quadratically constrained quadratic programming (QCQP) problem. We then adopt the successive convex approximation (SCA) algorithm to solve it. Numerical results show that the proposed algorithm is efficient.

Keywords:

Maximin Dispersion Problem, Successive Convex Approximation Algorithm, Quadratically Constrained Quadratic Programming (QCQP)

1. Introduction

The weighted maximin problem with box constraints is modeled as follows:

$$\max_{x \in \chi} \left\{ f(x) := \min_{i=1,\dots,m} \omega_i \left\| x - x^i \right\|^2 \right\} \quad (1)$$

where $\chi = \{ y \in \mathbb{R}^n \mid (y_1^2, \dots, y_n^2, 1)^T \in \kappa \}$ with $\kappa$ a convex cone; $x^1, \dots, x^m \in \mathbb{R}^n$ are $m$ given points (e.g., $m$ facility locations); $\omega_i > 0$ for $i = 1, \dots, m$; and $\| \cdot \|$ denotes the Euclidean norm. In our numerical experiments, every $\omega_i$ equals 1. The goal is to find a point in the closed set $\chi = [-1,1]^n$ that maximizes the minimum weighted Euclidean distance from the given points $x^1, \dots, x^m \in \mathbb{R}^n$. The weighted maximin problem is widely used in spatial management, facility location, and pattern recognition.
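For concreteness, the objective of (1) can be evaluated in a few lines. The following is a minimal MATLAB sketch with all $\omega_i = 1$, as in our experiments (the function name is ours); Xpts is an n-by-m matrix whose columns are the given points:

function val = maximin_obj(x, Xpts)
% Evaluate f(x) = min_i ||x - x^i||^2 for a candidate x in [-1,1]^n.
d = bsxfun(@minus, x, Xpts);   % n-by-m, column i equals x - x^i
val = min(sum(d.^2, 1));       % squared Euclidean distances, then the minimum
end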

The weighted maximin dispersion problem with box constraints is NP-hard in general [1]. In low-dimensional cases ($n \le 3$ and $\chi$ a polyhedral set), it is solvable in polynomial time [2] [3]. For $n > 4$, heuristic algorithms [2] [4] are used to solve the problem.

In [6], the authors seek an approximate solution through convex relaxation and prove an approximation bound of $\frac{1 - O(\sqrt{\ln(m)\,\gamma^*})}{2}$, where $\gamma^*$ depends on $\chi$; when $\chi = \{-1,1\}^n$ or $\chi = [-1,1]^n$, $\gamma^* = O(1/n)$. In [1], a linear programming relaxation is used to derive an approximation bound of $\frac{1 - O(\sqrt{\ln(m)/n})}{2}$ for the ball-constrained problem. In [5], the authors consider finding a point in the unit $n$-dimensional $\ell_p$-ball ($p \ge 2$) that maximizes the minimum weighted Euclidean distance from $m$ given points, and show that the SDP-relaxation-based approximation algorithm of [6] provides the first theoretical approximation bound of $\frac{1 - O(\sqrt{\ln(m)/n})}{2}$.

In this paper, we first model the maximin dispersion problem as a quadratically constrained quadratic program (QCQP), noting that (1) is a non-smooth, non-convex optimization problem, because the pointwise minimum of convex quadratics is neither differentiable nor concave. We solve this problem within a general approximation framework, successive convex approximation (SCA), which can be summarized as follows: each quadratic component of (1) is locally linearized at the current iterate to construct a convex approximation, yielding a convex subproblem; the solution of each subproblem is then used as the point about which the convex surrogate is constructed in the next iteration, and these steps are repeated. The random block coordinate descent method (RBCDM) is adopted to solve each subproblem.

The remainder of the paper is organized as follows. In Section 2, we give technical preliminaries. In Section 3, we first reformulate the maximin dispersion problem as a QCQP problem; we then describe the overall SCA approach and use the proposed method (RBCDM) to solve each subproblem. In Section 4, we present numerical results. Conclusions are drawn in Section 5.

2. Technical Preliminaries

The following concepts and definitions are adopted throughout the paper.

• We use $\mathbb{R}^n$ to denote the space of $n$-dimensional real vectors; for $x \in \mathbb{R}^n$, we denote the $i$th component of $x$ by $x_i$. Thus each $x \in \mathbb{R}^n$ is a column vector

$$x = ( x_1, x_2, \dots, x_n )^T$$

• Let $y \in \mathbb{R}^n$ and let $\chi = [-1,1]^n$; then the distance of the point $y$ from the set $\chi$ is defined as

$$d(y, \chi) = \inf_{x \in \chi} \| x - y \|_2$$
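For the box $\chi = [-1,1]^n$ this infimum is attained at the componentwise projection of $y$ onto the box, so no iterative optimization is needed. A short MATLAB sketch for a given vector y:

proj = min(max(y, -1), 1);   % componentwise clipping of y onto [-1, 1]^n
dist = norm(y - proj);       % d(y, chi)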

3. The Proposed Algorithm

We now reformulate (1) into the following equivalent form:

$$\max_{x \in \chi} \min_{i=1,\dots,m} \omega_i \| x - x^i \|^2 \;\Longleftrightarrow\; \min_{x \in \chi} \max_{i=1,\dots,m} -\omega_i \| x - x^i \|^2 , \quad (2)$$

and we finally obtain

$$\min_{x \in \chi} \max_{i=1,\dots,m} -\omega_i \| x - x^i \|^2 , \quad (3)$$

and we will work with this formulation; note that the problem still remains non-convex.

Our Algorithm

We first outline the ideas behind our algorithm. First, we construct a convex surrogate of the non-convex objective (3) by locally linearizing each quadratic component of (3) about the iterate $x^{(r)}$; this yields an $n$-dimensional convex subproblem. Second, we adopt the random block coordinate descent method (RBCDM) to reduce the $n$-dimensional convex subproblem to one-dimensional convex subproblems, lowering the computational complexity; here the optimization variables are decomposed into $n$ independent blocks. At each iteration of this method, one randomly chosen component of the variable is optimized while the remaining components are held fixed; once every component has been updated we call this a round, and rounds are repeated until the desired accuracy is reached. Such a block structure leads to low-complexity algorithms. Finally, each one-dimensional subproblem is solved exactly; a sketch of the overall loop follows.
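The following is a minimal MATLAB sketch of one natural reading of this scheme, assuming all $\omega_i = 1$ and $\chi = [-1,1]^n$; the helper names coord_lines and solve_1d_subproblem are ours, and both helpers are sketched later in this section. In this sketch the surrogate is refreshed at every coordinate update:

function x = sca_rbcdm(Xpts, x0, nRounds)
% Sketch of the SCA + RBCDM scheme (omega_i = 1, chi = [-1,1]^n).
% Xpts is n-by-m with the given points x^1,...,x^m as columns;
% x0 is an n-by-1 initial point in [-1,1]^n.
n = size(Xpts, 1);
x = x0;
for r = 1:nRounds
    for j = randperm(n)                    % one round: each coordinate once, in random order
        [a, b] = coord_lines(x, Xpts, j);  % the m lines a_i*x_j + b_i of subproblem (5)
        x(j) = solve_1d_subproblem(a, b);  % exact minimization over [-1, 1]
    end
end
end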

Define $f(x) := \max_{i=1,\dots,m} u_i(x)$, where $u_i(x) := -\omega_i \| x - x^i \|^2$, $i = 1, \dots, m$. Since each $u_i(x)$ is concave, locally linearizing $u_i(x)$ about the current iterate $x = x^{(r)}$ yields a global upper bound on the original objective $f(x)$. At the point $x = x^{(r)}$, we construct a convex approximation of $f(x)$ as follows:

$$u_i(x) \le u_i(x^{(r)}) + \nabla u_i(x^{(r)})^T (x - x^{(r)}) = 2 \omega_i (x^i - x^{(r)})^T x + \omega_i \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right) = (c_i^{(r)})^T x + d_i^{(r)} ,$$

where $c_i^{(r)} = 2 \omega_i (x^i - x^{(r)})$ and $d_i^{(r)} = \omega_i \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right)$, for $i = 1, \dots, m$.

Define $v(x, x^{(r)}) := \max_{i=1,\dots,m} (c_i^{(r)})^T x + d_i^{(r)}$. The piecewise linear function $v(x, x^{(r)})$ is an upper bound on the original function $f(x)$ and is tight at $x = x^{(r)}$ [7]. We replace $f(x)$ with this piecewise linear approximation about $x^{(r)}$ to obtain the non-smooth, convex subproblem

$$\min_{x \in \chi} \max_{i=1,\dots,m} (c_i^{(r)})^T x + d_i^{(r)} \quad (4)$$
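Before reducing the dimension, note that the coefficients $c_i^{(r)}$, $d_i^{(r)}$ and the surrogate $v(x, x^{(r)})$ in (4) can be assembled in a few lines. A minimal MATLAB sketch, assuming the current iterate xr (n-by-1), the points Xpts (n-by-m) and the weights w (1-by-m) are given; the variable names are ours:

C = 2 * bsxfun(@times, w, bsxfun(@minus, Xpts, xr));  % n-by-m, column i is c_i = 2*w(i)*(x^i - xr)
d = w .* (xr' * xr - sum(Xpts.^2, 1));                % 1-by-m, entry i is d_i
v = @(x) max(C' * x + d');                            % v(x, xr) = max_i c_i'*x + d_i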

Subproblem (4) is still computationally expensive in high dimensions, so we reduce the high-dimensional problem to one-dimensional problems to lower the complexity.

The concrete steps are as follows. At the current iterate $x^{(r)}$, we update one randomly selected component $x_j$ of $x$ and keep the other components unchanged; let $x = (x_1^{(r)}, \dots, x_{j-1}^{(r)}, x_j, x_{j+1}^{(r)}, \dots, x_n^{(r)})^T$, so that (taking $\omega_i = 1$, as in our experiments) we have

$$\begin{aligned}
v_j(x_j, x^{(r)}) &= \max_{i=1,\dots,m} 2 (x^i - x^{(r)})^T x + \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right) \\
&= \max_{i=1,\dots,m} 2 \left( x_1^i - x_1^{(r)}, \dots, x_j^i - x_j^{(r)}, \dots, x_n^i - x_n^{(r)} \right) \left( x_1^{(r)}, \dots, x_{j-1}^{(r)}, x_j, x_{j+1}^{(r)}, \dots, x_n^{(r)} \right)^T + \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right) \\
&= \max_{i=1,\dots,m} 2 (x_j^i - x_j^{(r)}) x_j + \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right) + \sum_{l=1}^{n} 2 (x_l^i - x_l^{(r)}) x_l^{(r)} - 2 (x_j^i - x_j^{(r)}) x_j^{(r)} \\
&= \max_{i=1,\dots,m} a_i^{(r)} x_j + b_i^{(r)}
\end{aligned}$$

where $a_i^{(r)} = 2 ( x_j^i - x_j^{(r)} )$ and $b_i^{(r)} = \left( (x^{(r)})^T x^{(r)} - (x^i)^T x^i \right) + \sum_{l=1}^{n} 2 ( x_l^i - x_l^{(r)} ) x_l^{(r)} - 2 ( x_j^i - x_j^{(r)} ) x_j^{(r)}$.

We thus obtain the one-dimensional convex subproblem

$$\min_{x_j \in \chi^*} \max_{i=1,\dots,m} a_i^{(r)} x_j + b_i^{(r)} , \quad (5)$$

where $\chi^* = [-1, 1]$.
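In code, the line coefficients for coordinate $j$ can be assembled directly from the current iterate. A minimal MATLAB sketch with all $\omega_i = 1$ (the helper name is ours); each line passes through $(x_j^{(r)}, -\| x^{(r)} - x^i \|^2)$, which keeps the surrogate tight at the current point:

function [a, b] = coord_lines(x, Xpts, j)
% Coefficients a_i, b_i of the m lines in (5) at the current iterate x
% (all omega_i = 1). Xpts is n-by-m with the given points as columns.
d = bsxfun(@minus, x, Xpts);   % n-by-m, column i equals x - x^i
a = -2 * d(j, :);              % a_i = 2*(x_j^i - x_j)
b = -sum(d.^2, 1) - a * x(j);  % so that a_i*x(j) + b_i = -||x - x^i||^2
end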

To solve the one-dimensional piecewise linear problem (5), we first sort the slopes $a_i^{(r)}$ of the $m$ lines in ascending order, i.e., $a_1^{(r)} \le a_2^{(r)} \le \dots \le a_m^{(r)}$. For convenience, we denote these $m$ lines by $y_i = a_i x + b_i$ ($i = 1, 2, \dots, m$), where $x \in [-1, 1]$. Table 1 gives the algorithmic framework for solving the one-dimensional subproblem.
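Since $g(x) = \max_i a_i x + b_i$ is convex and piecewise linear, (5) can also be solved by bisection on the sign of a subgradient. The sketch below is our own illustrative substitute for the sorting-based framework of Table 1, not a reproduction of it; it assumes row vectors a and b as produced above:

function [xstar, gstar] = solve_1d_subproblem(a, b)
% Minimize g(x) = max_i (a(i)*x + b(i)) over x in [-1, 1] by bisecting
% on the slope of a line that is active at the midpoint.
lo = -1; hi = 1;
for it = 1:60                      % final interval width about 2e-18
    mid = (lo + hi) / 2;
    [~, k] = max(a * mid + b);     % index of a line active at mid
    if a(k) > 0
        hi = mid;                  % g increases right of mid: minimizer in [lo, mid]
    elseif a(k) < 0
        lo = mid;                  % g decreases left of mid: minimizer in [mid, hi]
    else
        lo = mid; hi = mid;        % flat active piece: mid is optimal
    end
end
xstar = (lo + hi) / 2;
gstar = max(a * xstar + b);
end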

4. Numerical Results

To benchmark the performance of the proposed algorithm, we carry out some simple numerical comparisons. We run numerical experiments on 4 random instances, with dimension n = 100, 500, 1000, 2000, respectively; the corresponding m is chosen smaller than, equal to, and larger than n. All weights $\omega_1, \dots, \omega_m$ are equal to 1. All numerical tests are implemented in MATLAB R2016a and run on a laptop with a 2.50 GHz processor and 4 GB RAM.

All the input points $x^i$ form, in order, the columns of an $n \times 45000$ matrix. We randomly generate this matrix using the following MATLAB script:

Table 1. Algorithmic frameworks of subproblem.

Table 2. Numerical results.

rand('state',0); X = 4*rand(n,450)-2;

We report the numerical results in Table 2. The column $v(CR)$ presents the optimal objective function values of the convex relaxation [1] for the 26 instances. In [1], (1) is first reformulated as the following equivalent smooth optimization problem:

$$\max_{x, \zeta} \; \zeta \quad \text{s.t.} \quad \omega_i \left( \| x \|^2 - 2 (x^i)^T x + \| x^i \|^2 \right) \ge \zeta , \; i = 1, \dots, m , \quad x \in \chi ,$$

which produces the following convex relaxation (CR) when $\chi = [-1,1]^n$ (the term $\| x \|^2$ is replaced by its upper bound $n$ over the box):

$$\max_{x, \zeta} \; \zeta \quad \text{s.t.} \quad \omega_i \left( n - 2 (x^i)^T x + \| x^i \|^2 \right) \ge \zeta , \; i = 1, \dots, m , \quad x \in \chi .$$

We solve it with the CVX solver [8].
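For reference, the following is our own illustrative transcription of (CR) in CVX, not code from the paper; it assumes CVX is installed and that Xpts (n-by-m), w (1-by-m), n and m are defined as above:

cvx_begin
    variables x(n) zeta
    maximize( zeta )
    subject to
        for i = 1:m
            % omega_i * ( n - 2*(x^i)'*x + ||x^i||^2 ) >= zeta
            w(i) * (n - 2 * Xpts(:,i)' * x + Xpts(:,i)' * Xpts(:,i)) >= zeta;
        end
        x <= 1;
        x >= -1;
cvx_end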

The next columns present statistics over 1000 runs of the general algorithm proposed in [1]; the subcolumns "max", "min", "ave" and "time 1" give the best, the worst, and the average objective function values and the running time over the 1000 tests, respectively. The last columns report the results of our algorithm, for which we choose the zero vector as the initial point. Finally, we apply a rounding step (i.e., if $x_h^{(0)} \ge 0$, then $x_h^{(0)} = 1$; otherwise $x_h^{(0)} = -1$, for $h = 1, \dots, n$) to the solution $x^{(0)}$ obtained by the iteration. The subcolumns "f(x1)", "f(x2)" and "time 2" report the numerical results without rounding, with rounding, and the running time of our algorithm, respectively. The results show that "f(x2)" performs best. Table 2 shows that the quality of the solutions returned by our algorithm is generally higher than that of the solutions obtained by the general algorithm in [1].
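The rounding step has a one-line implementation; a short sketch, with x1 denoting the unrounded solution and x2 the rounded one (variable names ours):

x2 = ones(size(x1));   % components with x1(h) >= 0 map to 1
x2(x1 < 0) = -1;       % components with x1(h) < 0 map to -1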

5. Conclusion

In this paper, we reformulate the maximin dispersion problem as a QCQP problem and approximate the original non-convex problem by a sequence of convex problems. We then adopt the random block coordinate descent method (RBCDM) to solve each subproblem. Numerical results show that the proposed algorithm is efficient.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Huang, J.J. (2019) An Efficient Random Algorithm for Box Constrained Weighted Maximin Dispersion Problem. Advances in Pure Mathematics, 9, 330-336. https://doi.org/10.4236/apm.2019.94015

References

1. Wang, S. and Xia, Y. (2016) On the Ball-Constrained Weighted Maximin Dispersion Problem. SIAM Journal on Optimization, 26, 1565-1588.

2. White, D.J. (1996) A Heuristic Approach to a Weighted Maxmin Dispersion Problem. IMA Journal of Management Mathematics, 7, 219-231. https://doi.org/10.1093/imaman/7.3.219

3. Ravi, S.S., Rosenkrantz, D.J. and Tayi, G.K. (1994) Heuristic and Special Case Algorithms for Dispersion Problems. Operations Research, 42, 299-310. https://doi.org/10.1287/opre.42.2.299

4. Dasarathy, B. and White, L.J. (1980) A Maximin Location Problem. Operations Research, 28, 1385-1401. https://doi.org/10.1287/opre.28.6.1385

5. Wu, Z.P., Xia, Y. and Wang, S. (2017) Approximating the Weighted Maximin Dispersion Problem over an l_p-Ball: SDP Relaxation Is Misleading. Optimization Letters, 12, 875-883.

6. Haines, S., Loeppky, J., Tseng, P. and Wang, X. (2013) Convex Relaxations of the Weighted Maxmin Dispersion Problem. SIAM Journal on Optimization, 23, 2264-2294. https://doi.org/10.1137/120888880

7. Konar, A. and Sidiropoulos, N.D. (2017) Fast Approximation Algorithms for a Class of Nonconvex QCQP Problems Using First-Order Methods. IEEE Transactions on Signal Processing, 65, 3494-3509.

8. Grant, M. and Boyd, S. (2010) CVX User's Guide: For CVX Version 1.21. User's Guide, 24-75.